diff --git a/doc/source/index.rst b/doc/source/index.rst index 1de4c42eac3..ade6e410079 100644 --- a/doc/source/index.rst +++ b/doc/source/index.rst @@ -39,6 +39,14 @@ The `Neutron Development wiki`_ is also a good resource for new contributors. Enjoy! +Installation Guide +------------------ + +.. toctree:: + :maxdepth: 2 + + Installation Guide + Networking Guide ---------------- diff --git a/doc/source/install/common/get-started-networking.rst b/doc/source/install/common/get-started-networking.rst new file mode 100644 index 00000000000..61e0767ec36 --- /dev/null +++ b/doc/source/install/common/get-started-networking.rst @@ -0,0 +1,33 @@ +=========================== +Networking service overview +=========================== + +OpenStack Networking (neutron) allows you to create and attach interface +devices managed by other OpenStack services to networks. Plug-ins can be +implemented to accommodate different networking equipment and software, +providing flexibility to OpenStack architecture and deployment. + +It includes the following components: + +neutron-server + Accepts and routes API requests to the appropriate OpenStack + Networking plug-in for action. + +OpenStack Networking plug-ins and agents + Plug and unplug ports, create networks or subnets, and provide + IP addressing. These plug-ins and agents differ depending on the + vendor and technologies used in the particular cloud. OpenStack + Networking ships with plug-ins and agents for Cisco virtual and + physical switches, NEC OpenFlow products, Open vSwitch, Linux + bridging, and the VMware NSX product. + + The common agents are L3 (layer 3), DHCP (dynamic host IP + addressing), and a plug-in agent. + +Messaging queue + Used by most OpenStack Networking installations to route information + between the neutron-server and various agents. Also acts as a database + to store networking state for particular plug-ins. + +OpenStack Networking mainly interacts with OpenStack Compute to provide +networks and connectivity for its instances. diff --git a/doc/source/install/compute-install-obs.rst b/doc/source/install/compute-install-obs.rst new file mode 100644 index 00000000000..5ed39b3252b --- /dev/null +++ b/doc/source/install/compute-install-obs.rst @@ -0,0 +1,160 @@ +Install and configure compute node +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The compute node handles connectivity and security groups for instances. + + + + +Install the components +---------------------- + +.. code-block:: console + + # zypper install --no-recommends \ + openstack-neutron-linuxbridge-agent bridge-utils + +.. end + + +Configure the common component +------------------------------ + +The Networking common component configuration includes the +authentication mechanism, message queue, and plug-in. + +.. include:: shared/note_configuration_vary_by_distribution.rst + +* Edit the ``/etc/neutron/neutron.conf`` file and complete the following + actions: + + * In the ``[database]`` section, comment out any ``connection`` options + because compute nodes do not directly access the database. + + * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` + message queue access: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... + transport_url = rabbit://openstack:RABBIT_PASS@controller + + .. end + + Replace ``RABBIT_PASS`` with the password you chose for the ``openstack`` + account in RabbitMQ. + + * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure + Identity service access: + + .. path /etc/neutron/neutron.conf + .. 
code-block:: ini + + [DEFAULT] + # ... + auth_strategy = keystone + + [keystone_authtoken] + # ... + auth_uri = http://controller:5000 + auth_url = http://controller:35357 + memcached_servers = controller:11211 + auth_type = password + project_domain_name = default + user_domain_name = default + project_name = service + username = neutron + password = NEUTRON_PASS + + .. end + + Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` + user in the Identity service. + + .. note:: + + Comment out or remove any other options in the + ``[keystone_authtoken]`` section. + + + +Configure networking options +---------------------------- + +Choose the same networking option that you chose for the controller node to +configure services specific to it. Afterwards, return here and proceed to +:ref:`neutron-compute-compute-obs`. + +.. toctree:: + :maxdepth: 1 + + compute-install-option1-obs.rst + compute-install-option2-obs.rst + +.. _neutron-compute-compute-obs: + +Configure the Compute service to use the Networking service +----------------------------------------------------------- + +* Edit the ``/etc/nova/nova.conf`` file and complete the following actions: + + * In the ``[neutron]`` section, configure access parameters: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [neutron] + # ... + url = http://controller:9696 + auth_url = http://controller:35357 + auth_type = password + project_domain_name = default + user_domain_name = default + region_name = RegionOne + project_name = service + username = neutron + password = NEUTRON_PASS + + .. end + + Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` + user in the Identity service. + +Finalize installation +--------------------- + + + +#. The Networking service initialization scripts expect the variable + ``NEUTRON_PLUGIN_CONF`` in the ``/etc/sysconfig/neutron`` file to + reference the ML2 plug-in configuration file. Ensure that the + ``/etc/sysconfig/neutron`` file contains the following: + + .. path /etc/sysconfig/neutron + .. code-block:: ini + + NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/ml2/ml2_conf.ini" + + .. end + +#. Restart the Compute service: + + .. code-block:: console + + # systemctl restart openstack-nova-compute.service + + .. end + +#. Start the Linux Bridge agent and configure it to start when the + system boots: + + .. code-block:: console + + # systemctl enable openstack-neutron-linuxbridge-agent.service + # systemctl start openstack-neutron-linuxbridge-agent.service + + .. end + + diff --git a/doc/source/install/compute-install-option1-obs.rst b/doc/source/install/compute-install-option1-obs.rst new file mode 100644 index 00000000000..b938cf2aedb --- /dev/null +++ b/doc/source/install/compute-install-option1-obs.rst @@ -0,0 +1,53 @@ +Networking Option 1: Provider networks +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Configure the Networking components on a *compute* node. + +Configure the Linux bridge agent +-------------------------------- + +The Linux bridge agent builds layer-2 (bridging and switching) virtual +networking infrastructure for instances and handles security groups. + +* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and + complete the following actions: + + * In the ``[linux_bridge]`` section, map the provider virtual network to the + provider physical network interface: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [linux_bridge] + physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME + + .. 
end + + Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying + provider physical network interface. See :doc:`environment-networking-obs` + for more information. + + * In the ``[vxlan]`` section, disable VXLAN overlay networks: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [vxlan] + enable_vxlan = false + + .. end + + * In the ``[securitygroup]`` section, enable security groups and + configure the Linux bridge iptables firewall driver: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [securitygroup] + # ... + enable_security_group = true + firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver + + .. end + +Return to *Networking compute node configuration* diff --git a/doc/source/install/compute-install-option1-rdo.rst b/doc/source/install/compute-install-option1-rdo.rst new file mode 100644 index 00000000000..1744b224c0f --- /dev/null +++ b/doc/source/install/compute-install-option1-rdo.rst @@ -0,0 +1,53 @@ +Networking Option 1: Provider networks +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Configure the Networking components on a *compute* node. + +Configure the Linux bridge agent +-------------------------------- + +The Linux bridge agent builds layer-2 (bridging and switching) virtual +networking infrastructure for instances and handles security groups. + +* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and + complete the following actions: + + * In the ``[linux_bridge]`` section, map the provider virtual network to the + provider physical network interface: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [linux_bridge] + physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME + + .. end + + Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying + provider physical network interface. See :doc:`environment-networking-rdo` + for more information. + + * In the ``[vxlan]`` section, disable VXLAN overlay networks: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [vxlan] + enable_vxlan = false + + .. end + + * In the ``[securitygroup]`` section, enable security groups and + configure the Linux bridge iptables firewall driver: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [securitygroup] + # ... + enable_security_group = true + firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver + + .. end + +Return to *Networking compute node configuration* diff --git a/doc/source/install/compute-install-option1-ubuntu.rst b/doc/source/install/compute-install-option1-ubuntu.rst new file mode 100644 index 00000000000..cd65201ee9c --- /dev/null +++ b/doc/source/install/compute-install-option1-ubuntu.rst @@ -0,0 +1,53 @@ +Networking Option 1: Provider networks +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Configure the Networking components on a *compute* node. + +Configure the Linux bridge agent +-------------------------------- + +The Linux bridge agent builds layer-2 (bridging and switching) virtual +networking infrastructure for instances and handles security groups. + +* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and + complete the following actions: + + * In the ``[linux_bridge]`` section, map the provider virtual network to the + provider physical network interface: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. 
code-block:: ini + + [linux_bridge] + physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME + + .. end + + Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying + provider physical network interface. See :doc:`environment-networking-ubuntu` + for more information. + + * In the ``[vxlan]`` section, disable VXLAN overlay networks: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [vxlan] + enable_vxlan = false + + .. end + + * In the ``[securitygroup]`` section, enable security groups and + configure the Linux bridge iptables firewall driver: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [securitygroup] + # ... + enable_security_group = true + firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver + + .. end + +Return to *Networking compute node configuration* diff --git a/doc/source/install/compute-install-option2-obs.rst b/doc/source/install/compute-install-option2-obs.rst new file mode 100644 index 00000000000..9a0621887ee --- /dev/null +++ b/doc/source/install/compute-install-option2-obs.rst @@ -0,0 +1,64 @@ +Networking Option 2: Self-service networks +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Configure the Networking components on a *compute* node. + +Configure the Linux bridge agent +-------------------------------- + +The Linux bridge agent builds layer-2 (bridging and switching) virtual +networking infrastructure for instances and handles security groups. + +* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and + complete the following actions: + + * In the ``[linux_bridge]`` section, map the provider virtual network to the + provider physical network interface: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [linux_bridge] + physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME + + .. end + + Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying + provider physical network interface. See :doc:`environment-networking-obs` + for more information. + + * In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the + IP address of the physical network interface that handles overlay + networks, and enable layer-2 population: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [vxlan] + enable_vxlan = true + local_ip = OVERLAY_INTERFACE_IP_ADDRESS + l2_population = true + + .. end + + Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the + underlying physical network interface that handles overlay networks. The + example architecture uses the management interface to tunnel traffic to + the other nodes. Therefore, replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with + the management IP address of the compute node. See + :doc:`environment-networking-obs` for more information. + + * In the ``[securitygroup]`` section, enable security groups and + configure the Linux bridge iptables firewall driver: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [securitygroup] + # ... + enable_security_group = true + firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver + + .. end + +Return to *Networking compute node configuration*. 
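+
+For reference, the three snippets above combine into a
+``linuxbridge_agent.ini`` along the following lines. This is only a
+condensed sketch; ``PROVIDER_INTERFACE_NAME`` and
+``OVERLAY_INTERFACE_IP_ADDRESS`` are the placeholders described above and
+must be replaced with values for your environment.
+
+.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
+.. code-block:: ini
+
+   [linux_bridge]
+   physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
+
+   [vxlan]
+   enable_vxlan = true
+   local_ip = OVERLAY_INTERFACE_IP_ADDRESS
+   l2_population = true
+
+   [securitygroup]
+   enable_security_group = true
+   firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
+
+.. end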
diff --git a/doc/source/install/compute-install-option2-rdo.rst b/doc/source/install/compute-install-option2-rdo.rst new file mode 100644 index 00000000000..3ca09344d66 --- /dev/null +++ b/doc/source/install/compute-install-option2-rdo.rst @@ -0,0 +1,64 @@ +Networking Option 2: Self-service networks +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Configure the Networking components on a *compute* node. + +Configure the Linux bridge agent +-------------------------------- + +The Linux bridge agent builds layer-2 (bridging and switching) virtual +networking infrastructure for instances and handles security groups. + +* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and + complete the following actions: + + * In the ``[linux_bridge]`` section, map the provider virtual network to the + provider physical network interface: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [linux_bridge] + physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME + + .. end + + Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying + provider physical network interface. See :doc:`environment-networking-rdo` + for more information. + + * In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the + IP address of the physical network interface that handles overlay + networks, and enable layer-2 population: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [vxlan] + enable_vxlan = true + local_ip = OVERLAY_INTERFACE_IP_ADDRESS + l2_population = true + + .. end + + Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the + underlying physical network interface that handles overlay networks. The + example architecture uses the management interface to tunnel traffic to + the other nodes. Therefore, replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with + the management IP address of the compute node. See + :doc:`environment-networking-rdo` for more information. + + * In the ``[securitygroup]`` section, enable security groups and + configure the Linux bridge iptables firewall driver: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [securitygroup] + # ... + enable_security_group = true + firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver + + .. end + +Return to *Networking compute node configuration*. diff --git a/doc/source/install/compute-install-option2-ubuntu.rst b/doc/source/install/compute-install-option2-ubuntu.rst new file mode 100644 index 00000000000..6794d0357ce --- /dev/null +++ b/doc/source/install/compute-install-option2-ubuntu.rst @@ -0,0 +1,64 @@ +Networking Option 2: Self-service networks +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Configure the Networking components on a *compute* node. + +Configure the Linux bridge agent +-------------------------------- + +The Linux bridge agent builds layer-2 (bridging and switching) virtual +networking infrastructure for instances and handles security groups. + +* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and + complete the following actions: + + * In the ``[linux_bridge]`` section, map the provider virtual network to the + provider physical network interface: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [linux_bridge] + physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME + + .. end + + Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying + provider physical network interface. 
See :doc:`environment-networking-ubuntu` + for more information. + + * In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the + IP address of the physical network interface that handles overlay + networks, and enable layer-2 population: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [vxlan] + enable_vxlan = true + local_ip = OVERLAY_INTERFACE_IP_ADDRESS + l2_population = true + + .. end + + Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the + underlying physical network interface that handles overlay networks. The + example architecture uses the management interface to tunnel traffic to + the other nodes. Therefore, replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with + the management IP address of the compute node. See + :doc:`environment-networking-ubuntu` for more information. + + * In the ``[securitygroup]`` section, enable security groups and + configure the Linux bridge iptables firewall driver: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [securitygroup] + # ... + enable_security_group = true + firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver + + .. end + +Return to *Networking compute node configuration*. diff --git a/doc/source/install/compute-install-rdo.rst b/doc/source/install/compute-install-rdo.rst new file mode 100644 index 00000000000..1e651ed55f4 --- /dev/null +++ b/doc/source/install/compute-install-rdo.rst @@ -0,0 +1,163 @@ +Install and configure compute node +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The compute node handles connectivity and security groups for instances. + + + +Install the components +---------------------- + +.. todo: + + https://bugzilla.redhat.com/show_bug.cgi?id=1334626 + +.. code-block:: console + + # yum install openstack-neutron-linuxbridge ebtables ipset + +.. end + + + +Configure the common component +------------------------------ + +The Networking common component configuration includes the +authentication mechanism, message queue, and plug-in. + +.. include:: shared/note_configuration_vary_by_distribution.rst + +* Edit the ``/etc/neutron/neutron.conf`` file and complete the following + actions: + + * In the ``[database]`` section, comment out any ``connection`` options + because compute nodes do not directly access the database. + + * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` + message queue access: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... + transport_url = rabbit://openstack:RABBIT_PASS@controller + + .. end + + Replace ``RABBIT_PASS`` with the password you chose for the ``openstack`` + account in RabbitMQ. + + * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure + Identity service access: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... + auth_strategy = keystone + + [keystone_authtoken] + # ... + auth_uri = http://controller:5000 + auth_url = http://controller:35357 + memcached_servers = controller:11211 + auth_type = password + project_domain_name = default + user_domain_name = default + project_name = service + username = neutron + password = NEUTRON_PASS + + .. end + + Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` + user in the Identity service. + + .. note:: + + Comment out or remove any other options in the + ``[keystone_authtoken]`` section. + + +* In the ``[oslo_concurrency]`` section, configure the lock path: + + .. path /etc/neutron/neutron.conf + .. 
code-block:: ini + + [oslo_concurrency] + # ... + lock_path = /var/lib/neutron/tmp + + .. end + + + +Configure networking options +---------------------------- + +Choose the same networking option that you chose for the controller node to +configure services specific to it. Afterwards, return here and proceed to +:ref:`neutron-compute-compute-rdo`. + +.. toctree:: + :maxdepth: 1 + + compute-install-option1-rdo.rst + compute-install-option2-rdo.rst + +.. _neutron-compute-compute-rdo: + +Configure the Compute service to use the Networking service +----------------------------------------------------------- + +* Edit the ``/etc/nova/nova.conf`` file and complete the following actions: + + * In the ``[neutron]`` section, configure access parameters: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [neutron] + # ... + url = http://controller:9696 + auth_url = http://controller:35357 + auth_type = password + project_domain_name = default + user_domain_name = default + region_name = RegionOne + project_name = service + username = neutron + password = NEUTRON_PASS + + .. end + + Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` + user in the Identity service. + +Finalize installation +--------------------- + + +#. Restart the Compute service: + + .. code-block:: console + + # systemctl restart openstack-nova-compute.service + + .. end + +#. Start the Linux bridge agent and configure it to start when the + system boots: + + .. code-block:: console + + # systemctl enable neutron-linuxbridge-agent.service + # systemctl start neutron-linuxbridge-agent.service + + .. end + + + diff --git a/doc/source/install/compute-install-ubuntu.rst b/doc/source/install/compute-install-ubuntu.rst new file mode 100644 index 00000000000..b5586735136 --- /dev/null +++ b/doc/source/install/compute-install-ubuntu.rst @@ -0,0 +1,145 @@ +Install and configure compute node +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The compute node handles connectivity and security groups for instances. + + +Install the components +---------------------- + +.. code-block:: console + + # apt install neutron-linuxbridge-agent + +.. end + + + + +Configure the common component +------------------------------ + +The Networking common component configuration includes the +authentication mechanism, message queue, and plug-in. + +.. include:: shared/note_configuration_vary_by_distribution.rst + +* Edit the ``/etc/neutron/neutron.conf`` file and complete the following + actions: + + * In the ``[database]`` section, comment out any ``connection`` options + because compute nodes do not directly access the database. + + * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` + message queue access: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... + transport_url = rabbit://openstack:RABBIT_PASS@controller + + .. end + + Replace ``RABBIT_PASS`` with the password you chose for the ``openstack`` + account in RabbitMQ. + + * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure + Identity service access: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... + auth_strategy = keystone + + [keystone_authtoken] + # ... + auth_uri = http://controller:5000 + auth_url = http://controller:35357 + memcached_servers = controller:11211 + auth_type = password + project_domain_name = default + user_domain_name = default + project_name = service + username = neutron + password = NEUTRON_PASS + + .. 
end + + Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` + user in the Identity service. + + .. note:: + + Comment out or remove any other options in the + ``[keystone_authtoken]`` section. + + + +Configure networking options +---------------------------- + +Choose the same networking option that you chose for the controller node to +configure services specific to it. Afterwards, return here and proceed to +:ref:`neutron-compute-compute-ubuntu`. + +.. toctree:: + :maxdepth: 1 + + compute-install-option1-ubuntu.rst + compute-install-option2-ubuntu.rst + +.. _neutron-compute-compute-ubuntu: + +Configure the Compute service to use the Networking service +----------------------------------------------------------- + +* Edit the ``/etc/nova/nova.conf`` file and complete the following actions: + + * In the ``[neutron]`` section, configure access parameters: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [neutron] + # ... + url = http://controller:9696 + auth_url = http://controller:35357 + auth_type = password + project_domain_name = default + user_domain_name = default + region_name = RegionOne + project_name = service + username = neutron + password = NEUTRON_PASS + + .. end + + Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` + user in the Identity service. + +Finalize installation +--------------------- + + + + +#. Restart the Compute service: + + .. code-block:: console + + # service nova-compute restart + + .. end + +#. Restart the Linux bridge agent: + + .. code-block:: console + + # service neutron-linuxbridge-agent restart + + .. end + diff --git a/doc/source/install/concepts.rst b/doc/source/install/concepts.rst new file mode 100644 index 00000000000..de65c8d4d22 --- /dev/null +++ b/doc/source/install/concepts.rst @@ -0,0 +1,53 @@ +Networking (neutron) concepts +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +OpenStack Networking (neutron) manages all networking facets for the +Virtual Networking Infrastructure (VNI) and the access layer aspects +of the Physical Networking Infrastructure (PNI) in your OpenStack +environment. OpenStack Networking enables projects to create advanced +virtual network topologies which may include services such as a +firewall, a load balancer, and a virtual private network (VPN). + +Networking provides networks, subnets, and routers as object abstractions. +Each abstraction has functionality that mimics its physical counterpart: +networks contain subnets, and routers route traffic between different +subnets and networks. + +Any given Networking set up has at least one external network. Unlike +the other networks, the external network is not merely a virtually +defined network. Instead, it represents a view into a slice of the +physical, external network accessible outside the OpenStack +installation. IP addresses on the external network are accessible by +anybody physically on the outside network. + +In addition to external networks, any Networking set up has one or more +internal networks. These software-defined networks connect directly to +the VMs. Only the VMs on any given internal network, or those on subnets +connected through interfaces to a similar router, can access VMs connected +to that network directly. + +For the outside network to access VMs, and vice versa, routers between +the networks are needed. Each router has one gateway that is connected +to an external network and one or more interfaces connected to internal +networks. 
Like a physical router, subnets can access machines on other +subnets that are connected to the same router, and machines can access the +outside network through the gateway for the router. + +Additionally, you can allocate IP addresses on external networks to +ports on the internal network. Whenever something is connected to a +subnet, that connection is called a port. You can associate external +network IP addresses with ports to VMs. This way, entities on the +outside network can access VMs. + +Networking also supports *security groups*. Security groups enable +administrators to define firewall rules in groups. A VM can belong to +one or more security groups, and Networking applies the rules in those +security groups to block or unblock ports, port ranges, or traffic types +for that VM. + +Each plug-in that Networking uses has its own concepts. While not vital +to operating the VNI and OpenStack environment, understanding these +concepts can help you set up Networking. All Networking installations +use a core plug-in and a security group plug-in (or just the No-Op +security group plug-in). Additionally, Firewall-as-a-Service (FWaaS) and +Load-Balancer-as-a-Service (LBaaS) plug-ins are available. diff --git a/doc/source/install/controller-install-obs.rst b/doc/source/install/controller-install-obs.rst new file mode 100644 index 00000000000..d9f6823e081 --- /dev/null +++ b/doc/source/install/controller-install-obs.rst @@ -0,0 +1,319 @@ +Install and configure controller node +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Prerequisites +------------- + +Before you configure the OpenStack Networking (neutron) service, you +must create a database, service credentials, and API endpoints. + +#. To create the database, complete these steps: + + + +* Use the database access client to connect to the database + server as the ``root`` user: + + .. code-block:: console + + $ mysql -u root -p + + .. end + + + * Create the ``neutron`` database: + + .. code-block:: console + + MariaDB [(none)] CREATE DATABASE neutron; + + .. end + + * Grant proper access to the ``neutron`` database, replacing + ``NEUTRON_DBPASS`` with a suitable password: + + .. code-block:: console + + MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \ + IDENTIFIED BY 'NEUTRON_DBPASS'; + MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \ + IDENTIFIED BY 'NEUTRON_DBPASS'; + + .. end + + * Exit the database access client. + +#. Source the ``admin`` credentials to gain access to admin-only CLI + commands: + + .. code-block:: console + + $ . admin-openrc + + .. end + +#. To create the service credentials, complete these steps: + + * Create the ``neutron`` user: + + .. code-block:: console + + $ openstack user create --domain default --password-prompt neutron + + User Password: + Repeat User Password: + +---------------------+----------------------------------+ + | Field | Value | + +---------------------+----------------------------------+ + | domain_id | default | + | enabled | True | + | id | fdb0f541e28141719b6a43c8944bf1fb | + | name | neutron | + | options | {} | + | password_expires_at | None | + +---------------------+----------------------------------+ + + .. end + + * Add the ``admin`` role to the ``neutron`` user: + + .. code-block:: console + + $ openstack role add --project service --user neutron admin + + .. end + + .. note:: + + This command provides no output. + + * Create the ``neutron`` service entity: + + .. 
code-block:: console + + $ openstack service create --name neutron \ + --description "OpenStack Networking" network + + +-------------+----------------------------------+ + | Field | Value | + +-------------+----------------------------------+ + | description | OpenStack Networking | + | enabled | True | + | id | f71529314dab4a4d8eca427e701d209e | + | name | neutron | + | type | network | + +-------------+----------------------------------+ + + .. end + +#. Create the Networking service API endpoints: + + .. code-block:: console + + $ openstack endpoint create --region RegionOne \ + network public http://controller:9696 + + +--------------+----------------------------------+ + | Field | Value | + +--------------+----------------------------------+ + | enabled | True | + | id | 85d80a6d02fc4b7683f611d7fc1493a3 | + | interface | public | + | region | RegionOne | + | region_id | RegionOne | + | service_id | f71529314dab4a4d8eca427e701d209e | + | service_name | neutron | + | service_type | network | + | url | http://controller:9696 | + +--------------+----------------------------------+ + + $ openstack endpoint create --region RegionOne \ + network internal http://controller:9696 + + +--------------+----------------------------------+ + | Field | Value | + +--------------+----------------------------------+ + | enabled | True | + | id | 09753b537ac74422a68d2d791cf3714f | + | interface | internal | + | region | RegionOne | + | region_id | RegionOne | + | service_id | f71529314dab4a4d8eca427e701d209e | + | service_name | neutron | + | service_type | network | + | url | http://controller:9696 | + +--------------+----------------------------------+ + + $ openstack endpoint create --region RegionOne \ + network admin http://controller:9696 + + +--------------+----------------------------------+ + | Field | Value | + +--------------+----------------------------------+ + | enabled | True | + | id | 1ee14289c9374dffb5db92a5c112fc4e | + | interface | admin | + | region | RegionOne | + | region_id | RegionOne | + | service_id | f71529314dab4a4d8eca427e701d209e | + | service_name | neutron | + | service_type | network | + | url | http://controller:9696 | + +--------------+----------------------------------+ + + .. end + +Configure networking options +---------------------------- + +You can deploy the Networking service using one of two architectures +represented by options 1 and 2. + +Option 1 deploys the simplest possible architecture that only supports +attaching instances to provider (external) networks. No self-service (private) +networks, routers, or floating IP addresses. Only the ``admin`` or other +privileged user can manage provider networks. + +Option 2 augments option 1 with layer-3 services that support attaching +instances to self-service networks. The ``demo`` or other unprivileged +user can manage self-service networks including routers that provide +connectivity between self-service and provider networks. Additionally, +floating IP addresses provide connectivity to instances using self-service +networks from external networks such as the Internet. + +Self-service networks typically use overlay networks. Overlay network +protocols such as VXLAN include additional headers that increase overhead +and decrease space available for the payload or user data. Without knowledge +of the virtual network infrastructure, instances attempt to send packets +using the default Ethernet maximum transmission unit (MTU) of 1500 +bytes. 
The Networking service automatically provides the correct MTU value +to instances via DHCP. However, some cloud images do not use DHCP or ignore +the DHCP MTU option and require configuration using metadata or a script. + +.. note:: + + Option 2 also supports attaching instances to provider networks. + +Choose one of the following networking options to configure services +specific to it. Afterwards, return here and proceed to +:ref:`neutron-controller-metadata-agent-obs`. + +.. toctree:: + :maxdepth: 1 + + controller-install-option1-obs.rst + controller-install-option2-obs.rst + +.. _neutron-controller-metadata-agent-obs: + +Configure the metadata agent +---------------------------- + +The metadata agent provides configuration information +such as credentials to instances. + +* Edit the ``/etc/neutron/metadata_agent.ini`` file and complete the following + actions: + + * In the ``[DEFAULT]`` section, configure the metadata host and shared + secret: + + .. path /etc/neutron/metadata_agent.ini + .. code-block:: ini + + [DEFAULT] + # ... + nova_metadata_ip = controller + metadata_proxy_shared_secret = METADATA_SECRET + + .. end + + Replace ``METADATA_SECRET`` with a suitable secret for the metadata proxy. + +Configure the Compute service to use the Networking service +----------------------------------------------------------- + +* Edit the ``/etc/nova/nova.conf`` file and perform the following actions: + + * In the ``[neutron]`` section, configure access parameters, enable the + metadata proxy, and configure the secret: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [neutron] + # ... + url = http://controller:9696 + auth_url = http://controller:35357 + auth_type = password + project_domain_name = default + user_domain_name = default + region_name = RegionOne + project_name = service + username = neutron + password = NEUTRON_PASS + service_metadata_proxy = true + metadata_proxy_shared_secret = METADATA_SECRET + + .. end + + Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` + user in the Identity service. + + Replace ``METADATA_SECRET`` with the secret you chose for the metadata + proxy. + +Finalize installation +--------------------- + + + +.. note:: + + SLES enables apparmor by default and restricts dnsmasq. You need to + either completely disable apparmor or disable only the dnsmasq + profile: + + .. code-block:: console + + # ln -s /etc/apparmor.d/usr.sbin.dnsmasq /etc/apparmor.d/disable/ + # systemctl restart apparmor + + .. end + +#. Restart the Compute API service: + + .. code-block:: console + + # systemctl restart openstack-nova-api.service + + .. end + +#. Start the Networking services and configure them to start when the system + boots. + + For both networking options: + + .. code-block:: console + + # systemctl enable openstack-neutron.service \ + openstack-neutron-linuxbridge-agent.service \ + openstack-neutron-dhcp-agent.service \ + openstack-neutron-metadata-agent.service + # systemctl start openstack-neutron.service \ + openstack-neutron-linuxbridge-agent.service \ + openstack-neutron-dhcp-agent.service \ + openstack-neutron-metadata-agent.service + + .. end + + For networking option 2, also enable and start the layer-3 service: + + .. code-block:: console + + # systemctl enable openstack-neutron-l3-agent.service + # systemctl start openstack-neutron-l3-agent.service + + .. 
end + + diff --git a/doc/source/install/controller-install-option1-obs.rst b/doc/source/install/controller-install-option1-obs.rst new file mode 100644 index 00000000000..a630a11e651 --- /dev/null +++ b/doc/source/install/controller-install-option1-obs.rst @@ -0,0 +1,289 @@ +Networking Option 1: Provider networks +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Install and configure the Networking components on the *controller* node. + +Install the components +---------------------- + + +.. code-block:: console + + # zypper install --no-recommends openstack-neutron \ + openstack-neutron-server openstack-neutron-linuxbridge-agent \ + openstack-neutron-dhcp-agent openstack-neutron-metadata-agent \ + bridge-utils + +.. end + +Configure the server component +------------------------------ + +The Networking server component configuration includes the database, +authentication mechanism, message queue, topology change notifications, +and plug-in. + +.. include:: shared/note_configuration_vary_by_distribution.rst + +* Edit the ``/etc/neutron/neutron.conf`` file and complete the following + actions: + + * In the ``[database]`` section, configure database access: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [database] + # ... + connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron + + .. end + + Replace ``NEUTRON_DBPASS`` with the password you chose for the + database. + + .. note:: + + Comment out or remove any other ``connection`` options in the + ``[database]`` section. + + * In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2) + plug-in and disable additional plug-ins: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... + core_plugin = ml2 + service_plugins = + + .. end + + * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` + message queue access: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... + transport_url = rabbit://openstack:RABBIT_PASS@controller + + .. end + + Replace ``RABBIT_PASS`` with the password you chose for the + ``openstack`` account in RabbitMQ. + + * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure + Identity service access: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... + auth_strategy = keystone + + [keystone_authtoken] + # ... + auth_uri = http://controller:5000 + auth_url = http://controller:35357 + memcached_servers = controller:11211 + auth_type = password + project_domain_name = default + user_domain_name = default + project_name = service + username = neutron + password = NEUTRON_PASS + + .. end + + Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` + user in the Identity service. + + .. note:: + + Comment out or remove any other options in the + ``[keystone_authtoken]`` section. + + * In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to + notify Compute of network topology changes: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... + notify_nova_on_port_status_changes = true + notify_nova_on_port_data_changes = true + + [nova] + # ... + auth_url = http://controller:35357 + auth_type = password + project_domain_name = default + user_domain_name = default + region_name = RegionOne + project_name = service + username = nova + password = NOVA_PASS + + .. end + + Replace ``NOVA_PASS`` with the password you chose for the ``nova`` + user in the Identity service. 
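+
+Taken together, the edits above produce a ``neutron.conf`` whose key
+options look roughly like the following condensed sketch. The passwords
+are the placeholders used throughout this guide, and the
+``[keystone_authtoken]`` and ``[nova]`` credential options are abbreviated
+here; use the full sections shown above.
+
+.. path /etc/neutron/neutron.conf
+.. code-block:: ini
+
+   [database]
+   connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
+
+   [DEFAULT]
+   core_plugin = ml2
+   service_plugins =
+   transport_url = rabbit://openstack:RABBIT_PASS@controller
+   auth_strategy = keystone
+   notify_nova_on_port_status_changes = true
+   notify_nova_on_port_data_changes = true
+
+   [keystone_authtoken]
+   # credentials for the neutron service user, as shown above
+
+   [nova]
+   # credentials for the nova service user, as shown above
+
+.. end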
+ +Configure the Modular Layer 2 (ML2) plug-in +------------------------------------------- + +The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging +and switching) virtual networking infrastructure for instances. + +* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the + following actions: + + * In the ``[ml2]`` section, enable flat and VLAN networks: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2] + # ... + type_drivers = flat,vlan + + .. end + + * In the ``[ml2]`` section, disable self-service networks: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2] + # ... + tenant_network_types = + + .. end + + * In the ``[ml2]`` section, enable the Linux bridge mechanism: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2] + # ... + mechanism_drivers = linuxbridge + + .. end + + .. warning:: + + After you configure the ML2 plug-in, removing values in the + ``type_drivers`` option can lead to database inconsistency. + + * In the ``[ml2]`` section, enable the port security extension driver: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2] + # ... + extension_drivers = port_security + + .. end + + * In the ``[ml2_type_flat]`` section, configure the provider virtual + network as a flat network: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2_type_flat] + # ... + flat_networks = provider + + .. end + + * In the ``[securitygroup]`` section, enable ipset to increase + efficiency of security group rules: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [securitygroup] + # ... + enable_ipset = true + + .. end + +Configure the Linux bridge agent +-------------------------------- + +The Linux bridge agent builds layer-2 (bridging and switching) virtual +networking infrastructure for instances and handles security groups. + +* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and + complete the following actions: + + * In the ``[linux_bridge]`` section, map the provider virtual network to the + provider physical network interface: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [linux_bridge] + physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME + + .. end + + Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying + provider physical network interface. See :doc:`environment-networking-obs` + for more information. + + * In the ``[vxlan]`` section, disable VXLAN overlay networks: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [vxlan] + enable_vxlan = false + + .. end + + * In the ``[securitygroup]`` section, enable security groups and + configure the Linux bridge iptables firewall driver: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [securitygroup] + # ... + enable_security_group = true + firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver + + .. end + +Configure the DHCP agent +------------------------ + +The DHCP agent provides DHCP services for virtual networks. + +* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following + actions: + + * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver, + Dnsmasq DHCP driver, and enable isolated metadata so instances on provider + networks can access metadata over the network: + + .. path /etc/neutron/dhcp_agent.ini + .. 
code-block:: ini + + [DEFAULT] + # ... + interface_driver = linuxbridge + dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq + enable_isolated_metadata = true + + .. end + +Return to *Networking controller node configuration*. diff --git a/doc/source/install/controller-install-option1-rdo.rst b/doc/source/install/controller-install-option1-rdo.rst new file mode 100644 index 00000000000..95e52cdd928 --- /dev/null +++ b/doc/source/install/controller-install-option1-rdo.rst @@ -0,0 +1,299 @@ +Networking Option 1: Provider networks +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Install and configure the Networking components on the *controller* node. + +Install the components +---------------------- + + +.. code-block:: console + + # yum install openstack-neutron openstack-neutron-ml2 \ + openstack-neutron-linuxbridge ebtables + +.. end + +Configure the server component +------------------------------ + +The Networking server component configuration includes the database, +authentication mechanism, message queue, topology change notifications, +and plug-in. + +.. include:: shared/note_configuration_vary_by_distribution.rst + +* Edit the ``/etc/neutron/neutron.conf`` file and complete the following + actions: + + * In the ``[database]`` section, configure database access: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [database] + # ... + connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron + + .. end + + Replace ``NEUTRON_DBPASS`` with the password you chose for the + database. + + .. note:: + + Comment out or remove any other ``connection`` options in the + ``[database]`` section. + + * In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2) + plug-in and disable additional plug-ins: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... + core_plugin = ml2 + service_plugins = + + .. end + + * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` + message queue access: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... + transport_url = rabbit://openstack:RABBIT_PASS@controller + + .. end + + Replace ``RABBIT_PASS`` with the password you chose for the + ``openstack`` account in RabbitMQ. + + * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure + Identity service access: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... + auth_strategy = keystone + + [keystone_authtoken] + # ... + auth_uri = http://controller:5000 + auth_url = http://controller:35357 + memcached_servers = controller:11211 + auth_type = password + project_domain_name = default + user_domain_name = default + project_name = service + username = neutron + password = NEUTRON_PASS + + .. end + + Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` + user in the Identity service. + + .. note:: + + Comment out or remove any other options in the + ``[keystone_authtoken]`` section. + + * In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to + notify Compute of network topology changes: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... + notify_nova_on_port_status_changes = true + notify_nova_on_port_data_changes = true + + [nova] + # ... + auth_url = http://controller:35357 + auth_type = password + project_domain_name = default + user_domain_name = default + region_name = RegionOne + project_name = service + username = nova + password = NOVA_PASS + + .. 
end + + Replace ``NOVA_PASS`` with the password you chose for the ``nova`` + user in the Identity service. + + +* In the ``[oslo_concurrency]`` section, configure the lock path: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [oslo_concurrency] + # ... + lock_path = /var/lib/neutron/tmp + + .. end + +Configure the Modular Layer 2 (ML2) plug-in +------------------------------------------- + +The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging +and switching) virtual networking infrastructure for instances. + +* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the + following actions: + + * In the ``[ml2]`` section, enable flat and VLAN networks: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2] + # ... + type_drivers = flat,vlan + + .. end + + * In the ``[ml2]`` section, disable self-service networks: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2] + # ... + tenant_network_types = + + .. end + + * In the ``[ml2]`` section, enable the Linux bridge mechanism: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2] + # ... + mechanism_drivers = linuxbridge + + .. end + + .. warning:: + + After you configure the ML2 plug-in, removing values in the + ``type_drivers`` option can lead to database inconsistency. + + * In the ``[ml2]`` section, enable the port security extension driver: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2] + # ... + extension_drivers = port_security + + .. end + + * In the ``[ml2_type_flat]`` section, configure the provider virtual + network as a flat network: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2_type_flat] + # ... + flat_networks = provider + + .. end + + * In the ``[securitygroup]`` section, enable ipset to increase + efficiency of security group rules: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [securitygroup] + # ... + enable_ipset = true + + .. end + +Configure the Linux bridge agent +-------------------------------- + +The Linux bridge agent builds layer-2 (bridging and switching) virtual +networking infrastructure for instances and handles security groups. + +* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and + complete the following actions: + + * In the ``[linux_bridge]`` section, map the provider virtual network to the + provider physical network interface: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [linux_bridge] + physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME + + .. end + + Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying + provider physical network interface. See :doc:`environment-networking-rdo` + for more information. + + * In the ``[vxlan]`` section, disable VXLAN overlay networks: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [vxlan] + enable_vxlan = false + + .. end + + * In the ``[securitygroup]`` section, enable security groups and + configure the Linux bridge iptables firewall driver: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [securitygroup] + # ... + enable_security_group = true + firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver + + .. end + +Configure the DHCP agent +------------------------ + +The DHCP agent provides DHCP services for virtual networks. 
+ +* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following + actions: + + * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver, + Dnsmasq DHCP driver, and enable isolated metadata so instances on provider + networks can access metadata over the network: + + .. path /etc/neutron/dhcp_agent.ini + .. code-block:: ini + + [DEFAULT] + # ... + interface_driver = linuxbridge + dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq + enable_isolated_metadata = true + + .. end + +Return to *Networking controller node configuration*. diff --git a/doc/source/install/controller-install-option1-ubuntu.rst b/doc/source/install/controller-install-option1-ubuntu.rst new file mode 100644 index 00000000000..938b50083b0 --- /dev/null +++ b/doc/source/install/controller-install-option1-ubuntu.rst @@ -0,0 +1,288 @@ +Networking Option 1: Provider networks +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Install and configure the Networking components on the *controller* node. + +Install the components +---------------------- + + +.. code-block:: console + + # apt install neutron-server neutron-plugin-ml2 \ + neutron-linuxbridge-agent neutron-dhcp-agent \ + neutron-metadata-agent + +.. end + +Configure the server component +------------------------------ + +The Networking server component configuration includes the database, +authentication mechanism, message queue, topology change notifications, +and plug-in. + +.. include:: shared/note_configuration_vary_by_distribution.rst + +* Edit the ``/etc/neutron/neutron.conf`` file and complete the following + actions: + + * In the ``[database]`` section, configure database access: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [database] + # ... + connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron + + .. end + + Replace ``NEUTRON_DBPASS`` with the password you chose for the + database. + + .. note:: + + Comment out or remove any other ``connection`` options in the + ``[database]`` section. + + * In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2) + plug-in and disable additional plug-ins: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... + core_plugin = ml2 + service_plugins = + + .. end + + * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` + message queue access: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... + transport_url = rabbit://openstack:RABBIT_PASS@controller + + .. end + + Replace ``RABBIT_PASS`` with the password you chose for the + ``openstack`` account in RabbitMQ. + + * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure + Identity service access: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... + auth_strategy = keystone + + [keystone_authtoken] + # ... + auth_uri = http://controller:5000 + auth_url = http://controller:35357 + memcached_servers = controller:11211 + auth_type = password + project_domain_name = default + user_domain_name = default + project_name = service + username = neutron + password = NEUTRON_PASS + + .. end + + Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` + user in the Identity service. + + .. note:: + + Comment out or remove any other options in the + ``[keystone_authtoken]`` section. + + * In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to + notify Compute of network topology changes: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... 
+ notify_nova_on_port_status_changes = true + notify_nova_on_port_data_changes = true + + [nova] + # ... + auth_url = http://controller:35357 + auth_type = password + project_domain_name = default + user_domain_name = default + region_name = RegionOne + project_name = service + username = nova + password = NOVA_PASS + + .. end + + Replace ``NOVA_PASS`` with the password you chose for the ``nova`` + user in the Identity service. + +Configure the Modular Layer 2 (ML2) plug-in +------------------------------------------- + +The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging +and switching) virtual networking infrastructure for instances. + +* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the + following actions: + + * In the ``[ml2]`` section, enable flat and VLAN networks: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2] + # ... + type_drivers = flat,vlan + + .. end + + * In the ``[ml2]`` section, disable self-service networks: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2] + # ... + tenant_network_types = + + .. end + + * In the ``[ml2]`` section, enable the Linux bridge mechanism: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2] + # ... + mechanism_drivers = linuxbridge + + .. end + + .. warning:: + + After you configure the ML2 plug-in, removing values in the + ``type_drivers`` option can lead to database inconsistency. + + * In the ``[ml2]`` section, enable the port security extension driver: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2] + # ... + extension_drivers = port_security + + .. end + + * In the ``[ml2_type_flat]`` section, configure the provider virtual + network as a flat network: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2_type_flat] + # ... + flat_networks = provider + + .. end + + * In the ``[securitygroup]`` section, enable ipset to increase + efficiency of security group rules: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [securitygroup] + # ... + enable_ipset = true + + .. end + +Configure the Linux bridge agent +-------------------------------- + +The Linux bridge agent builds layer-2 (bridging and switching) virtual +networking infrastructure for instances and handles security groups. + +* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and + complete the following actions: + + * In the ``[linux_bridge]`` section, map the provider virtual network to the + provider physical network interface: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [linux_bridge] + physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME + + .. end + + Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying + provider physical network interface. See :doc:`environment-networking-ubuntu` + for more information. + + * In the ``[vxlan]`` section, disable VXLAN overlay networks: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [vxlan] + enable_vxlan = false + + .. end + + * In the ``[securitygroup]`` section, enable security groups and + configure the Linux bridge iptables firewall driver: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [securitygroup] + # ... + enable_security_group = true + firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver + + .. 
end + +Configure the DHCP agent +------------------------ + +The DHCP agent provides DHCP services for virtual networks. + +* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following + actions: + + * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver, + Dnsmasq DHCP driver, and enable isolated metadata so instances on provider + networks can access metadata over the network: + + .. path /etc/neutron/dhcp_agent.ini + .. code-block:: ini + + [DEFAULT] + # ... + interface_driver = linuxbridge + dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq + enable_isolated_metadata = true + + .. end + +Return to *Networking controller node configuration*. diff --git a/doc/source/install/controller-install-option2-obs.rst b/doc/source/install/controller-install-option2-obs.rst new file mode 100644 index 00000000000..8bf6745c268 --- /dev/null +++ b/doc/source/install/controller-install-option2-obs.rst @@ -0,0 +1,337 @@ +Networking Option 2: Self-service networks +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Install and configure the Networking components on the *controller* node. + +Install the components +---------------------- + + + + +.. code-block:: console + + # zypper install --no-recommends openstack-neutron \ + openstack-neutron-server openstack-neutron-linuxbridge-agent \ + openstack-neutron-l3-agent openstack-neutron-dhcp-agent \ + openstack-neutron-metadata-agent bridge-utils + +.. end + + + +Configure the server component +------------------------------ + +* Edit the ``/etc/neutron/neutron.conf`` file and complete the following + actions: + + * In the ``[database]`` section, configure database access: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [database] + # ... + connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron + + .. end + + Replace ``NEUTRON_DBPASS`` with the password you chose for the + database. + + .. note:: + + Comment out or remove any other ``connection`` options in the + ``[database]`` section. + + * In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2) + plug-in, router service, and overlapping IP addresses: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... + core_plugin = ml2 + service_plugins = router + allow_overlapping_ips = true + + .. end + + * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` + message queue access: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... + transport_url = rabbit://openstack:RABBIT_PASS@controller + + .. end + + Replace ``RABBIT_PASS`` with the password you chose for the + ``openstack`` account in RabbitMQ. + + * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure + Identity service access: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... + auth_strategy = keystone + + [keystone_authtoken] + # ... + auth_uri = http://controller:5000 + auth_url = http://controller:35357 + memcached_servers = controller:11211 + auth_type = password + project_domain_name = default + user_domain_name = default + project_name = service + username = neutron + password = NEUTRON_PASS + + .. end + + Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` + user in the Identity service. + + .. note:: + + Comment out or remove any other options in the + ``[keystone_authtoken]`` section. + + * In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to + notify Compute of network topology changes: + + .. path /etc/neutron/neutron.conf + .. 
code-block:: ini + + [DEFAULT] + # ... + notify_nova_on_port_status_changes = true + notify_nova_on_port_data_changes = true + + [nova] + # ... + auth_url = http://controller:35357 + auth_type = password + project_domain_name = default + user_domain_name = default + region_name = RegionOne + project_name = service + username = nova + password = NOVA_PASS + + .. end + + Replace ``NOVA_PASS`` with the password you chose for the ``nova`` + user in the Identity service. + +Configure the Modular Layer 2 (ML2) plug-in +------------------------------------------- + +The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging +and switching) virtual networking infrastructure for instances. + +* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the + following actions: + + * In the ``[ml2]`` section, enable flat, VLAN, and VXLAN networks: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2] + # ... + type_drivers = flat,vlan,vxlan + + .. end + + * In the ``[ml2]`` section, enable VXLAN self-service networks: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2] + # ... + tenant_network_types = vxlan + + .. end + + * In the ``[ml2]`` section, enable the Linux bridge and layer-2 population + mechanisms: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2] + # ... + mechanism_drivers = linuxbridge,l2population + + .. end + + .. warning:: + + After you configure the ML2 plug-in, removing values in the + ``type_drivers`` option can lead to database inconsistency. + + .. note:: + + The Linux bridge agent only supports VXLAN overlay networks. + + * In the ``[ml2]`` section, enable the port security extension driver: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2] + # ... + extension_drivers = port_security + + .. end + + * In the ``[ml2_type_flat]`` section, configure the provider virtual + network as a flat network: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2_type_flat] + # ... + flat_networks = provider + + .. end + + * In the ``[ml2_type_vxlan]`` section, configure the VXLAN network identifier + range for self-service networks: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2_type_vxlan] + # ... + vni_ranges = 1:1000 + + .. end + + * In the ``[securitygroup]`` section, enable ipset to increase + efficiency of security group rules: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [securitygroup] + # ... + enable_ipset = true + + .. end + +Configure the Linux bridge agent +-------------------------------- + +The Linux bridge agent builds layer-2 (bridging and switching) virtual +networking infrastructure for instances and handles security groups. + +* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and + complete the following actions: + + * In the ``[linux_bridge]`` section, map the provider virtual network to the + provider physical network interface: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [linux_bridge] + physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME + + .. end + + Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying + provider physical network interface. See :doc:`environment-networking-obs` + for more information. 
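+
+    As an illustration only, if the provider interface on this node were
+    named ``eth1`` (a hypothetical value; substitute the interface from your
+    environment), the option would read as follows:
+
+    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
+    .. code-block:: ini
+
+       [linux_bridge]
+       physical_interface_mappings = provider:eth1
+
+    .. end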
+ + * In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the + IP address of the physical network interface that handles overlay + networks, and enable layer-2 population: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [vxlan] + enable_vxlan = true + local_ip = OVERLAY_INTERFACE_IP_ADDRESS + l2_population = true + + .. end + + Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the + underlying physical network interface that handles overlay networks. The + example architecture uses the management interface to tunnel traffic to + the other nodes. Therefore, replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with + the management IP address of the controller node. See + :doc:`environment-networking-obs` for more information. + + * In the ``[securitygroup]`` section, enable security groups and + configure the Linux bridge iptables firewall driver: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [securitygroup] + # ... + enable_security_group = true + firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver + + .. end + +Configure the layer-3 agent +--------------------------- + +The Layer-3 (L3) agent provides routing and NAT services for +self-service virtual networks. + +* Edit the ``/etc/neutron/l3_agent.ini`` file and complete the following + actions: + + * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver + and external network bridge: + + .. path /etc/neutron/l3_agent.ini + .. code-block:: ini + + [DEFAULT] + # ... + interface_driver = linuxbridge + + .. end + +Configure the DHCP agent +------------------------ + +The DHCP agent provides DHCP services for virtual networks. + +* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following + actions: + + * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver, + Dnsmasq DHCP driver, and enable isolated metadata so instances on provider + networks can access metadata over the network: + + .. path /etc/neutron/dhcp_agent.ini + .. code-block:: ini + + [DEFAULT] + # ... + interface_driver = linuxbridge + dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq + enable_isolated_metadata = true + + .. end + +Return to *Networking controller node configuration*. diff --git a/doc/source/install/controller-install-option2-rdo.rst b/doc/source/install/controller-install-option2-rdo.rst new file mode 100644 index 00000000000..41b65c7893e --- /dev/null +++ b/doc/source/install/controller-install-option2-rdo.rst @@ -0,0 +1,347 @@ +Networking Option 2: Self-service networks +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Install and configure the Networking components on the *controller* node. + +Install the components +---------------------- + + + +.. code-block:: console + + # yum install openstack-neutron openstack-neutron-ml2 \ + openstack-neutron-linuxbridge ebtables + +.. end + + + + +Configure the server component +------------------------------ + +* Edit the ``/etc/neutron/neutron.conf`` file and complete the following + actions: + + * In the ``[database]`` section, configure database access: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [database] + # ... + connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron + + .. end + + Replace ``NEUTRON_DBPASS`` with the password you chose for the + database. + + .. note:: + + Comment out or remove any other ``connection`` options in the + ``[database]`` section. 
+ + * In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2) + plug-in, router service, and overlapping IP addresses: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... + core_plugin = ml2 + service_plugins = router + allow_overlapping_ips = true + + .. end + + * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` + message queue access: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... + transport_url = rabbit://openstack:RABBIT_PASS@controller + + .. end + + Replace ``RABBIT_PASS`` with the password you chose for the + ``openstack`` account in RabbitMQ. + + * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure + Identity service access: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... + auth_strategy = keystone + + [keystone_authtoken] + # ... + auth_uri = http://controller:5000 + auth_url = http://controller:35357 + memcached_servers = controller:11211 + auth_type = password + project_domain_name = default + user_domain_name = default + project_name = service + username = neutron + password = NEUTRON_PASS + + .. end + + Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` + user in the Identity service. + + .. note:: + + Comment out or remove any other options in the + ``[keystone_authtoken]`` section. + + * In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to + notify Compute of network topology changes: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... + notify_nova_on_port_status_changes = true + notify_nova_on_port_data_changes = true + + [nova] + # ... + auth_url = http://controller:35357 + auth_type = password + project_domain_name = default + user_domain_name = default + region_name = RegionOne + project_name = service + username = nova + password = NOVA_PASS + + .. end + + Replace ``NOVA_PASS`` with the password you chose for the ``nova`` + user in the Identity service. + + +* In the ``[oslo_concurrency]`` section, configure the lock path: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [oslo_concurrency] + # ... + lock_path = /var/lib/neutron/tmp + + .. end + +Configure the Modular Layer 2 (ML2) plug-in +------------------------------------------- + +The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging +and switching) virtual networking infrastructure for instances. + +* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the + following actions: + + * In the ``[ml2]`` section, enable flat, VLAN, and VXLAN networks: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2] + # ... + type_drivers = flat,vlan,vxlan + + .. end + + * In the ``[ml2]`` section, enable VXLAN self-service networks: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2] + # ... + tenant_network_types = vxlan + + .. end + + * In the ``[ml2]`` section, enable the Linux bridge and layer-2 population + mechanisms: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2] + # ... + mechanism_drivers = linuxbridge,l2population + + .. end + + .. warning:: + + After you configure the ML2 plug-in, removing values in the + ``type_drivers`` option can lead to database inconsistency. + + .. note:: + + The Linux bridge agent only supports VXLAN overlay networks. + + * In the ``[ml2]`` section, enable the port security extension driver: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. 
code-block:: ini + + [ml2] + # ... + extension_drivers = port_security + + .. end + + * In the ``[ml2_type_flat]`` section, configure the provider virtual + network as a flat network: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2_type_flat] + # ... + flat_networks = provider + + .. end + + * In the ``[ml2_type_vxlan]`` section, configure the VXLAN network identifier + range for self-service networks: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2_type_vxlan] + # ... + vni_ranges = 1:1000 + + .. end + + * In the ``[securitygroup]`` section, enable ipset to increase + efficiency of security group rules: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [securitygroup] + # ... + enable_ipset = true + + .. end + +Configure the Linux bridge agent +-------------------------------- + +The Linux bridge agent builds layer-2 (bridging and switching) virtual +networking infrastructure for instances and handles security groups. + +* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and + complete the following actions: + + * In the ``[linux_bridge]`` section, map the provider virtual network to the + provider physical network interface: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [linux_bridge] + physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME + + .. end + + Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying + provider physical network interface. See :doc:`environment-networking-rdo` + for more information. + + * In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the + IP address of the physical network interface that handles overlay + networks, and enable layer-2 population: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [vxlan] + enable_vxlan = true + local_ip = OVERLAY_INTERFACE_IP_ADDRESS + l2_population = true + + .. end + + Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the + underlying physical network interface that handles overlay networks. The + example architecture uses the management interface to tunnel traffic to + the other nodes. Therefore, replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with + the management IP address of the controller node. See + :doc:`environment-networking-rdo` for more information. + + * In the ``[securitygroup]`` section, enable security groups and + configure the Linux bridge iptables firewall driver: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [securitygroup] + # ... + enable_security_group = true + firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver + + .. end + +Configure the layer-3 agent +--------------------------- + +The Layer-3 (L3) agent provides routing and NAT services for +self-service virtual networks. + +* Edit the ``/etc/neutron/l3_agent.ini`` file and complete the following + actions: + + * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver + and external network bridge: + + .. path /etc/neutron/l3_agent.ini + .. code-block:: ini + + [DEFAULT] + # ... + interface_driver = linuxbridge + + .. end + +Configure the DHCP agent +------------------------ + +The DHCP agent provides DHCP services for virtual networks. 
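+
+The ``enable_isolated_metadata`` option set below allows instances on
+networks without a router (such as the provider network in this guide) to
+reach the metadata service through the DHCP namespace. After the services
+are running, one optional, hedged way to confirm that this agent registered
+with the server is:
+
+.. code-block:: console
+
+   $ openstack network agent list
+
+.. end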
+ +* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following + actions: + + * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver, + Dnsmasq DHCP driver, and enable isolated metadata so instances on provider + networks can access metadata over the network: + + .. path /etc/neutron/dhcp_agent.ini + .. code-block:: ini + + [DEFAULT] + # ... + interface_driver = linuxbridge + dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq + enable_isolated_metadata = true + + .. end + +Return to *Networking controller node configuration*. diff --git a/doc/source/install/controller-install-option2-ubuntu.rst b/doc/source/install/controller-install-option2-ubuntu.rst new file mode 100644 index 00000000000..71d228d5676 --- /dev/null +++ b/doc/source/install/controller-install-option2-ubuntu.rst @@ -0,0 +1,336 @@ +Networking Option 2: Self-service networks +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Install and configure the Networking components on the *controller* node. + +Install the components +---------------------- + + +.. code-block:: console + + # apt install neutron-server neutron-plugin-ml2 \ + neutron-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent \ + neutron-metadata-agent + +.. end + + + + + +Configure the server component +------------------------------ + +* Edit the ``/etc/neutron/neutron.conf`` file and complete the following + actions: + + * In the ``[database]`` section, configure database access: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [database] + # ... + connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron + + .. end + + Replace ``NEUTRON_DBPASS`` with the password you chose for the + database. + + .. note:: + + Comment out or remove any other ``connection`` options in the + ``[database]`` section. + + * In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2) + plug-in, router service, and overlapping IP addresses: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... + core_plugin = ml2 + service_plugins = router + allow_overlapping_ips = true + + .. end + + * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` + message queue access: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... + transport_url = rabbit://openstack:RABBIT_PASS@controller + + .. end + + Replace ``RABBIT_PASS`` with the password you chose for the + ``openstack`` account in RabbitMQ. + + * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure + Identity service access: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... + auth_strategy = keystone + + [keystone_authtoken] + # ... + auth_uri = http://controller:5000 + auth_url = http://controller:35357 + memcached_servers = controller:11211 + auth_type = password + project_domain_name = default + user_domain_name = default + project_name = service + username = neutron + password = NEUTRON_PASS + + .. end + + Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` + user in the Identity service. + + .. note:: + + Comment out or remove any other options in the + ``[keystone_authtoken]`` section. + + * In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to + notify Compute of network topology changes: + + .. path /etc/neutron/neutron.conf + .. code-block:: ini + + [DEFAULT] + # ... + notify_nova_on_port_status_changes = true + notify_nova_on_port_data_changes = true + + [nova] + # ... 
+ auth_url = http://controller:35357 + auth_type = password + project_domain_name = default + user_domain_name = default + region_name = RegionOne + project_name = service + username = nova + password = NOVA_PASS + + .. end + + Replace ``NOVA_PASS`` with the password you chose for the ``nova`` + user in the Identity service. + +Configure the Modular Layer 2 (ML2) plug-in +------------------------------------------- + +The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging +and switching) virtual networking infrastructure for instances. + +* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the + following actions: + + * In the ``[ml2]`` section, enable flat, VLAN, and VXLAN networks: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2] + # ... + type_drivers = flat,vlan,vxlan + + .. end + + * In the ``[ml2]`` section, enable VXLAN self-service networks: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2] + # ... + tenant_network_types = vxlan + + .. end + + * In the ``[ml2]`` section, enable the Linux bridge and layer-2 population + mechanisms: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2] + # ... + mechanism_drivers = linuxbridge,l2population + + .. end + + .. warning:: + + After you configure the ML2 plug-in, removing values in the + ``type_drivers`` option can lead to database inconsistency. + + .. note:: + + The Linux bridge agent only supports VXLAN overlay networks. + + * In the ``[ml2]`` section, enable the port security extension driver: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2] + # ... + extension_drivers = port_security + + .. end + + * In the ``[ml2_type_flat]`` section, configure the provider virtual + network as a flat network: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2_type_flat] + # ... + flat_networks = provider + + .. end + + * In the ``[ml2_type_vxlan]`` section, configure the VXLAN network identifier + range for self-service networks: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [ml2_type_vxlan] + # ... + vni_ranges = 1:1000 + + .. end + + * In the ``[securitygroup]`` section, enable ipset to increase + efficiency of security group rules: + + .. path /etc/neutron/plugins/ml2/ml2_conf.ini + .. code-block:: ini + + [securitygroup] + # ... + enable_ipset = true + + .. end + +Configure the Linux bridge agent +-------------------------------- + +The Linux bridge agent builds layer-2 (bridging and switching) virtual +networking infrastructure for instances and handles security groups. + +* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and + complete the following actions: + + * In the ``[linux_bridge]`` section, map the provider virtual network to the + provider physical network interface: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [linux_bridge] + physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME + + .. end + + Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying + provider physical network interface. See :doc:`environment-networking-ubuntu` + for more information. + + * In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the + IP address of the physical network interface that handles overlay + networks, and enable layer-2 population: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. 
code-block:: ini + + [vxlan] + enable_vxlan = true + local_ip = OVERLAY_INTERFACE_IP_ADDRESS + l2_population = true + + .. end + + Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the + underlying physical network interface that handles overlay networks. The + example architecture uses the management interface to tunnel traffic to + the other nodes. Therefore, replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with + the management IP address of the controller node. See + :doc:`environment-networking-ubuntu` for more information. + + * In the ``[securitygroup]`` section, enable security groups and + configure the Linux bridge iptables firewall driver: + + .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini + .. code-block:: ini + + [securitygroup] + # ... + enable_security_group = true + firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver + + .. end + +Configure the layer-3 agent +--------------------------- + +The Layer-3 (L3) agent provides routing and NAT services for +self-service virtual networks. + +* Edit the ``/etc/neutron/l3_agent.ini`` file and complete the following + actions: + + * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver + and external network bridge: + + .. path /etc/neutron/l3_agent.ini + .. code-block:: ini + + [DEFAULT] + # ... + interface_driver = linuxbridge + + .. end + +Configure the DHCP agent +------------------------ + +The DHCP agent provides DHCP services for virtual networks. + +* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following + actions: + + * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver, + Dnsmasq DHCP driver, and enable isolated metadata so instances on provider + networks can access metadata over the network: + + .. path /etc/neutron/dhcp_agent.ini + .. code-block:: ini + + [DEFAULT] + # ... + interface_driver = linuxbridge + dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq + enable_isolated_metadata = true + + .. end + +Return to *Networking controller node configuration*. diff --git a/doc/source/install/controller-install-rdo.rst b/doc/source/install/controller-install-rdo.rst new file mode 100644 index 00000000000..bfc0368d4d0 --- /dev/null +++ b/doc/source/install/controller-install-rdo.rst @@ -0,0 +1,329 @@ +Install and configure controller node +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Prerequisites +------------- + +Before you configure the OpenStack Networking (neutron) service, you +must create a database, service credentials, and API endpoints. + +#. To create the database, complete these steps: + + + +* Use the database access client to connect to the database + server as the ``root`` user: + + .. code-block:: console + + $ mysql -u root -p + + .. end + + + * Create the ``neutron`` database: + + .. code-block:: console + + MariaDB [(none)] CREATE DATABASE neutron; + + .. end + + * Grant proper access to the ``neutron`` database, replacing + ``NEUTRON_DBPASS`` with a suitable password: + + .. code-block:: console + + MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \ + IDENTIFIED BY 'NEUTRON_DBPASS'; + MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \ + IDENTIFIED BY 'NEUTRON_DBPASS'; + + .. end + + * Exit the database access client. + +#. Source the ``admin`` credentials to gain access to admin-only CLI + commands: + + .. code-block:: console + + $ . admin-openrc + + .. end + +#. To create the service credentials, complete these steps: + + * Create the ``neutron`` user: + + .. 
code-block:: console + + $ openstack user create --domain default --password-prompt neutron + + User Password: + Repeat User Password: + +---------------------+----------------------------------+ + | Field | Value | + +---------------------+----------------------------------+ + | domain_id | default | + | enabled | True | + | id | fdb0f541e28141719b6a43c8944bf1fb | + | name | neutron | + | options | {} | + | password_expires_at | None | + +---------------------+----------------------------------+ + + .. end + + * Add the ``admin`` role to the ``neutron`` user: + + .. code-block:: console + + $ openstack role add --project service --user neutron admin + + .. end + + .. note:: + + This command provides no output. + + * Create the ``neutron`` service entity: + + .. code-block:: console + + $ openstack service create --name neutron \ + --description "OpenStack Networking" network + + +-------------+----------------------------------+ + | Field | Value | + +-------------+----------------------------------+ + | description | OpenStack Networking | + | enabled | True | + | id | f71529314dab4a4d8eca427e701d209e | + | name | neutron | + | type | network | + +-------------+----------------------------------+ + + .. end + +#. Create the Networking service API endpoints: + + .. code-block:: console + + $ openstack endpoint create --region RegionOne \ + network public http://controller:9696 + + +--------------+----------------------------------+ + | Field | Value | + +--------------+----------------------------------+ + | enabled | True | + | id | 85d80a6d02fc4b7683f611d7fc1493a3 | + | interface | public | + | region | RegionOne | + | region_id | RegionOne | + | service_id | f71529314dab4a4d8eca427e701d209e | + | service_name | neutron | + | service_type | network | + | url | http://controller:9696 | + +--------------+----------------------------------+ + + $ openstack endpoint create --region RegionOne \ + network internal http://controller:9696 + + +--------------+----------------------------------+ + | Field | Value | + +--------------+----------------------------------+ + | enabled | True | + | id | 09753b537ac74422a68d2d791cf3714f | + | interface | internal | + | region | RegionOne | + | region_id | RegionOne | + | service_id | f71529314dab4a4d8eca427e701d209e | + | service_name | neutron | + | service_type | network | + | url | http://controller:9696 | + +--------------+----------------------------------+ + + $ openstack endpoint create --region RegionOne \ + network admin http://controller:9696 + + +--------------+----------------------------------+ + | Field | Value | + +--------------+----------------------------------+ + | enabled | True | + | id | 1ee14289c9374dffb5db92a5c112fc4e | + | interface | admin | + | region | RegionOne | + | region_id | RegionOne | + | service_id | f71529314dab4a4d8eca427e701d209e | + | service_name | neutron | + | service_type | network | + | url | http://controller:9696 | + +--------------+----------------------------------+ + + .. end + +Configure networking options +---------------------------- + +You can deploy the Networking service using one of two architectures +represented by options 1 and 2. + +Option 1 deploys the simplest possible architecture that only supports +attaching instances to provider (external) networks. No self-service (private) +networks, routers, or floating IP addresses. Only the ``admin`` or other +privileged user can manage provider networks. 
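+
+As an illustration of what managing a provider network involves, an
+``admin`` user would typically create the shared provider network with a
+command along these lines (shown only as a hedged example; the flat network
+type and the ``provider`` physical network name must match the ML2
+configuration chosen below):
+
+.. code-block:: console
+
+   $ openstack network create --share --external \
+     --provider-physical-network provider \
+     --provider-network-type flat provider
+
+.. end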
+ +Option 2 augments option 1 with layer-3 services that support attaching +instances to self-service networks. The ``demo`` or other unprivileged +user can manage self-service networks including routers that provide +connectivity between self-service and provider networks. Additionally, +floating IP addresses provide connectivity to instances using self-service +networks from external networks such as the Internet. + +Self-service networks typically use overlay networks. Overlay network +protocols such as VXLAN include additional headers that increase overhead +and decrease space available for the payload or user data. Without knowledge +of the virtual network infrastructure, instances attempt to send packets +using the default Ethernet maximum transmission unit (MTU) of 1500 +bytes. The Networking service automatically provides the correct MTU value +to instances via DHCP. However, some cloud images do not use DHCP or ignore +the DHCP MTU option and require configuration using metadata or a script. + +.. note:: + + Option 2 also supports attaching instances to provider networks. + +Choose one of the following networking options to configure services +specific to it. Afterwards, return here and proceed to +:ref:`neutron-controller-metadata-agent-rdo`. + +.. toctree:: + :maxdepth: 1 + + controller-install-option1-rdo.rst + controller-install-option2-rdo.rst + +.. _neutron-controller-metadata-agent-rdo: + +Configure the metadata agent +---------------------------- + +The metadata agent provides configuration information +such as credentials to instances. + +* Edit the ``/etc/neutron/metadata_agent.ini`` file and complete the following + actions: + + * In the ``[DEFAULT]`` section, configure the metadata host and shared + secret: + + .. path /etc/neutron/metadata_agent.ini + .. code-block:: ini + + [DEFAULT] + # ... + nova_metadata_ip = controller + metadata_proxy_shared_secret = METADATA_SECRET + + .. end + + Replace ``METADATA_SECRET`` with a suitable secret for the metadata proxy. + +Configure the Compute service to use the Networking service +----------------------------------------------------------- + +* Edit the ``/etc/nova/nova.conf`` file and perform the following actions: + + * In the ``[neutron]`` section, configure access parameters, enable the + metadata proxy, and configure the secret: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [neutron] + # ... + url = http://controller:9696 + auth_url = http://controller:35357 + auth_type = password + project_domain_name = default + user_domain_name = default + region_name = RegionOne + project_name = service + username = neutron + password = NEUTRON_PASS + service_metadata_proxy = true + metadata_proxy_shared_secret = METADATA_SECRET + + .. end + + Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` + user in the Identity service. + + Replace ``METADATA_SECRET`` with the secret you chose for the metadata + proxy. + +Finalize installation +--------------------- + + +#. The Networking service initialization scripts expect a symbolic link + ``/etc/neutron/plugin.ini`` pointing to the ML2 plug-in configuration + file, ``/etc/neutron/plugins/ml2/ml2_conf.ini``. If this symbolic + link does not exist, create it using the following command: + + .. code-block:: console + + # ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini + + .. end + +#. Populate the database: + + .. 
code-block:: console + + # su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ + --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron + + .. end + + .. note:: + + Database population occurs later for Networking because the script + requires complete server and plug-in configuration files. + +#. Restart the Compute API service: + + .. code-block:: console + + # systemctl restart openstack-nova-api.service + + .. end + +#. Start the Networking services and configure them to start when the system + boots. + + For both networking options: + + .. code-block:: console + + # systemctl enable neutron-server.service \ + neutron-linuxbridge-agent.service neutron-dhcp-agent.service \ + neutron-metadata-agent.service + # systemctl start neutron-server.service \ + neutron-linuxbridge-agent.service neutron-dhcp-agent.service \ + neutron-metadata-agent.service + + .. end + + For networking option 2, also enable and start the layer-3 service: + + .. code-block:: console + + # systemctl enable neutron-l3-agent.service + # systemctl start neutron-l3-agent.service + + .. end + + + diff --git a/doc/source/install/controller-install-ubuntu.rst b/doc/source/install/controller-install-ubuntu.rst new file mode 100644 index 00000000000..8ec24ee6761 --- /dev/null +++ b/doc/source/install/controller-install-ubuntu.rst @@ -0,0 +1,314 @@ +Install and configure controller node +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Prerequisites +------------- + +Before you configure the OpenStack Networking (neutron) service, you +must create a database, service credentials, and API endpoints. + +#. To create the database, complete these steps: + + +* Use the database access client to connect to the database + server as the ``root`` user: + + .. code-block:: console + + # mysql + + .. end + + + + * Create the ``neutron`` database: + + .. code-block:: console + + MariaDB [(none)] CREATE DATABASE neutron; + + .. end + + * Grant proper access to the ``neutron`` database, replacing + ``NEUTRON_DBPASS`` with a suitable password: + + .. code-block:: console + + MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \ + IDENTIFIED BY 'NEUTRON_DBPASS'; + MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \ + IDENTIFIED BY 'NEUTRON_DBPASS'; + + .. end + + * Exit the database access client. + +#. Source the ``admin`` credentials to gain access to admin-only CLI + commands: + + .. code-block:: console + + $ . admin-openrc + + .. end + +#. To create the service credentials, complete these steps: + + * Create the ``neutron`` user: + + .. code-block:: console + + $ openstack user create --domain default --password-prompt neutron + + User Password: + Repeat User Password: + +---------------------+----------------------------------+ + | Field | Value | + +---------------------+----------------------------------+ + | domain_id | default | + | enabled | True | + | id | fdb0f541e28141719b6a43c8944bf1fb | + | name | neutron | + | options | {} | + | password_expires_at | None | + +---------------------+----------------------------------+ + + .. end + + * Add the ``admin`` role to the ``neutron`` user: + + .. code-block:: console + + $ openstack role add --project service --user neutron admin + + .. end + + .. note:: + + This command provides no output. + + * Create the ``neutron`` service entity: + + .. 
code-block:: console + + $ openstack service create --name neutron \ + --description "OpenStack Networking" network + + +-------------+----------------------------------+ + | Field | Value | + +-------------+----------------------------------+ + | description | OpenStack Networking | + | enabled | True | + | id | f71529314dab4a4d8eca427e701d209e | + | name | neutron | + | type | network | + +-------------+----------------------------------+ + + .. end + +#. Create the Networking service API endpoints: + + .. code-block:: console + + $ openstack endpoint create --region RegionOne \ + network public http://controller:9696 + + +--------------+----------------------------------+ + | Field | Value | + +--------------+----------------------------------+ + | enabled | True | + | id | 85d80a6d02fc4b7683f611d7fc1493a3 | + | interface | public | + | region | RegionOne | + | region_id | RegionOne | + | service_id | f71529314dab4a4d8eca427e701d209e | + | service_name | neutron | + | service_type | network | + | url | http://controller:9696 | + +--------------+----------------------------------+ + + $ openstack endpoint create --region RegionOne \ + network internal http://controller:9696 + + +--------------+----------------------------------+ + | Field | Value | + +--------------+----------------------------------+ + | enabled | True | + | id | 09753b537ac74422a68d2d791cf3714f | + | interface | internal | + | region | RegionOne | + | region_id | RegionOne | + | service_id | f71529314dab4a4d8eca427e701d209e | + | service_name | neutron | + | service_type | network | + | url | http://controller:9696 | + +--------------+----------------------------------+ + + $ openstack endpoint create --region RegionOne \ + network admin http://controller:9696 + + +--------------+----------------------------------+ + | Field | Value | + +--------------+----------------------------------+ + | enabled | True | + | id | 1ee14289c9374dffb5db92a5c112fc4e | + | interface | admin | + | region | RegionOne | + | region_id | RegionOne | + | service_id | f71529314dab4a4d8eca427e701d209e | + | service_name | neutron | + | service_type | network | + | url | http://controller:9696 | + +--------------+----------------------------------+ + + .. end + +Configure networking options +---------------------------- + +You can deploy the Networking service using one of two architectures +represented by options 1 and 2. + +Option 1 deploys the simplest possible architecture that only supports +attaching instances to provider (external) networks. No self-service (private) +networks, routers, or floating IP addresses. Only the ``admin`` or other +privileged user can manage provider networks. + +Option 2 augments option 1 with layer-3 services that support attaching +instances to self-service networks. The ``demo`` or other unprivileged +user can manage self-service networks including routers that provide +connectivity between self-service and provider networks. Additionally, +floating IP addresses provide connectivity to instances using self-service +networks from external networks such as the Internet. + +Self-service networks typically use overlay networks. Overlay network +protocols such as VXLAN include additional headers that increase overhead +and decrease space available for the payload or user data. Without knowledge +of the virtual network infrastructure, instances attempt to send packets +using the default Ethernet maximum transmission unit (MTU) of 1500 +bytes. 
The Networking service automatically provides the correct MTU value +to instances via DHCP. However, some cloud images do not use DHCP or ignore +the DHCP MTU option and require configuration using metadata or a script. + +.. note:: + + Option 2 also supports attaching instances to provider networks. + +Choose one of the following networking options to configure services +specific to it. Afterwards, return here and proceed to +:ref:`neutron-controller-metadata-agent-ubuntu`. + +.. toctree:: + :maxdepth: 1 + + controller-install-option1-ubuntu.rst + controller-install-option2-ubuntu.rst + +.. _neutron-controller-metadata-agent-ubuntu: + +Configure the metadata agent +---------------------------- + +The metadata agent provides configuration information +such as credentials to instances. + +* Edit the ``/etc/neutron/metadata_agent.ini`` file and complete the following + actions: + + * In the ``[DEFAULT]`` section, configure the metadata host and shared + secret: + + .. path /etc/neutron/metadata_agent.ini + .. code-block:: ini + + [DEFAULT] + # ... + nova_metadata_ip = controller + metadata_proxy_shared_secret = METADATA_SECRET + + .. end + + Replace ``METADATA_SECRET`` with a suitable secret for the metadata proxy. + +Configure the Compute service to use the Networking service +----------------------------------------------------------- + +* Edit the ``/etc/nova/nova.conf`` file and perform the following actions: + + * In the ``[neutron]`` section, configure access parameters, enable the + metadata proxy, and configure the secret: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [neutron] + # ... + url = http://controller:9696 + auth_url = http://controller:35357 + auth_type = password + project_domain_name = default + user_domain_name = default + region_name = RegionOne + project_name = service + username = neutron + password = NEUTRON_PASS + service_metadata_proxy = true + metadata_proxy_shared_secret = METADATA_SECRET + + .. end + + Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` + user in the Identity service. + + Replace ``METADATA_SECRET`` with the secret you chose for the metadata + proxy. + +Finalize installation +--------------------- + + + + +#. Populate the database: + + .. code-block:: console + + # su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ + --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron + + .. end + + .. note:: + + Database population occurs later for Networking because the script + requires complete server and plug-in configuration files. + +#. Restart the Compute API service: + + .. code-block:: console + + # service nova-api restart + + .. end + +#. Restart the Networking services. + + For both networking options: + + .. code-block:: console + + # service neutron-server restart + # service neutron-linuxbridge-agent restart + # service neutron-dhcp-agent restart + # service neutron-metadata-agent restart + + .. end + + For networking option 2, also restart the layer-3 service: + + .. code-block:: console + + # service neutron-l3-agent restart + + .. end + diff --git a/doc/source/install/environment-networking-compute-obs.rst b/doc/source/install/environment-networking-compute-obs.rst new file mode 100644 index 00000000000..ac66fc32d7c --- /dev/null +++ b/doc/source/install/environment-networking-compute-obs.rst @@ -0,0 +1,48 @@ +Compute node +~~~~~~~~~~~~ + +Configure network interfaces +---------------------------- + +#. 
Configure the first interface as the management interface: + + IP address: 10.0.0.31 + + Network mask: 255.255.255.0 (or /24) + + Default gateway: 10.0.0.1 + + .. note:: + + Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on. + +#. The provider interface uses a special configuration without an IP + address assigned to it. Configure the second interface as the provider + interface: + + Replace ``INTERFACE_NAME`` with the actual interface name. For example, + *eth1* or *ens224*. + + + + +* Edit the ``/etc/sysconfig/network/ifcfg-INTERFACE_NAME`` file to + contain the following: + + .. path /etc/sysconfig/network/ifcfg-INTERFACE_NAME + .. code-block:: bash + + STARTMODE='auto' + BOOTPROTO='static' + + .. end + + +#. Reboot the system to activate the changes. + +Configure name resolution +------------------------- + +#. Set the hostname of the node to ``compute1``. + +#. .. include:: shared/edit_hosts_file.txt diff --git a/doc/source/install/environment-networking-compute-rdo.rst b/doc/source/install/environment-networking-compute-rdo.rst new file mode 100644 index 00000000000..a7c1e6cbc2c --- /dev/null +++ b/doc/source/install/environment-networking-compute-rdo.rst @@ -0,0 +1,52 @@ +Compute node +~~~~~~~~~~~~ + +Configure network interfaces +---------------------------- + +#. Configure the first interface as the management interface: + + IP address: 10.0.0.31 + + Network mask: 255.255.255.0 (or /24) + + Default gateway: 10.0.0.1 + + .. note:: + + Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on. + +#. The provider interface uses a special configuration without an IP + address assigned to it. Configure the second interface as the provider + interface: + + Replace ``INTERFACE_NAME`` with the actual interface name. For example, + *eth1* or *ens224*. + + + +* Edit the ``/etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME`` file + to contain the following: + + Do not change the ``HWADDR`` and ``UUID`` keys. + + .. path /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME + .. code-block:: bash + + DEVICE=INTERFACE_NAME + TYPE=Ethernet + ONBOOT="yes" + BOOTPROTO="none" + + .. end + + + +#. Reboot the system to activate the changes. + +Configure name resolution +------------------------- + +#. Set the hostname of the node to ``compute1``. + +#. .. include:: shared/edit_hosts_file.txt diff --git a/doc/source/install/environment-networking-compute-ubuntu.rst b/doc/source/install/environment-networking-compute-ubuntu.rst new file mode 100644 index 00000000000..dd01fd5d629 --- /dev/null +++ b/doc/source/install/environment-networking-compute-ubuntu.rst @@ -0,0 +1,50 @@ +Compute node +~~~~~~~~~~~~ + +Configure network interfaces +---------------------------- + +#. Configure the first interface as the management interface: + + IP address: 10.0.0.31 + + Network mask: 255.255.255.0 (or /24) + + Default gateway: 10.0.0.1 + + .. note:: + + Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on. + +#. The provider interface uses a special configuration without an IP + address assigned to it. Configure the second interface as the provider + interface: + + Replace ``INTERFACE_NAME`` with the actual interface name. For example, + *eth1* or *ens224*. + + +* Edit the ``/etc/network/interfaces`` file to contain the following: + + .. path /etc/network/interfaces + .. code-block:: bash + + # The provider network interface + auto INTERFACE_NAME + iface INTERFACE_NAME inet manual + up ip link set dev $IFACE up + down ip link set dev $IFACE down + + .. end + + + + +#. 
Reboot the system to activate the changes. + +Configure name resolution +------------------------- + +#. Set the hostname of the node to ``compute1``. + +#. .. include:: shared/edit_hosts_file.txt diff --git a/doc/source/install/environment-networking-controller-obs.rst b/doc/source/install/environment-networking-controller-obs.rst new file mode 100644 index 00000000000..3118969ec18 --- /dev/null +++ b/doc/source/install/environment-networking-controller-obs.rst @@ -0,0 +1,44 @@ +Controller node +~~~~~~~~~~~~~~~ + +Configure network interfaces +---------------------------- + +#. Configure the first interface as the management interface: + + IP address: 10.0.0.11 + + Network mask: 255.255.255.0 (or /24) + + Default gateway: 10.0.0.1 + +#. The provider interface uses a special configuration without an IP + address assigned to it. Configure the second interface as the provider + interface: + + Replace ``INTERFACE_NAME`` with the actual interface name. For example, + *eth1* or *ens224*. + + + + +* Edit the ``/etc/sysconfig/network/ifcfg-INTERFACE_NAME`` file to + contain the following: + + .. path /etc/sysconfig/network/ifcfg-INTERFACE_NAME + .. code-block:: ini + + STARTMODE='auto' + BOOTPROTO='static' + + .. end + + +#. Reboot the system to activate the changes. + +Configure name resolution +------------------------- + +#. Set the hostname of the node to ``controller``. + +#. .. include:: shared/edit_hosts_file.txt diff --git a/doc/source/install/environment-networking-controller-rdo.rst b/doc/source/install/environment-networking-controller-rdo.rst new file mode 100644 index 00000000000..5f05c9adfdb --- /dev/null +++ b/doc/source/install/environment-networking-controller-rdo.rst @@ -0,0 +1,48 @@ +Controller node +~~~~~~~~~~~~~~~ + +Configure network interfaces +---------------------------- + +#. Configure the first interface as the management interface: + + IP address: 10.0.0.11 + + Network mask: 255.255.255.0 (or /24) + + Default gateway: 10.0.0.1 + +#. The provider interface uses a special configuration without an IP + address assigned to it. Configure the second interface as the provider + interface: + + Replace ``INTERFACE_NAME`` with the actual interface name. For example, + *eth1* or *ens224*. + + + +* Edit the ``/etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME`` file + to contain the following: + + Do not change the ``HWADDR`` and ``UUID`` keys. + + .. path /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME + .. code-block:: ini + + DEVICE=INTERFACE_NAME + TYPE=Ethernet + ONBOOT="yes" + BOOTPROTO="none" + + .. end + + + +#. Reboot the system to activate the changes. + +Configure name resolution +------------------------- + +#. Set the hostname of the node to ``controller``. + +#. .. include:: shared/edit_hosts_file.txt diff --git a/doc/source/install/environment-networking-controller-ubuntu.rst b/doc/source/install/environment-networking-controller-ubuntu.rst new file mode 100644 index 00000000000..6b578edc962 --- /dev/null +++ b/doc/source/install/environment-networking-controller-ubuntu.rst @@ -0,0 +1,46 @@ +Controller node +~~~~~~~~~~~~~~~ + +Configure network interfaces +---------------------------- + +#. Configure the first interface as the management interface: + + IP address: 10.0.0.11 + + Network mask: 255.255.255.0 (or /24) + + Default gateway: 10.0.0.1 + +#. The provider interface uses a special configuration without an IP + address assigned to it. Configure the second interface as the provider + interface: + + Replace ``INTERFACE_NAME`` with the actual interface name. 
For example, + *eth1* or *ens224*. + + +* Edit the ``/etc/network/interfaces`` file to contain the following: + + .. path /etc/network/interfaces + .. code-block:: bash + + # The provider network interface + auto INTERFACE_NAME + iface INTERFACE_NAME inet manual + up ip link set dev $IFACE up + down ip link set dev $IFACE down + + .. end + + + + +#. Reboot the system to activate the changes. + +Configure name resolution +------------------------- + +#. Set the hostname of the node to ``controller``. + +#. .. include:: shared/edit_hosts_file.txt diff --git a/doc/source/install/environment-networking-obs.rst b/doc/source/install/environment-networking-obs.rst new file mode 100644 index 00000000000..f2737ea67e4 --- /dev/null +++ b/doc/source/install/environment-networking-obs.rst @@ -0,0 +1,91 @@ +Host networking +~~~~~~~~~~~~~~~ + +After installing the operating system on each node for the architecture +that you choose to deploy, you must configure the network interfaces. We +recommend that you disable any automated network management tools and +manually edit the appropriate configuration files for your distribution. +For more information on how to configure networking on your +distribution, see the `SLES 12 +`__ +or `openSUSE +`__ +documentation. + +All nodes require Internet access for administrative purposes such as package +installation, security updates, Domain Name System (DNS), and +Network Time Protocol (NTP). In most cases, nodes should obtain +Internet access through the management network interface. +To highlight the importance of network separation, the example architectures +use `private address space `__ for the +management network and assume that the physical network infrastructure +provides Internet access via Network Address Translation (NAT) +or other methods. The example architectures use routable IP address space for +the provider (external) network and assume that the physical network +infrastructure provides direct Internet access. + +In the provider networks architecture, all instances attach directly +to the provider network. In the self-service (private) networks architecture, +instances can attach to a self-service or provider network. Self-service +networks can reside entirely within OpenStack or provide some level of external +network access using Network Address Translation (NAT) through +the provider network. + +.. _figure-networklayout: + +.. figure:: figures/networklayout.png + :alt: Network layout + +The example architectures assume use of the following networks: + +* Management on 10.0.0.0/24 with gateway 10.0.0.1 + + This network requires a gateway to provide Internet access to all + nodes for administrative purposes such as package installation, + security updates, Domain Name System (DNS), and + Network Time Protocol (NTP). + +* Provider on 203.0.113.0/24 with gateway 203.0.113.1 + + This network requires a gateway to provide Internet access to + instances in your OpenStack environment. + +You can modify these ranges and gateways to work with your particular +network infrastructure. + +Network interface names vary by distribution. Traditionally, +interfaces use ``eth`` followed by a sequential number. To cover all +variations, this guide refers to the first interface as the +interface with the lowest number and the second interface as the +interface with the highest number. + +Unless you intend to use the exact configuration provided in this +example architecture, you must modify the networks in this procedure to +match your environment. 
Each node must resolve the other nodes by +name in addition to IP address. For example, the ``controller`` name must +resolve to ``10.0.0.11``, the IP address of the management interface on +the controller node. + +.. warning:: + + Reconfiguring network interfaces will interrupt network + connectivity. We recommend using a local terminal session for these + procedures. + +.. note:: + + Your distribution enables a restrictive firewall by + default. During the installation process, certain steps will fail + unless you alter or disable the firewall. For more information + about securing your environment, refer to the `OpenStack Security + Guide `_. + + + +.. toctree:: + :maxdepth: 1 + + environment-networking-controller-obs.rst + environment-networking-compute-obs.rst + environment-networking-storage-cinder.rst + environment-networking-verify-obs.rst diff --git a/doc/source/install/environment-networking-rdo.rst b/doc/source/install/environment-networking-rdo.rst new file mode 100644 index 00000000000..420417367bc --- /dev/null +++ b/doc/source/install/environment-networking-rdo.rst @@ -0,0 +1,88 @@ +Host networking +~~~~~~~~~~~~~~~ + +After installing the operating system on each node for the architecture +that you choose to deploy, you must configure the network interfaces. We +recommend that you disable any automated network management tools and +manually edit the appropriate configuration files for your distribution. +For more information on how to configure networking on your +distribution, see the `documentation +`__ . + +All nodes require Internet access for administrative purposes such as package +installation, security updates, Domain Name System (DNS), and +Network Time Protocol (NTP). In most cases, nodes should obtain +Internet access through the management network interface. +To highlight the importance of network separation, the example architectures +use `private address space `__ for the +management network and assume that the physical network infrastructure +provides Internet access via Network Address Translation (NAT) +or other methods. The example architectures use routable IP address space for +the provider (external) network and assume that the physical network +infrastructure provides direct Internet access. + +In the provider networks architecture, all instances attach directly +to the provider network. In the self-service (private) networks architecture, +instances can attach to a self-service or provider network. Self-service +networks can reside entirely within OpenStack or provide some level of external +network access using Network Address Translation (NAT) through +the provider network. + +.. _figure-networklayout: + +.. figure:: figures/networklayout.png + :alt: Network layout + +The example architectures assume use of the following networks: + +* Management on 10.0.0.0/24 with gateway 10.0.0.1 + + This network requires a gateway to provide Internet access to all + nodes for administrative purposes such as package installation, + security updates, Domain Name System (DNS), and + Network Time Protocol (NTP). + +* Provider on 203.0.113.0/24 with gateway 203.0.113.1 + + This network requires a gateway to provide Internet access to + instances in your OpenStack environment. + +You can modify these ranges and gateways to work with your particular +network infrastructure. + +Network interface names vary by distribution. Traditionally, +interfaces use ``eth`` followed by a sequential number. 
To cover all +variations, this guide refers to the first interface as the +interface with the lowest number and the second interface as the +interface with the highest number. + +Unless you intend to use the exact configuration provided in this +example architecture, you must modify the networks in this procedure to +match your environment. Each node must resolve the other nodes by +name in addition to IP address. For example, the ``controller`` name must +resolve to ``10.0.0.11``, the IP address of the management interface on +the controller node. + +.. warning:: + + Reconfiguring network interfaces will interrupt network + connectivity. We recommend using a local terminal session for these + procedures. + +.. note:: + + Your distribution enables a restrictive firewall by + default. During the installation process, certain steps will fail + unless you alter or disable the firewall. For more information + about securing your environment, refer to the `OpenStack Security + Guide `_. + + + +.. toctree:: + :maxdepth: 1 + + environment-networking-controller-rdo.rst + environment-networking-compute-rdo.rst + environment-networking-storage-cinder.rst + environment-networking-verify-rdo.rst diff --git a/doc/source/install/environment-networking-storage-cinder.rst b/doc/source/install/environment-networking-storage-cinder.rst new file mode 100644 index 00000000000..09c40d0d824 --- /dev/null +++ b/doc/source/install/environment-networking-storage-cinder.rst @@ -0,0 +1,25 @@ +Block storage node (Optional) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +If you want to deploy the Block Storage service, configure one +additional storage node. + +Configure network interfaces +---------------------------- + +* Configure the management interface: + + * IP address: ``10.0.0.41`` + + * Network mask: ``255.255.255.0`` (or ``/24``) + + * Default gateway: ``10.0.0.1`` + +Configure name resolution +------------------------- + +#. Set the hostname of the node to ``block1``. + +#. .. include:: shared/edit_hosts_file.txt + +#. Reboot the system to activate the changes. diff --git a/doc/source/install/environment-networking-ubuntu.rst b/doc/source/install/environment-networking-ubuntu.rst new file mode 100644 index 00000000000..3ecb912fbd1 --- /dev/null +++ b/doc/source/install/environment-networking-ubuntu.rst @@ -0,0 +1,85 @@ +Host networking +~~~~~~~~~~~~~~~ + +After installing the operating system on each node for the architecture +that you choose to deploy, you must configure the network interfaces. We +recommend that you disable any automated network management tools and +manually edit the appropriate configuration files for your distribution. +For more information on how to configure networking on your +distribution, see the `documentation `_. + +All nodes require Internet access for administrative purposes such as package +installation, security updates, Domain Name System (DNS), and +Network Time Protocol (NTP). In most cases, nodes should obtain +Internet access through the management network interface. +To highlight the importance of network separation, the example architectures +use `private address space `__ for the +management network and assume that the physical network infrastructure +provides Internet access via Network Address Translation (NAT) +or other methods. The example architectures use routable IP address space for +the provider (external) network and assume that the physical network +infrastructure provides direct Internet access. 
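Although the exact configuration files differ by distribution, the static
management addressing used throughout this guide usually amounts to a short
interface definition on each node. The following is only an illustrative
sketch in the classic ``/etc/network/interfaces`` (ifupdown) format, using the
controller's example address; ``MGMT_INTERFACE_NAME`` is a placeholder
introduced here rather than a value defined elsewhere in this guide.

.. code-block:: bash

   # Management network: static addressing (example values from this guide)
   auto MGMT_INTERFACE_NAME
   iface MGMT_INTERFACE_NAME inet static
       address 10.0.0.11
       netmask 255.255.255.0
       gateway 10.0.0.1

.. end

On the compute and optional storage nodes, only the ``address`` value changes,
for example ``10.0.0.31`` or ``10.0.0.41``.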
+ +In the provider networks architecture, all instances attach directly +to the provider network. In the self-service (private) networks architecture, +instances can attach to a self-service or provider network. Self-service +networks can reside entirely within OpenStack or provide some level of external +network access using Network Address Translation (NAT) through +the provider network. + +.. _figure-networklayout: + +.. figure:: figures/networklayout.png + :alt: Network layout + +The example architectures assume use of the following networks: + +* Management on 10.0.0.0/24 with gateway 10.0.0.1 + + This network requires a gateway to provide Internet access to all + nodes for administrative purposes such as package installation, + security updates, Domain Name System (DNS), and + Network Time Protocol (NTP). + +* Provider on 203.0.113.0/24 with gateway 203.0.113.1 + + This network requires a gateway to provide Internet access to + instances in your OpenStack environment. + +You can modify these ranges and gateways to work with your particular +network infrastructure. + +Network interface names vary by distribution. Traditionally, +interfaces use ``eth`` followed by a sequential number. To cover all +variations, this guide refers to the first interface as the +interface with the lowest number and the second interface as the +interface with the highest number. + +Unless you intend to use the exact configuration provided in this +example architecture, you must modify the networks in this procedure to +match your environment. Each node must resolve the other nodes by +name in addition to IP address. For example, the ``controller`` name must +resolve to ``10.0.0.11``, the IP address of the management interface on +the controller node. + +.. warning:: + + Reconfiguring network interfaces will interrupt network + connectivity. We recommend using a local terminal session for these + procedures. + +.. note:: + + Your distribution does not enable a restrictive firewall by + default. For more information about securing your environment, + refer to the `OpenStack Security Guide + `_. + + +.. toctree:: + :maxdepth: 1 + + environment-networking-controller-ubuntu.rst + environment-networking-compute-ubuntu.rst + environment-networking-storage-cinder.rst + environment-networking-verify-ubuntu.rst diff --git a/doc/source/install/environment-networking-verify-obs.rst b/doc/source/install/environment-networking-verify-obs.rst new file mode 100644 index 00000000000..1ac9c896fd0 --- /dev/null +++ b/doc/source/install/environment-networking-verify-obs.rst @@ -0,0 +1,89 @@ +Verify connectivity +------------------- + +We recommend that you verify network connectivity to the Internet and +among the nodes before proceeding further. + +#. From the *controller* node, test access to the Internet: + + .. code-block:: console + + # ping -c 4 openstack.org + + PING openstack.org (174.143.194.225) 56(84) bytes of data. + 64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms + 64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms + 64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms + 64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms + + --- openstack.org ping statistics --- + 4 packets transmitted, 4 received, 0% packet loss, time 3022ms + rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms + + .. end + +#. From the *controller* node, test access to the management interface on the + *compute* node: + + .. 
code-block:: console + + # ping -c 4 compute1 + + PING compute1 (10.0.0.31) 56(84) bytes of data. + 64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms + 64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms + 64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms + 64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms + + --- compute1 ping statistics --- + 4 packets transmitted, 4 received, 0% packet loss, time 3000ms + rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms + + .. end + +#. From the *compute* node, test access to the Internet: + + .. code-block:: console + + # ping -c 4 openstack.org + + PING openstack.org (174.143.194.225) 56(84) bytes of data. + 64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms + 64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms + 64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms + 64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms + + --- openstack.org ping statistics --- + 4 packets transmitted, 4 received, 0% packet loss, time 3022ms + rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms + + .. end + +#. From the *compute* node, test access to the management interface on the + *controller* node: + + .. code-block:: console + + # ping -c 4 controller + + PING controller (10.0.0.11) 56(84) bytes of data. + 64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms + 64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms + 64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms + 64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms + + --- controller ping statistics --- + 4 packets transmitted, 4 received, 0% packet loss, time 3000ms + rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms + + .. end + +.. note:: + + Your distribution enables a restrictive firewall by + default. During the installation process, certain steps will fail + unless you alter or disable the firewall. For more information + about securing your environment, refer to the `OpenStack Security + Guide `_. + + diff --git a/doc/source/install/environment-networking-verify-rdo.rst b/doc/source/install/environment-networking-verify-rdo.rst new file mode 100644 index 00000000000..1ac9c896fd0 --- /dev/null +++ b/doc/source/install/environment-networking-verify-rdo.rst @@ -0,0 +1,89 @@ +Verify connectivity +------------------- + +We recommend that you verify network connectivity to the Internet and +among the nodes before proceeding further. + +#. From the *controller* node, test access to the Internet: + + .. code-block:: console + + # ping -c 4 openstack.org + + PING openstack.org (174.143.194.225) 56(84) bytes of data. + 64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms + 64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms + 64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms + 64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms + + --- openstack.org ping statistics --- + 4 packets transmitted, 4 received, 0% packet loss, time 3022ms + rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms + + .. end + +#. From the *controller* node, test access to the management interface on the + *compute* node: + + .. code-block:: console + + # ping -c 4 compute1 + + PING compute1 (10.0.0.31) 56(84) bytes of data. 
+ 64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms + 64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms + 64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms + 64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms + + --- compute1 ping statistics --- + 4 packets transmitted, 4 received, 0% packet loss, time 3000ms + rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms + + .. end + +#. From the *compute* node, test access to the Internet: + + .. code-block:: console + + # ping -c 4 openstack.org + + PING openstack.org (174.143.194.225) 56(84) bytes of data. + 64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms + 64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms + 64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms + 64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms + + --- openstack.org ping statistics --- + 4 packets transmitted, 4 received, 0% packet loss, time 3022ms + rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms + + .. end + +#. From the *compute* node, test access to the management interface on the + *controller* node: + + .. code-block:: console + + # ping -c 4 controller + + PING controller (10.0.0.11) 56(84) bytes of data. + 64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms + 64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms + 64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms + 64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms + + --- controller ping statistics --- + 4 packets transmitted, 4 received, 0% packet loss, time 3000ms + rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms + + .. end + +.. note:: + + Your distribution enables a restrictive firewall by + default. During the installation process, certain steps will fail + unless you alter or disable the firewall. For more information + about securing your environment, refer to the `OpenStack Security + Guide `_. + + diff --git a/doc/source/install/environment-networking-verify-ubuntu.rst b/doc/source/install/environment-networking-verify-ubuntu.rst new file mode 100644 index 00000000000..dd0e16d5034 --- /dev/null +++ b/doc/source/install/environment-networking-verify-ubuntu.rst @@ -0,0 +1,87 @@ +Verify connectivity +------------------- + +We recommend that you verify network connectivity to the Internet and +among the nodes before proceeding further. + +#. From the *controller* node, test access to the Internet: + + .. code-block:: console + + # ping -c 4 openstack.org + + PING openstack.org (174.143.194.225) 56(84) bytes of data. + 64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms + 64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms + 64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms + 64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms + + --- openstack.org ping statistics --- + 4 packets transmitted, 4 received, 0% packet loss, time 3022ms + rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms + + .. end + +#. From the *controller* node, test access to the management interface on the + *compute* node: + + .. code-block:: console + + # ping -c 4 compute1 + + PING compute1 (10.0.0.31) 56(84) bytes of data. 
+ 64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms + 64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms + 64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms + 64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms + + --- compute1 ping statistics --- + 4 packets transmitted, 4 received, 0% packet loss, time 3000ms + rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms + + .. end + +#. From the *compute* node, test access to the Internet: + + .. code-block:: console + + # ping -c 4 openstack.org + + PING openstack.org (174.143.194.225) 56(84) bytes of data. + 64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms + 64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms + 64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms + 64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms + + --- openstack.org ping statistics --- + 4 packets transmitted, 4 received, 0% packet loss, time 3022ms + rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms + + .. end + +#. From the *compute* node, test access to the management interface on the + *controller* node: + + .. code-block:: console + + # ping -c 4 controller + + PING controller (10.0.0.11) 56(84) bytes of data. + 64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms + 64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms + 64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms + 64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms + + --- controller ping statistics --- + 4 packets transmitted, 4 received, 0% packet loss, time 3000ms + rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms + + .. end + +.. note:: + + Your distribution does not enable a restrictive firewall by + default. For more information about securing your environment, + refer to the `OpenStack Security Guide + `_. 
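The host name tests above rely on the name resolution configured earlier
through the shared ``edit_hosts_file.txt`` snippet. As a rough sketch, the
corresponding ``/etc/hosts`` entries on every node map the example management
addresses used in this guide to the node names:

.. code-block:: none

   # controller
   10.0.0.11       controller

   # compute1
   10.0.0.31       compute1

   # block1 (optional Block Storage node)
   10.0.0.41       block1

.. end

If a ping by name fails while a ping of the corresponding IP address succeeds,
these entries are the first thing to check.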
+ diff --git a/doc/source/install/figures/hwreqs.graffle b/doc/source/install/figures/hwreqs.graffle new file mode 100644 index 00000000000..522bb03cba5 Binary files /dev/null and b/doc/source/install/figures/hwreqs.graffle differ diff --git a/doc/source/install/figures/hwreqs.png b/doc/source/install/figures/hwreqs.png new file mode 100644 index 00000000000..5c7e2d0e8bf Binary files /dev/null and b/doc/source/install/figures/hwreqs.png differ diff --git a/doc/source/install/figures/hwreqs.svg b/doc/source/install/figures/hwreqs.svg new file mode 100644 index 00000000000..0b58db752fe --- /dev/null +++ b/doc/source/install/figures/hwreqs.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.5.2 2016-04-26 14:57:28 +0000Canvas 1Layer 1Controller NodeCompute Node 11-2CPUBlock Storage Node 1Object Storage Node 1Object Storage Node 2Hardware RequirementsCore componentOptional component8 GBRAM100 GBStorage2-4+CPU8+ GBRAM100+ GBStorage1-2CPU4 GBRAM2NIC2NIC1NIC1NIC4+ GBRAM1-2CPU1NIC100+ GBStorage100+ GBStorage/dev/sdb/dev/sdb/dev/sdc/dev/sdb/dev/sdc1-2CPU4+ GBRAM100+ GBStorage/dev/sdc diff --git a/doc/source/install/figures/network1-services.graffle b/doc/source/install/figures/network1-services.graffle new file mode 100644 index 00000000000..3e5bea9c616 Binary files /dev/null and b/doc/source/install/figures/network1-services.graffle differ diff --git a/doc/source/install/figures/network1-services.png b/doc/source/install/figures/network1-services.png new file mode 100644 index 00000000000..e83bf5bbf6d Binary files /dev/null and b/doc/source/install/figures/network1-services.png differ diff --git a/doc/source/install/figures/network1-services.svg b/doc/source/install/figures/network1-services.svg new file mode 100644 index 00000000000..153385b16a6 --- /dev/null +++ b/doc/source/install/figures/network1-services.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.5.2 2016-04-26 14:56:09 +0000Canvas 1Layer 1 Controller NodeSQL DatabaseServiceBlock Storage Nodes Object Storage NodesNetworking Option 1: Provider NetworksService LayoutCore componentOptional componentMessage QueueIdentityImage ServiceComputeManagementNetworkingManagementBlock StorageManagementNetwork Time ServiceOrchestrationTelemetryManagementObject StorageProxy ServiceNetworkingDHCP Agent Compute NodesKVM HypervisorComputeNetworkingLinux Bridge AgentTelemetryAgentTelemetryAgent(s)NetworkingML2 Plug-inObject StorageAccount ServiceObject StorageContainer ServiceObject StorageObject ServiceBlock StorageVolume ServiceTelemetryAgentiSCSI TargetServiceNetworkingLinux Bridge AgentLinux NetworkUtilitiesLinux NetworkUtilitiesShared File SystemServiceShared File SystemManagementNoSQL DatabaseServiceNetworkingMetadata AgentDatabaseManagement diff --git a/doc/source/install/figures/network2-services.graffle b/doc/source/install/figures/network2-services.graffle new file mode 100644 index 00000000000..3642050ea6f Binary files /dev/null and b/doc/source/install/figures/network2-services.graffle differ diff --git a/doc/source/install/figures/network2-services.png b/doc/source/install/figures/network2-services.png new file mode 100644 index 00000000000..72b1fc915bf Binary files /dev/null and b/doc/source/install/figures/network2-services.png differ diff --git a/doc/source/install/figures/network2-services.svg b/doc/source/install/figures/network2-services.svg new file mode 100644 index 00000000000..4ff05a0c904 --- /dev/null +++ b/doc/source/install/figures/network2-services.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.5.2 2016-04-26 14:55:33 
+0000Canvas 1Layer 1 Controller NodeSQL DatabaseServiceBlock Storage Nodes Object Storage NodesNetworking Option 2: Self-Service NetworksService LayoutCore componentOptional componentMessage QueueIdentityImage ServiceComputeManagementNetworkingManagementBlock StorageManagementNetwork Time ServiceOrchestrationDatabaseManagementObject StorageProxy ServiceNetworkingL3 AgentNetworkingDHCP Agent Compute NodesKVM HypervisorComputeNetworkingLinux Bridge AgentTelemetryAgentTelemetryAgent(s)NetworkingML2 Plug-inObject StorageAccount ServiceObject StorageContainer ServiceObject StorageObject ServiceBlock StorageVolume ServiceShared File SystemServiceiSCSI TargetServiceNetworkingMetadata AgentNetworkingLinux Bridge AgentLinux NetworkUtilitiesLinux NetworkUtilitiesShared File SystemManagementTelemetryAgentNoSQL DatabaseServiceTelemetryManagement diff --git a/doc/source/install/figures/networklayout.graffle b/doc/source/install/figures/networklayout.graffle new file mode 100644 index 00000000000..3db6b2025b1 Binary files /dev/null and b/doc/source/install/figures/networklayout.graffle differ diff --git a/doc/source/install/figures/networklayout.png b/doc/source/install/figures/networklayout.png new file mode 100644 index 00000000000..21f6c4a4eb5 Binary files /dev/null and b/doc/source/install/figures/networklayout.png differ diff --git a/doc/source/install/figures/networklayout.svg b/doc/source/install/figures/networklayout.svg new file mode 100644 index 00000000000..29315fc8cf4 --- /dev/null +++ b/doc/source/install/figures/networklayout.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.5.2 2016-04-26 15:08:44 +0000Canvas 1Layer 1 Controller Node 1 Compute Node 1Network LayoutManagement network10.0.0.0/24Provider network203.0.113.0/24 Block Storage Node 1 Object Storage Node 2 Object Storage Node 1Interface 2(unnumbered)Interface 2(unnumbered)InternetInterface 110.0.0.11/24Interface 110.0.0.31/24Interface 110.0.0.41/24Interface 110.0.0.52/24Interface 110.0.0.51/24NATCore componentOptional component diff --git a/doc/source/install/index.rst b/doc/source/install/index.rst new file mode 100644 index 00000000000..3fe68af7c25 --- /dev/null +++ b/doc/source/install/index.rst @@ -0,0 +1,23 @@ +.. _networking: + +================== +Networking service +================== + +.. toctree:: + :maxdepth: 1 + + overview.rst + common/get-started-networking.rst + concepts.rst + install-obs.rst + install-rdo.rst + install-ubuntu.rst + +This chapter explains how to install and configure the Networking +service (neutron) using the :ref:`provider networks ` or +:ref:`self-service networks ` option. + +For more information about the Networking service including virtual +networking components, layout, and traffic flows, see the +:doc:`OpenStack Networking Guide `. diff --git a/doc/source/install/install-obs.rst b/doc/source/install/install-obs.rst new file mode 100644 index 00000000000..e97f3f23649 --- /dev/null +++ b/doc/source/install/install-obs.rst @@ -0,0 +1,13 @@ +.. _networking-obs: + +============================================================ +Install and configure for openSUSE and SUSE Linux Enterprise +============================================================ + +.. toctree:: + :maxdepth: 2 + + environment-networking-obs.rst + controller-install-obs.rst + compute-install-obs.rst + verify.rst diff --git a/doc/source/install/install-rdo.rst b/doc/source/install/install-rdo.rst new file mode 100644 index 00000000000..99f83ddf692 --- /dev/null +++ b/doc/source/install/install-rdo.rst @@ -0,0 +1,13 @@ +.. 
_networking-rdo: + +============================================================= +Install and configure for Red Hat Enterprise Linux and CentOS +============================================================= + +.. toctree:: + :maxdepth: 2 + + environment-networking-rdo.rst + controller-install-rdo.rst + compute-install-rdo.rst + verify.rst diff --git a/doc/source/install/install-ubuntu.rst b/doc/source/install/install-ubuntu.rst new file mode 100644 index 00000000000..cfd0e31cefe --- /dev/null +++ b/doc/source/install/install-ubuntu.rst @@ -0,0 +1,13 @@ +.. _networking-ubuntu: + +================================ +Install and configure for Ubuntu +================================ + +.. toctree:: + :maxdepth: 2 + + environment-networking-ubuntu.rst + controller-install-ubuntu.rst + compute-install-ubuntu.rst + verify.rst diff --git a/doc/source/install/overview.rst b/doc/source/install/overview.rst new file mode 100644 index 00000000000..51ca7f0670f --- /dev/null +++ b/doc/source/install/overview.rst @@ -0,0 +1,179 @@ +======== +Overview +======== + +The OpenStack project is an open source cloud computing platform that +supports all types of cloud environments. The project aims for simple +implementation, massive scalability, and a rich set of features. Cloud +computing experts from around the world contribute to the project. + +OpenStack provides an Infrastructure-as-a-Service (IaaS) solution +through a variety of complementary services. Each service offers an +Application Programming Interface (API) that facilitates this +integration. + +This guide covers step-by-step deployment of the major OpenStack +services using a functional example architecture suitable for +new users of OpenStack with sufficient Linux experience. This guide is not +intended to be used for production system installations, but to create a +minimum proof-of-concept for the purpose of learning about OpenStack. + +After becoming familiar with basic installation, configuration, operation, +and troubleshooting of these OpenStack services, you should consider the +following steps toward deployment using a production architecture: + +* Determine and implement the necessary core and optional services to + meet performance and redundancy requirements. + +* Increase security using methods such as firewalls, encryption, and + service policies. + +* Implement a deployment tool such as Ansible, Chef, Puppet, or Salt + to automate deployment and management of the production environment. + +.. _overview-example-architectures: + +Example architecture +~~~~~~~~~~~~~~~~~~~~ + +The example architecture requires at least two nodes (hosts) to launch a basic +virtual machine (VM) or instance. Optional services such as Block Storage and +Object Storage require additional nodes. + +.. important:: + + The example architecture used in this guide is a minimum configuration, + and is not intended for production system installations. It is designed to + provide a minimum proof-of-concept for the purpose of learning about + OpenStack. For information on creating architectures for specific + use cases, or how to determine which architecture is required, see the + `Architecture Design Guide `_. + +This example architecture differs from a minimal production architecture as +follows: + +* Networking agents reside on the controller node instead of one or more + dedicated network nodes. + +* Overlay (tunnel) traffic for self-service networks traverses the management + network instead of a dedicated network. 
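A practical consequence of the second difference above is visible in the
self-service (option 2) agent configuration: the overlay endpoint address on
each node is simply its management IP, because no dedicated tunnel network
exists. The snippet below is a hedged sketch of that idea only, with
``10.0.0.31`` standing in for a compute node's management address; the
networking option sections contain the authoritative settings.

.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
.. code-block:: ini

   [vxlan]
   enable_vxlan = true
   # Overlay (VXLAN) traffic uses the management address in this example
   # architecture because there is no dedicated tunnel network.
   local_ip = 10.0.0.31

.. end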
+
+For more information on production architectures, see the
+`Architecture Design Guide `_,
+`OpenStack Operations Guide `_, and
+:doc:`OpenStack Networking Guide `.
+
+.. _figure-hwreqs:
+
+.. figure:: figures/hwreqs.png
+   :alt: Hardware requirements
+
+   **Hardware requirements**
+
+Controller
+----------
+
+The controller node runs the Identity service, Image service, the management
+portions of Compute and Networking, various Networking agents, and the
+Dashboard. It also includes supporting services such as an SQL database,
+message queue, and Network Time Protocol (NTP).
+
+Optionally, the controller node runs portions of the Block Storage, Object
+Storage, Orchestration, and Telemetry services.
+
+The controller node requires a minimum of two network interfaces.
+
+Compute
+-------
+
+The compute node runs the hypervisor portion of Compute that
+operates instances. By default, Compute uses the kernel-based VM (KVM)
+hypervisor. The compute node also runs a Networking service
+agent that connects instances to virtual networks
+and provides firewalling services to instances via security groups.
+
+You can deploy more than one compute node. Each node requires a minimum
+of two network interfaces.
+
+Block Storage
+-------------
+
+The optional Block Storage node contains the disks that the Block
+Storage and Shared File System services provision for instances.
+
+For simplicity, service traffic between compute nodes and this node
+uses the management network. Production environments should implement
+a separate storage network to increase performance and security.
+
+You can deploy more than one block storage node. Each node requires a
+minimum of one network interface.
+
+Object Storage
+--------------
+
+The optional Object Storage nodes contain the disks that the
+Object Storage service uses for storing accounts, containers, and
+objects.
+
+For simplicity, service traffic between compute nodes and this node
+uses the management network. Production environments should implement
+a separate storage network to increase performance and security.
+
+This service requires two nodes. Each node requires a minimum of one
+network interface. You can deploy more than two object storage nodes.
+
+Networking
+~~~~~~~~~~
+
+Choose one of the following virtual networking options.
+
+.. _network1:
+
+Networking Option 1: Provider networks
+--------------------------------------
+
+The provider networks option deploys the OpenStack Networking service
+in the simplest way possible with primarily layer-2 (bridging/switching)
+services and VLAN segmentation of networks. Essentially, it bridges virtual
+networks to physical networks and relies on physical network infrastructure
+for layer-3 (routing) services. Additionally, a DHCP service provides IP
+address information to instances.
+
+Use the verification section for the networking option that you chose to
+deploy.
+
+.. toctree::
+
+   verify-option1.rst
+   verify-option2.rst
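Whichever networking option you deploy, a useful final sanity check is to list
the Networking agents from the controller node and confirm that each one
reports as alive. The commands below assume an ``admin-openrc`` credentials
file, as is conventional in the OpenStack installation guides; the exact set
of agents depends on the option chosen.

.. code-block:: console

   $ . admin-openrc
   $ openstack network agent list

.. end

With option 1 you should typically see Linux bridge agents on the controller
and compute nodes along with the DHCP and metadata agents; option 2 adds the
L3 agent.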