Merge "[install] Fix various minor problems"
This commit is contained in:
commit aa2079cab1
@ -14,13 +14,13 @@ Configure Cinder to use Telemetry

Edit the ``/etc/cinder/cinder.conf`` file and complete the
following actions:

* In the ``[DEFAULT]`` section, configure notifications:

  .. code-block:: ini

     [DEFAULT]
     ...
     notification_driver = messagingv2

Finalize installation
---------------------
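As a quick sanity check after editing the file (not part of the installation steps themselves), the resulting ini file can be parsed with Python's standard ``configparser``; this sketch uses an inline sample instead of the real ``/etc/cinder/cinder.conf``:

```python
# Sanity-check sketch: parse a cinder.conf-style snippet and confirm
# the notification driver is set as the guide requires.
import configparser

sample = """
[DEFAULT]
notification_driver = messagingv2
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)
print(cfg.get("DEFAULT", "notification_driver"))  # messagingv2
```

Pointing ``read_string`` at the file contents on the controller node would verify the option the same way.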
@ -7,45 +7,45 @@ these steps on the controller node.

Configure the Image service to use Telemetry
--------------------------------------------

* Edit the ``/etc/glance/glance-api.conf`` and
  ``/etc/glance/glance-registry.conf`` files and
  complete the following actions:

  * In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections,
    configure notifications and RabbitMQ message broker access:

    .. code-block:: ini

       [DEFAULT]
       ...
       notification_driver = messagingv2
       rpc_backend = rabbit

       [oslo_messaging_rabbit]
       ...
       rabbit_host = controller
       rabbit_userid = openstack
       rabbit_password = RABBIT_PASS

    Replace ``RABBIT_PASS`` with the password you chose for
    the ``openstack`` account in ``RabbitMQ``.

Finalize installation
---------------------

.. only:: obs or rdo

   * Restart the Image service:

     .. code-block:: console

        # systemctl restart openstack-glance-api.service openstack-glance-registry.service

.. only:: ubuntu

   * Restart the Image service:

     .. code-block:: console

        # service glance-registry restart
        # service glance-api restart
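For orientation, the three ``rabbit_*`` options above are equivalent to the single broker URL form used elsewhere in this guide (for example in the Object Storage ``[filter:ceilometer]`` section). A minimal illustration, keeping the guide's ``RABBIT_PASS`` placeholder:

```python
# Illustrative only: assemble the rabbit:// transport URL that the
# host/userid/password options describe. 5672 is RabbitMQ's default port.
rabbit_host = "controller"
rabbit_userid = "openstack"
rabbit_password = "RABBIT_PASS"  # placeholder, as in the guide

url = "rabbit://{}:{}@{}:5672/".format(rabbit_userid, rabbit_password, rabbit_host)
print(url)  # rabbit://openstack:RABBIT_PASS@controller:5672/
```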
@ -297,53 +297,53 @@ Finalize installation

.. only:: obs

   * Start the Telemetry services and configure them to start when the
     system boots:

     .. code-block:: console

        # systemctl enable openstack-ceilometer-api.service \
          openstack-ceilometer-agent-notification.service \
          openstack-ceilometer-agent-central.service \
          openstack-ceilometer-collector.service \
          openstack-ceilometer-alarm-evaluator.service \
          openstack-ceilometer-alarm-notifier.service
        # systemctl start openstack-ceilometer-api.service \
          openstack-ceilometer-agent-notification.service \
          openstack-ceilometer-agent-central.service \
          openstack-ceilometer-collector.service \
          openstack-ceilometer-alarm-evaluator.service \
          openstack-ceilometer-alarm-notifier.service

.. only:: rdo

   * Start the Telemetry services and configure them to start when the
     system boots:

     .. code-block:: console

        # systemctl enable openstack-ceilometer-api.service \
          openstack-ceilometer-notification.service \
          openstack-ceilometer-central.service \
          openstack-ceilometer-collector.service \
          openstack-ceilometer-alarm-evaluator.service \
          openstack-ceilometer-alarm-notifier.service
        # systemctl start openstack-ceilometer-api.service \
          openstack-ceilometer-notification.service \
          openstack-ceilometer-central.service \
          openstack-ceilometer-collector.service \
          openstack-ceilometer-alarm-evaluator.service \
          openstack-ceilometer-alarm-notifier.service

.. only:: ubuntu

   * Restart the Telemetry services:

     .. code-block:: console

        # service ceilometer-agent-central restart
        # service ceilometer-agent-notification restart
        # service ceilometer-api restart
        # service ceilometer-collector restart
        # service ceilometer-alarm-evaluator restart
        # service ceilometer-alarm-notifier restart
@ -121,8 +121,7 @@ Finalize installation

.. only:: obs

   #. Start the agent and configure it to start when the system boots:

      .. code-block:: console
@ -131,8 +130,7 @@ Finalize installation

.. only:: rdo

   #. Start the agent and configure it to start when the system boots:

      .. code-block:: console
@ -73,59 +73,59 @@ Configure Object Storage to use Telemetry

Perform these steps on the controller and any other nodes that
run the Object Storage proxy service.

* Edit the ``/etc/swift/proxy-server.conf`` file
  and complete the following actions:

  * In the ``[filter:keystoneauth]`` section, add the
    ``ResellerAdmin`` role:

    .. code-block:: ini

       [filter:keystoneauth]
       ...
       operator_roles = admin, user, ResellerAdmin

  * In the ``[pipeline:main]`` section, add ``ceilometer``:

    .. code-block:: ini

       [pipeline:main]
       pipeline = catch_errors gatekeeper healthcheck proxy-logging cache
         container_sync bulk ratelimit authtoken keystoneauth container-quotas
         account-quotas slo dlo versioned_writes proxy-logging ceilometer
         proxy-server

  * In the ``[filter:ceilometer]`` section, configure notifications:

    .. code-block:: ini

       [filter:ceilometer]
       paste.filter_factory = ceilometermiddleware.swift:filter_factory
       ...
       control_exchange = swift
       url = rabbit://openstack:RABBIT_PASS@controller:5672/
       driver = messagingv2
       topic = notifications
       log_level = WARN

    Replace ``RABBIT_PASS`` with the password you chose for the
    ``openstack`` account in ``RabbitMQ``.

Finalize installation
---------------------

.. only:: rdo or obs

   * Restart the Object Storage proxy service:

     .. code-block:: console

        # systemctl restart openstack-swift-proxy.service

.. only:: ubuntu

   * Restart the Object Storage proxy service:

     .. code-block:: console

        # service swift-proxy restart
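Order matters in a paste pipeline: middleware runs left to right, and the application must come last. This small sketch (illustrative only, using the pipeline string from the section above) shows the ordering constraints you can eyeball after editing:

```python
# Illustrative check of the [pipeline:main] ordering: the ceilometer
# filter sits after authentication middleware, and proxy-server (the
# application, not a filter) is the final entry.
pipeline = ("catch_errors gatekeeper healthcheck proxy-logging cache "
            "container_sync bulk ratelimit authtoken keystoneauth container-quotas "
            "account-quotas slo dlo versioned_writes proxy-logging ceilometer "
            "proxy-server").split()

assert pipeline[-1] == "proxy-server"
assert pipeline.index("ceilometer") > pipeline.index("keystoneauth")
print("pipeline order OK")
```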
@ -314,23 +314,23 @@ Finalize installation

.. only:: obs

   * Start the Block Storage volume service including its dependencies
     and configure them to start when the system boots:

     .. code-block:: console

        # systemctl enable openstack-cinder-volume.service tgtd.service
        # systemctl start openstack-cinder-volume.service tgtd.service

.. only:: rdo

   * Start the Block Storage volume service including its dependencies
     and configure them to start when the system boots:

     .. code-block:: console

        # systemctl enable openstack-cinder-volume.service target.service
        # systemctl start openstack-cinder-volume.service target.service

.. only:: ubuntu
@ -1,207 +0,0 @@

=====================
Install and configure
=====================

This section describes how to install and configure the dashboard
on the controller node.

The dashboard relies on functional core services including
Identity, Image service, Compute, and either Networking (neutron)
or legacy networking (nova-network). Environments with
stand-alone services such as Object Storage cannot use the
dashboard. For more information, see the
`developer documentation <http://docs.openstack.org/developer/
horizon/topics/deployment.html>`__.

This section assumes proper installation, configuration, and
operation of the Identity service using the Apache HTTP server and
Memcached as described in the ":doc:`keystone-install`" section.

To install the dashboard components
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. only:: obs

   * Install the packages:

     .. code-block:: console

        # zypper install openstack-dashboard apache2-mod_wsgi \
          memcached python-python-memcached

.. only:: rdo

   * Install the packages:

     .. code-block:: console

        # yum install openstack-dashboard httpd mod_wsgi \
          memcached python-memcached

.. only:: ubuntu

   * Install the packages:

     .. code-block:: console

        # apt-get install openstack-dashboard

.. only:: debian

   * Install the packages:

     .. code-block:: console

        # apt-get install openstack-dashboard-apache

   * Respond to prompts for web server configuration.

     .. note::

        The automatic configuration process generates a self-signed
        SSL certificate. Consider obtaining an official certificate
        for production environments.

     .. note::

        There are two modes of installation. One using ``/horizon`` as the URL,
        keeping your default vhost and only adding an Alias directive: this is
        the default. The other mode will remove the default Apache vhost and install
        the dashboard on the webroot. It was the only available option
        before the Liberty release. If you prefer to set the Apache configuration
        manually, install the ``openstack-dashboard`` package instead of
        ``openstack-dashboard-apache``.

.. only:: ubuntu

   .. note::

      Ubuntu installs the ``openstack-dashboard-ubuntu-theme``
      package as a dependency. Some users reported issues with
      this theme in previous releases. If you encounter issues,
      remove this package to restore the original OpenStack theme.

To configure the dashboard
~~~~~~~~~~~~~~~~~~~~~~~~~~

.. only:: obs

   * Configure the web server:

     .. code-block:: console

        # cp /etc/apache2/conf.d/openstack-dashboard.conf.sample \
          /etc/apache2/conf.d/openstack-dashboard.conf
        # a2enmod rewrite;a2enmod ssl;a2enmod wsgi

.. only:: obs

   * Edit the
     ``/srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py``
     file and complete the following actions:

.. only:: rdo

   * Edit the
     ``/etc/openstack-dashboard/local_settings``
     file and complete the following actions:

.. only:: ubuntu or debian

   * Edit the
     ``/etc/openstack-dashboard/local_settings.py``
     file and complete the following actions:

* Configure the dashboard to use OpenStack services on the
  ``controller`` node:

  .. code-block:: ini

     OPENSTACK_HOST = "controller"

* Allow all hosts to access the dashboard:

  .. code-block:: ini

     ALLOWED_HOSTS = ['*', ]

* Configure the ``memcached`` session storage service:

  .. code-block:: ini

     CACHES = {
         'default': {
             'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
             'LOCATION': '127.0.0.1:11211',
         }
     }

  .. note::

     Comment out any other session storage configuration.

  .. only:: obs

     .. note::

        By default, SLES and openSUSE use an SQL database for session
        storage. For simplicity, we recommend changing the configuration
        to use ``memcached`` for session storage.

* Configure ``user`` as the default role for
  users that you create via the dashboard:

  .. code-block:: ini

     OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

* Optionally, configure the time zone:

  .. code-block:: ini

     TIME_ZONE = "TIME_ZONE"

  Replace ``TIME_ZONE`` with an appropriate time zone identifier.
  For more information, see the `list of time zones
  <http://en.wikipedia.org/wiki/List_of_tz_database_time_zones>`__.

To finalize installation
~~~~~~~~~~~~~~~~~~~~~~~~

.. only:: ubuntu or debian

   Reload the web server configuration:

   .. code-block:: console

      # service apache2 reload

.. only:: obs

   Start the web server and session storage service and configure
   them to start when the system boots:

   .. code-block:: console

      # systemctl enable apache2.service memcached.service
      # systemctl restart apache2.service memcached.service

   .. note::

      The ``systemctl restart`` command starts the Apache HTTP service if
      not currently running.

.. only:: rdo

   Start the web server and session storage service and configure
   them to start when the system boots:

   .. code-block:: console

      # systemctl enable httpd.service memcached.service
      # systemctl restart httpd.service memcached.service

   .. note::

      The ``systemctl restart`` command starts the Apache HTTP service if
      not currently running.
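The ``CACHES`` setting shown above is an ordinary Python dictionary evaluated by the dashboard's settings file, which makes it easy to inspect. A minimal sketch of what the memcached backend configuration looks like when parsed, using the exact values from the guide:

```python
# Illustrative: the CACHES setting is plain Python. This pulls apart the
# LOCATION value to show it is a host:port pair pointing at the local
# memcached instance on its default port.
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}

host, port = CACHES['default']['LOCATION'].split(':')
print(host, port)  # 127.0.0.1 11211
```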
@ -12,36 +12,32 @@ service because most distributions support it. If you prefer to

implement a different message queue service, consult the documentation
associated with it.

Install and configure components
--------------------------------

1. Install the package:

   .. only:: ubuntu or debian

      .. code-block:: console

         # apt-get install rabbitmq-server

   .. only:: rdo

      .. code-block:: console

         # yum install rabbitmq-server

   .. only:: obs

      .. code-block:: console

         # zypper install rabbitmq-server

.. only:: rdo or obs

   2. Start the message queue service and configure it to start when the
      system boots:

      .. code-block:: console

@ -71,7 +67,7 @@ Configure the message queue service

   * Start the message queue service again.

   3. Add the ``openstack`` user:

      .. code-block:: console

@ -80,7 +76,7 @@ Configure the message queue service

   Replace ``RABBIT_PASS`` with a suitable password.

   4. Permit configuration, write, and read access for the
      ``openstack`` user:

      .. code-block:: console

@ -90,7 +86,7 @@ Configure the message queue service

.. only:: ubuntu or debian

   2. Add the ``openstack`` user:

      .. code-block:: console

@ -99,7 +95,7 @@ Configure the message queue service

   Replace ``RABBIT_PASS`` with a suitable password.

   3. Permit configuration, write, and read access for the
      ``openstack`` user:

      .. code-block:: console
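The permission step above hands ``rabbitmqctl set_permissions`` three regular expressions (configure, write, and read patterns matched against resource names). The ``".*"`` pattern used in typical OpenStack setups matches every resource name, which this sketch illustrates; the example resource names are made up for illustration:

```python
# Illustrative: ".*" as a permission pattern matches any resource name,
# granting the openstack user full configure/write/read access.
import re

pattern = re.compile(".*")
# Hypothetical resource names, for illustration only.
for resource in ["nova", "cinder-volume", "notifications.info"]:
    assert pattern.match(resource)
print("'.*' matches all resource names")
```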
@ -7,13 +7,13 @@ additional storage node.

Configure network interfaces
----------------------------

* Configure the management interface:

  * IP address: ``10.0.0.41``

  * Network mask: ``255.255.255.0`` (or ``/24``)

  * Default gateway: ``10.0.0.1``

Configure name resolution
-------------------------
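The dotted netmask and the prefix-length notation above describe the same network, which Python's standard ``ipaddress`` module can confirm, along with the fact that the storage node's address falls inside it:

```python
# Illustrative: 255.255.255.0 and /24 are the same prefix, and the
# management address 10.0.0.41 belongs to that network.
import ipaddress

net = ipaddress.ip_network("10.0.0.0/255.255.255.0")
assert net.prefixlen == 24
assert ipaddress.ip_address("10.0.0.41") in net
print(net)  # 10.0.0.0/24
```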
@ -10,13 +10,13 @@ First node

Configure network interfaces
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

* Configure the management interface:

  * IP address: ``10.0.0.51``

  * Network mask: ``255.255.255.0`` (or ``/24``)

  * Default gateway: ``10.0.0.1``

Configure name resolution
^^^^^^^^^^^^^^^^^^^^^^^^^

@ -33,13 +33,13 @@ Second node

Configure network interfaces
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

* Configure the management interface:

  * IP address: ``10.0.0.52``

  * Network mask: ``255.255.255.0`` (or ``/24``)

  * Default gateway: ``10.0.0.1``

Configure name resolution
^^^^^^^^^^^^^^^^^^^^^^^^^
@ -12,12 +12,12 @@ MongoDB.

The installation of the NoSQL database server is only necessary when
installing the Telemetry service as documented in :ref:`install_ceilometer`.

Install and configure components
--------------------------------

.. only:: obs

   1. Enable the Open Build Service repositories for MongoDB based on
      your openSUSE or SLES version:

      On openSUSE:

@ -52,7 +52,7 @@ Install and configure the database server

.. only:: rdo

   1. Install the MongoDB packages:

      .. code-block:: console

@ -60,7 +60,7 @@ Install and configure the database server

.. only:: ubuntu

   1. Install the MongoDB packages:

      .. code-block:: console

@ -91,18 +91,8 @@ Install and configure the database server

      You can also disable journaling. For more information, see the
      `MongoDB manual <http://docs.mongodb.org/manual/>`__.

.. only:: rdo

   .. The use of mongod, and not mongodb, in the below screen is intentional.

   2. Edit the ``/etc/mongod.conf`` file and complete the following
      actions:

@ -126,14 +116,6 @@ Install and configure the database server

      You can also disable journaling. For more information, see the
      `MongoDB manual <http://docs.mongodb.org/manual/>`__.

.. only:: ubuntu

   2. Edit the ``/etc/mongodb.conf`` file and complete the following

@ -156,14 +138,39 @@ Install and configure the database server

         smallfiles = true

      You can also disable journaling. For more information, see the
      `MongoDB manual <http://docs.mongodb.org/manual/>`__.

Finalize installation
---------------------

.. only:: ubuntu

   * If you change the journaling configuration, stop the MongoDB
     service, remove the initial journal files, and start the service:

     .. code-block:: console

        # service mongodb stop
        # rm /var/lib/mongodb/journal/prealloc.*
        # service mongodb start

.. only:: rdo

   * Start the MongoDB service and configure it to start when
     the system boots:

     .. code-block:: console

        # systemctl enable mongod.service
        # systemctl start mongod.service

.. only:: obs

   * Start the MongoDB service and configure it to start when
     the system boots:

     .. code-block:: console

        # systemctl enable mongodb.service
        # systemctl start mongodb.service
@ -3,60 +3,58 @@

Controller node
~~~~~~~~~~~~~~~

Perform these steps on the controller node.

Install and configure components
--------------------------------

1. Install the packages:

   .. only:: ubuntu or debian

      .. code-block:: console

         # apt-get install chrony

   .. only:: rdo

      .. code-block:: console

         # yum install chrony

   .. only:: obs

      On openSUSE:

      .. code-block:: console

         # zypper addrepo -f obs://network:time/openSUSE_13.2 network_time
         # zypper refresh
         # zypper install chrony

      On SLES:

      .. code-block:: console

         # zypper addrepo -f obs://network:time/SLE_12 network_time
         # zypper refresh
         # zypper install chrony

      .. note::

         The packages are signed by GPG key ``17280DDF``. You should
         verify the fingerprint of the imported GPG key before using it.

         .. code-block:: console

            Key Name: network OBS Project <network@build.opensuse.org>
            Key Fingerprint: 0080689B E757A876 CB7DC269 62EB1A09 17280DDF
            Key Created: Tue 24 Sep 2013 04:04:12 PM UTC
            Key Expires: Thu 03 Dec 2015 04:04:12 PM UTC

.. only:: ubuntu or debian

   2. Edit the ``/etc/chrony/chrony.conf`` file and add, change, or remove the
      following keys as necessary for your environment:

      .. code-block:: ini

@ -67,7 +65,13 @@ as those provided by your organization.

      accurate (lower stratum) NTP server. The configuration supports multiple
      ``server`` keys.

      .. note::

         By default, the controller node synchronizes the time via a pool of
         public servers. However, you can optionally configure alternative
         servers such as those provided by your organization.

   3. Restart the NTP service:

      .. code-block:: console

@ -75,7 +79,7 @@ as those provided by your organization.

.. only:: rdo or obs

   2. Edit the ``/etc/chrony.conf`` file and add, change, or remove the
      following keys as necessary for your environment:

      .. code-block:: ini

@ -86,7 +90,13 @@ as those provided by your organization.

      accurate (lower stratum) NTP server. The configuration supports multiple
      ``server`` keys.

      .. note::

         By default, the controller node synchronizes the time via a pool of
         public servers. However, you can optionally configure alternative
         servers such as those provided by your organization.

   3. To enable other nodes to connect to the chrony daemon on the controller,
      add the following key to the ``/etc/chrony.conf`` file:

      .. code-block:: ini

@ -95,7 +105,7 @@ as those provided by your organization.

      If necessary, replace ``10.0.0.0/24`` with a description of your subnet.

   4. Start the NTP service and configure it to start when the system boots:

      .. code-block:: console
@ -3,66 +3,66 @@
Other nodes
~~~~~~~~~~~

Other nodes reference the controller node for clock synchronization.
Perform these steps on all other nodes.

Install and configure components
--------------------------------

Install the packages:
1. Install the packages:

.. only:: ubuntu or debian

.. code-block:: console

# apt-get install chrony

.. only:: rdo

.. code-block:: console

# yum install chrony

.. only:: obs

On openSUSE:

.. code-block:: console

# zypper addrepo -f obs://network:time/openSUSE_13.2 network_time
# zypper refresh
# zypper install chrony

On SLES:

.. code-block:: console

# zypper addrepo -f obs://network:time/SLE_12 network_time
# zypper refresh
# zypper install chrony

.. note::

The packages are signed by GPG key ``17280DDF``. You should
verify the fingerprint of the imported GPG key before using it.
.. only:: ubuntu or debian

.. code-block:: console

Key Name: network OBS Project <network@build.opensuse.org>
Key Fingerprint: 0080689B E757A876 CB7DC269 62EB1A09 17280DDF
Key Created: Tue 24 Sep 2013 04:04:12 PM UTC
Key Expires: Thu 03 Dec 2015 04:04:12 PM UTC
# apt-get install chrony

Configure the network and compute nodes to reference the controller
node.
.. only:: rdo

.. code-block:: console

# yum install chrony

.. only:: obs

On openSUSE:

.. code-block:: console

# zypper addrepo -f obs://network:time/openSUSE_13.2 network_time
# zypper refresh
# zypper install chrony

On SLES:

.. code-block:: console

# zypper addrepo -f obs://network:time/SLE_12 network_time
# zypper refresh
# zypper install chrony

.. note::

The packages are signed by GPG key ``17280DDF``. You should
verify the fingerprint of the imported GPG key before using it.

.. code-block:: console

Key Name: network OBS Project <network@build.opensuse.org>
Key Fingerprint: 0080689B E757A876 CB7DC269 62EB1A09 17280DDF
Key Created: Tue 24 Sep 2013 04:04:12 PM UTC
Key Expires: Thu 03 Dec 2015 04:04:12 PM UTC

.. only:: ubuntu or debian

#. Edit the ``/etc/chrony/chrony.conf`` file and comment out or remove all
2. Edit the ``/etc/chrony/chrony.conf`` file and comment out or remove all
but one ``server`` key. Change it to reference the controller node:

.. code-block:: ini

server controller iburst

#. Restart the NTP service:
3. Restart the NTP service:

.. code-block:: console

@ -70,14 +70,14 @@ node.

.. only:: rdo or obs

#. Edit the ``/etc/chrony.conf`` file and comment out or remove all but one
2. Edit the ``/etc/chrony.conf`` file and comment out or remove all but one
``server`` key. Change it to reference the controller node:

.. code-block:: ini

server controller iburst

#. Start the NTP service and configure it to start when the system boots:
3. Start the NTP service and configure it to start when the system boots:

.. code-block:: console

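Once the service is running on a node, synchronization against the controller
can be spot-checked with chrony's standard client utility (the exact output
columns vary by version):

.. code-block:: console

   $ chronyc sources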
@ -53,21 +53,21 @@ these procedures on all nodes.
Enable the OpenStack repository
-------------------------------

On CentOS, the *extras* repository provides the RPM that enables the
OpenStack repository. CentOS includes the *extras* repository by
default, so you can simply install the package to enable the OpenStack
repository.
* On CentOS, the *extras* repository provides the RPM that enables the
OpenStack repository. CentOS includes the *extras* repository by
default, so you can simply install the package to enable the OpenStack
repository.

.. code-block:: console
.. code-block:: console

# yum install centos-release-openstack-liberty
# yum install centos-release-openstack-liberty

On RHEL, download and install the RDO repository RPM to enable the
OpenStack repository.
* On RHEL, download and install the RDO repository RPM to enable the
OpenStack repository.

.. code-block:: console
.. code-block:: console

# yum install https://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm
# yum install https://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm

.. only:: obs

@ -122,7 +122,6 @@ these procedures on all nodes.
`Debian website <http://backports.debian.org/Instructions/>`_,
which suggests the following steps:


#. On all nodes, add the Debian 8 (Jessie) backport repository to
the source list:

@ -7,17 +7,11 @@ guide use MariaDB or MySQL depending on the distribution. OpenStack
services also support other SQL databases including
`PostgreSQL <http://www.postgresql.org/>`__.

Install and configure the database server
-----------------------------------------
Install and configure components
--------------------------------

#. Install the packages:

.. only:: rdo or ubuntu or obs

.. note::

The Python MySQL library is compatible with MariaDB.

.. only:: ubuntu

.. code-block:: console

@ -34,7 +28,7 @@ Install and configure the database server

.. code-block:: console

# yum install mariadb mariadb-server python2-PyMySQL
# yum install mariadb mariadb-server MySQL-python

.. only:: obs

@ -116,9 +110,8 @@ Install and configure the database server
collation-server = utf8_general_ci
character-set-server = utf8


To finalize installation
------------------------
Finalize installation
---------------------

.. only:: ubuntu or debian

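Only the two character-set keys survive in the hunk above; in context they sit
under a ``[mysqld]`` section of the database server configuration, e.g. (the
section header is inferred, not shown in the diff):

.. code-block:: ini

   [mysqld]
   ...
   collation-server = utf8_general_ci
   character-set-server = utf8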
@ -374,15 +374,15 @@ Install and configure components

.. only:: obs or rdo

#. Start the Image service services and configure them to start when
the system boots:
* Start the Image service services and configure them to start when
the system boots:

.. code-block:: console
.. code-block:: console

# systemctl enable openstack-glance-api.service \
openstack-glance-registry.service
# systemctl start openstack-glance-api.service \
openstack-glance-registry.service
# systemctl enable openstack-glance-api.service \
openstack-glance-registry.service
# systemctl start openstack-glance-api.service \
openstack-glance-registry.service

.. only:: ubuntu

@ -489,19 +489,19 @@ Finalize installation

.. only:: obs or rdo

#. Start the Orchestration services and configure them to start
when the system boots:
* Start the Orchestration services and configure them to start
when the system boots:

.. code-block:: console
.. code-block:: console

# systemctl enable openstack-heat-api.service \
openstack-heat-api-cfn.service openstack-heat-engine.service
# systemctl start openstack-heat-api.service \
openstack-heat-api-cfn.service openstack-heat-engine.service
# systemctl enable openstack-heat-api.service \
openstack-heat-api-cfn.service openstack-heat-engine.service
# systemctl start openstack-heat-api.service \
openstack-heat-api-cfn.service openstack-heat-engine.service

.. only:: ubuntu or debian

#. Restart the Orchestration services:
1. Restart the Orchestration services:

.. code-block:: console

279
doc/install-guide/source/horizon-install.rst
Normal file
@ -0,0 +1,279 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the dashboard
on the controller node.

The dashboard relies on functional core services including
Identity, Image service, Compute, and either Networking (neutron)
or legacy networking (nova-network). Environments with
stand-alone services such as Object Storage cannot use the
dashboard. For more information, see the
`developer documentation <http://docs.openstack.org/developer/
horizon/topics/deployment.html>`__.

.. note::

This section assumes proper installation, configuration, and operation
of the Identity service using the Apache HTTP server and Memcached
service as described in the :ref:`Install and configure the Identity
service <keystone-install>` section.

Install and configure components
--------------------------------

.. only:: obs or rdo or ubuntu

.. include:: shared/note_configuration_vary_by_distribution.rst

.. only:: obs

1. Install the packages:

.. code-block:: console

# zypper install openstack-dashboard

.. only:: rdo

1. Install the packages:

.. code-block:: console

# yum install openstack-dashboard

.. only:: ubuntu

1. Install the packages:

.. code-block:: console

# apt-get install openstack-dashboard

.. only:: debian

1. Install the packages:

.. code-block:: console

# apt-get install openstack-dashboard-apache

2. Respond to the prompts for web server configuration.

.. note::

The automatic configuration process generates a self-signed
SSL certificate. Consider obtaining an official certificate
for production environments.

.. note::

There are two installation modes. The default keeps your existing
vhost and serves the dashboard at the ``/horizon`` URL by adding
only an Alias directive. The other mode removes the default Apache
vhost and installs the dashboard on the webroot; it was the only
available option before the Liberty release. If you prefer to set
the Apache configuration manually, install the
``openstack-dashboard`` package instead of
``openstack-dashboard-apache``.

.. only:: obs

2. Configure the web server:

.. code-block:: console

# cp /etc/apache2/conf.d/openstack-dashboard.conf.sample \
/etc/apache2/conf.d/openstack-dashboard.conf
# a2enmod rewrite;a2enmod ssl;a2enmod wsgi

3. Edit the
``/srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py``
file and complete the following actions:

* Configure the dashboard to use OpenStack services on the
``controller`` node:

.. code-block:: ini

OPENSTACK_HOST = "controller"

* Allow all hosts to access the dashboard:

.. code-block:: ini

ALLOWED_HOSTS = ['*', ]

* Configure the ``memcached`` session storage service:

.. code-block:: ini

CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '127.0.0.1:11211',
}
}

.. note::

Comment out any other session storage configuration.

* Configure ``user`` as the default role for
users that you create via the dashboard:

.. code-block:: ini

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

* Optionally, configure the time zone:

.. code-block:: ini

TIME_ZONE = "TIME_ZONE"

Replace ``TIME_ZONE`` with an appropriate time zone identifier.
For more information, see the `list of time zones
<http://en.wikipedia.org/wiki/List_of_tz_database_time_zones>`__.

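The ``LOCATION`` value above is a plain ``host:port`` string. A minimal sketch
of how a memcached client splits it (a hypothetical helper, not part of
Horizon or this guide):

```python
# Hypothetical helper, not part of Horizon: split the CACHES LOCATION
# value into the (host, port) pair a memcached client connects to.
def parse_location(location: str):
    host, _, port = location.rpartition(":")
    return host, int(port)

print(parse_location("127.0.0.1:11211"))
```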
.. only:: rdo

2. Edit the
``/etc/openstack-dashboard/local_settings``
file and complete the following actions:

* Configure the dashboard to use OpenStack services on the
``controller`` node:

.. code-block:: ini

OPENSTACK_HOST = "controller"

* Allow all hosts to access the dashboard:

.. code-block:: ini

ALLOWED_HOSTS = ['*', ]

* Configure the ``memcached`` session storage service:

.. code-block:: ini

CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '127.0.0.1:11211',
}
}

.. note::

Comment out any other session storage configuration.

* Configure ``user`` as the default role for
users that you create via the dashboard:

.. code-block:: ini

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

* Optionally, configure the time zone:

.. code-block:: ini

TIME_ZONE = "TIME_ZONE"

Replace ``TIME_ZONE`` with an appropriate time zone identifier.
For more information, see the `list of time zones
<http://en.wikipedia.org/wiki/List_of_tz_database_time_zones>`__.

.. only:: ubuntu

2. Edit the
``/etc/openstack-dashboard/local_settings.py``
file and complete the following actions:

* Configure the dashboard to use OpenStack services on the
``controller`` node:

.. code-block:: ini

OPENSTACK_HOST = "controller"

* Allow all hosts to access the dashboard:

.. code-block:: ini

ALLOWED_HOSTS = ['*', ]

* Configure the ``memcached`` session storage service:

.. code-block:: ini

CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '127.0.0.1:11211',
}
}

.. note::

Comment out any other session storage configuration.

* Configure ``user`` as the default role for
users that you create via the dashboard:

.. code-block:: ini

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

* Optionally, configure the time zone:

.. code-block:: ini

TIME_ZONE = "TIME_ZONE"

Replace ``TIME_ZONE`` with an appropriate time zone identifier.
For more information, see the `list of time zones
<http://en.wikipedia.org/wiki/List_of_tz_database_time_zones>`__.

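A value for ``TIME_ZONE`` can be sanity-checked against the tz database
before editing the settings file. This is a hypothetical helper, not part of
the guide, and assumes Python 3.9+ for ``zoneinfo`` plus system tz data:

```python
# Hypothetical helper: check a TIME_ZONE value against the tz database
# before writing it into the dashboard settings file.
from zoneinfo import ZoneInfo


def is_valid_time_zone(name: str) -> bool:
    try:
        ZoneInfo(name)  # raises if the key is unknown
        return True
    except Exception:
        return False


print(is_valid_time_zone("UTC"))
print(is_valid_time_zone("Not/AZone"))
```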
Finalize installation
---------------------

.. only:: ubuntu or debian

* Reload the web server configuration:

.. code-block:: console

# service apache2 reload

.. only:: obs

* Start the web server and session storage service and configure
them to start when the system boots:

.. code-block:: console

# systemctl enable apache2.service memcached.service
# systemctl restart apache2.service memcached.service

.. note::

The ``systemctl restart`` command starts the Apache HTTP service if
not currently running.

.. only:: rdo

* Start the web server and session storage service and configure
them to start when the system boots:

.. code-block:: console

# systemctl enable httpd.service memcached.service
# systemctl restart httpd.service memcached.service

.. note::

The ``systemctl restart`` command starts the Apache HTTP service if
not currently running.
@ -1,8 +1,7 @@
================
Verify operation
================
~~~~~~~~~~~~~~~~

This section describes how to verify operation of the dashboard.
Verify operation of the dashboard.

.. only:: obs or debian

@ -18,6 +18,6 @@ This example deployment uses an Apache web server.

.. toctree::

dashboard-install.rst
dashboard-verify.rst
dashboard-next-step.rst
horizon-install.rst
horizon-verify.rst
horizon-next-steps.rst

@ -1,3 +1,5 @@
.. _keystone-install:

Install and configure
~~~~~~~~~~~~~~~~~~~~~

@ -12,37 +12,37 @@ To learn about the template language, see `the Template Guide
in the `Heat developer documentation
<http://docs.openstack.org/developer/heat/index.html>`__.

#. Create the ``demo-template.yml`` file with the following content:
* Create the ``demo-template.yml`` file with the following content:

.. code-block:: yaml
.. code-block:: yaml

heat_template_version: 2015-10-15
description: Launch a basic instance using the ``m1.tiny`` flavor and one network.
heat_template_version: 2015-10-15
description: Launch a basic instance using the ``m1.tiny`` flavor and one network.

parameters:
ImageID:
type: string
description: Image to use for the instance.
NetID:
type: string
description: Network ID to use for the instance.
parameters:
ImageID:
type: string
description: Image to use for the instance.
NetID:
type: string
description: Network ID to use for the instance.

resources:
server:
type: OS::Nova::Server
properties:
image: { get_param: ImageID }
flavor: m1.tiny
networks:
- network: { get_param: NetID }
resources:
server:
type: OS::Nova::Server
properties:
image: { get_param: ImageID }
flavor: m1.tiny
networks:
- network: { get_param: NetID }

outputs:
instance_name:
description: Name of the instance.
value: { get_attr: [ server, name ] }
instance_ip:
description: IP address of the instance.
value: { get_attr: [ server, first_address ] }
outputs:
instance_name:
description: Name of the instance.
value: { get_attr: [ server, name ] }
instance_ip:
description: IP address of the instance.
value: { get_attr: [ server, first_address ] }

Create a stack
--------------

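The creation command itself falls in the elided part of this file; with the
Liberty-era ``heat`` client it would look roughly like the following (the
net-id lookup and parameter values are illustrative, not taken from the diff):

.. code-block:: console

   $ NET_ID=$(neutron net-list | awk '/ public / { print $2 }')
   $ heat stack-create -f demo-template.yml \
     -P "ImageID=cirros;NetID=$NET_ID" stack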
@ -79,29 +79,29 @@ includes firewall rules that deny remote access to instances. For Linux
images such as CirrOS, we recommend allowing at least ICMP (ping) and
secure shell (SSH).

#. Add rules to the ``default`` security group:
* Add rules to the ``default`` security group:

* Permit :term:`ICMP` (ping):
* Permit :term:`ICMP` (ping):

.. code-block:: console
.. code-block:: console

$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

* Permit secure shell (SSH) access:
* Permit secure shell (SSH) access:

.. code-block:: console
.. code-block:: console

$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

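The rules just added can be reviewed with the same client (a standard
novaclient subcommand of that release):

.. code-block:: console

   $ nova secgroup-list-rules default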
Launch an instance
------------------

@ -10,44 +10,44 @@ The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances including VXLAN tunnels for private
networks and handles security groups.

#. Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
complete the following actions:
* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
complete the following actions:

* In the ``[linux_bridge]`` section, map the public virtual network to the
public physical network interface:
* In the ``[linux_bridge]`` section, map the public virtual network to the
public physical network interface:

.. code-block:: ini
.. code-block:: ini

[linux_bridge]
physical_interface_mappings = public:PUBLIC_INTERFACE_NAME
[linux_bridge]
physical_interface_mappings = public:PUBLIC_INTERFACE_NAME

Replace ``PUBLIC_INTERFACE_NAME`` with the name of the underlying physical
public network interface.
Replace ``PUBLIC_INTERFACE_NAME`` with the name of the underlying physical
public network interface.

* In the ``[vxlan]`` section, disable VXLAN overlay networks:
* In the ``[vxlan]`` section, disable VXLAN overlay networks:

.. code-block:: ini
.. code-block:: ini

[vxlan]
enable_vxlan = False
[vxlan]
enable_vxlan = False

* In the ``[agent]`` section, enable ARP spoofing protection:
* In the ``[agent]`` section, enable ARP spoofing protection:

.. code-block:: ini
.. code-block:: ini

[agent]
...
prevent_arp_spoofing = True
[agent]
...
prevent_arp_spoofing = True

* In the ``[securitygroup]`` section, enable security groups and
configure the Linux bridge :term:`iptables` firewall driver:
* In the ``[securitygroup]`` section, enable security groups and
configure the Linux bridge :term:`iptables` firewall driver:

.. code-block:: ini
.. code-block:: ini

[securitygroup]
...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[securitygroup]
...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Return to
:ref:`Networking compute node configuration <neutron-compute-compute>`.

@ -10,52 +10,52 @@ The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances including VXLAN tunnels for private
networks and handles security groups.

#. Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
complete the following actions:
* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
complete the following actions:

* In the ``[linux_bridge]`` section, map the public virtual network to the
public physical network interface:
* In the ``[linux_bridge]`` section, map the public virtual network to the
public physical network interface:

.. code-block:: ini
.. code-block:: ini

[linux_bridge]
physical_interface_mappings = public:PUBLIC_INTERFACE_NAME
[linux_bridge]
physical_interface_mappings = public:PUBLIC_INTERFACE_NAME

Replace ``PUBLIC_INTERFACE_NAME`` with the name of the underlying physical
public network interface.
Replace ``PUBLIC_INTERFACE_NAME`` with the name of the underlying physical
public network interface.

* In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the
IP address of the physical network interface that handles overlay
networks, and enable layer-2 population:
* In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the
IP address of the physical network interface that handles overlay
networks, and enable layer-2 population:

.. code-block:: ini
.. code-block:: ini

[vxlan]
enable_vxlan = True
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = True
[vxlan]
enable_vxlan = True
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = True

Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
underlying physical network interface that handles overlay networks. The
example architecture uses the management interface.
Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
underlying physical network interface that handles overlay networks. The
example architecture uses the management interface.

* In the ``[agent]`` section, enable ARP spoofing protection:
* In the ``[agent]`` section, enable ARP spoofing protection:

.. code-block:: ini
.. code-block:: ini

[agent]
...
prevent_arp_spoofing = True
[agent]
...
prevent_arp_spoofing = True

* In the ``[securitygroup]`` section, enable security groups and
configure the Linux bridge :term:`iptables` firewall driver:
* In the ``[securitygroup]`` section, enable security groups and
configure the Linux bridge :term:`iptables` firewall driver:

.. code-block:: ini
.. code-block:: ini

[securitygroup]
...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[securitygroup]
...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Return to
:ref:`Networking compute node configuration <neutron-compute-compute>`.

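Before substituting ``OVERLAY_INTERFACE_IP_ADDRESS``, the chosen value can be
sanity-checked. This is a hypothetical helper, not part of neutron or the
guide, using only the standard library:

```python
# Hypothetical helper: sanity-check the value intended for local_ip in the
# [vxlan] section. A VXLAN endpoint must be a concrete unicast address, so
# reject placeholders and multicast/unspecified/loopback addresses.
import ipaddress


def is_usable_local_ip(value: str) -> bool:
    try:
        addr = ipaddress.ip_address(value)
    except ValueError:
        # Not an IP at all, e.g. the OVERLAY_INTERFACE_IP_ADDRESS placeholder.
        return False
    return not (addr.is_multicast or addr.is_unspecified or addr.is_loopback)


print(is_usable_local_ip("10.0.0.31"))
print(is_usable_local_ip("OVERLAY_INTERFACE_IP_ADDRESS"))
```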
@ -19,7 +19,7 @@ Install the components

.. code-block:: console

# yum install openstack-neutron openstack-neutron-linuxbridge
# yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset

.. only:: obs

@ -60,76 +60,76 @@ authentication mechanism, message queue, and plug-in.

.. include:: shared/note_configuration_vary_by_distribution.rst

#. Edit the ``/etc/neutron/neutron.conf`` file and complete the following
actions:
* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
actions:

* In the ``[database]`` section, comment out any ``connection`` options
because compute nodes do not directly access the database.
* In the ``[database]`` section, comment out any ``connection`` options
because compute nodes do not directly access the database.

* In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections, configure
RabbitMQ message queue access:
* In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections, configure
RabbitMQ message queue access:

.. code-block:: ini
.. code-block:: ini

[DEFAULT]
...
rpc_backend = rabbit
[DEFAULT]
...
rpc_backend = rabbit

[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
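An edited fragment like the one above can be self-checked for parse errors
before restarting services. A minimal sketch using only the standard library
(the literal ``...`` placeholder lines from the docs are omitted, since they
must not be copied into a real file):

```python
# Confirm an edited neutron.conf fragment parses and carries the expected
# RabbitMQ settings.
import configparser

SAMPLE = """
[DEFAULT]
rpc_backend = rabbit

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)
print(cfg["DEFAULT"]["rpc_backend"])
print(cfg["oslo_messaging_rabbit"]["rabbit_host"])
```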

Replace ``RABBIT_PASS`` with the password you chose for the ``openstack``
account in RabbitMQ.
Replace ``RABBIT_PASS`` with the password you chose for the ``openstack``
account in RabbitMQ.

* In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
Identity service access:
* In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
Identity service access:

.. code-block:: ini
.. code-block:: ini

[DEFAULT]
...
auth_strategy = keystone
[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = NEUTRON_PASS
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = NEUTRON_PASS

Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.

.. note::
.. note::

Comment out or remove any other options in the
``[keystone_authtoken]`` section.
Comment out or remove any other options in the
``[keystone_authtoken]`` section.

.. only:: rdo
.. only:: rdo

* In the ``[oslo_concurrency]`` section, configure the lock path:
* In the ``[oslo_concurrency]`` section, configure the lock path:

.. code-block:: ini
.. code-block:: ini

[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp
[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp

* (Optional) To assist with troubleshooting, enable verbose logging in the
``[DEFAULT]`` section:
* (Optional) To assist with troubleshooting, enable verbose logging in the
``[DEFAULT]`` section:

.. code-block:: ini
.. code-block:: ini

[DEFAULT]
...
verbose = True
[DEFAULT]
...
verbose = True

Configure networking options
----------------------------
@ -154,26 +154,26 @@ configure services specific to it.
|
||||
Configure Compute to use Networking
|
||||
-----------------------------------
|
||||
|
||||
* Edit the ``/etc/nova/nova.conf`` file and complete the following actions:

  * In the ``[neutron]`` section, configure access parameters:

    .. code-block:: ini

       [neutron]
       ...
       url = http://controller:9696
       auth_url = http://controller:35357
       auth_plugin = password
       project_domain_id = default
       user_domain_id = default
       region_name = RegionOne
       project_name = service
       username = neutron
       password = NEUTRON_PASS

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.
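For scripted deployments, the same ``[neutron]`` options can be written with Python's standard ``configparser``. This is a minimal sketch, assuming a local working copy named ``nova.conf`` and the placeholder password from the listing above, not the live ``/etc/nova/nova.conf``:

```python
# Minimal sketch: apply the [neutron] options to a working copy of
# nova.conf. The file name and NEUTRON_PASS are placeholders; production
# edits go to /etc/nova/nova.conf with the real password.
import configparser

cfg = configparser.ConfigParser()
cfg.read("nova.conf")  # a missing file is tolerated; we start fresh

if not cfg.has_section("neutron"):
    cfg.add_section("neutron")

options = {
    "url": "http://controller:9696",
    "auth_url": "http://controller:35357",
    "auth_plugin": "password",
    "project_domain_id": "default",
    "user_domain_id": "default",
    "region_name": "RegionOne",
    "project_name": "service",
    "username": "neutron",
    "password": "NEUTRON_PASS",
}
for key, value in options.items():
    cfg.set("neutron", key, value)

with open("nova.conf", "w") as fh:
    cfg.write(fh)
```

Reading the file back with the same module is an easy way to confirm the section landed as intended.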
Finalize installation
---------------------

@ -19,7 +19,7 @@ Install the components
   .. code-block:: console

      # yum install openstack-neutron openstack-neutron-ml2 \
        openstack-neutron-linuxbridge python-neutronclient ebtables ipset

.. only:: obs
@ -69,129 +69,129 @@ Install the components

.. include:: shared/note_configuration_vary_by_distribution.rst

* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
  actions:

  * In the ``[database]`` section, configure database access:

    .. only:: ubuntu or obs

       .. code-block:: ini

          [database]
          ...
          connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

    .. only:: rdo

       .. code-block:: ini

          [database]
          ...
          connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron

    Replace ``NEUTRON_DBPASS`` with the password you chose for the
    database.
  * In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2)
    plug-in and disable additional plug-ins:

    .. code-block:: ini

       [DEFAULT]
       ...
       core_plugin = ml2
       service_plugins =

  * In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections,
    configure RabbitMQ message queue access:

    .. code-block:: ini

       [DEFAULT]
       ...
       rpc_backend = rabbit

       [oslo_messaging_rabbit]
       ...
       rabbit_host = controller
       rabbit_userid = openstack
       rabbit_password = RABBIT_PASS

    Replace ``RABBIT_PASS`` with the password you chose for the
    ``openstack`` account in RabbitMQ.
  * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
    Identity service access:

    .. code-block:: ini

       [DEFAULT]
       ...
       auth_strategy = keystone

       [keystone_authtoken]
       ...
       auth_uri = http://controller:5000
       auth_url = http://controller:35357
       auth_plugin = password
       project_domain_id = default
       user_domain_id = default
       project_name = service
       username = neutron
       password = NEUTRON_PASS

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

    .. note::

       Comment out or remove any other options in the
       ``[keystone_authtoken]`` section.
  * In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to
    notify Compute of network topology changes:

    .. code-block:: ini

       [DEFAULT]
       ...
       notify_nova_on_port_status_changes = True
       notify_nova_on_port_data_changes = True
       nova_url = http://controller:8774/v2

       [nova]
       ...
       auth_url = http://controller:35357
       auth_plugin = password
       project_domain_id = default
       user_domain_id = default
       region_name = RegionOne
       project_name = service
       username = nova
       password = NOVA_PASS

    Replace ``NOVA_PASS`` with the password you chose for the ``nova``
    user in the Identity service.
  .. only:: rdo

     * In the ``[oslo_concurrency]`` section, configure the lock path:

       .. code-block:: ini

          [oslo_concurrency]
          ...
          lock_path = /var/lib/neutron/tmp

  * (Optional) To assist with troubleshooting, enable verbose logging in
    the ``[DEFAULT]`` section:

    .. code-block:: ini

       [DEFAULT]
       ...
       verbose = True
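The ``neutron.conf`` edits above can also be sketched as one programmatic pass with ``configparser``. This is an illustrative sketch only; the file name and all credentials are placeholders, and the manual edits (or your deployment tool) remain authoritative:

```python
# Sketch: write the neutron.conf options discussed above to a local
# working copy. File name, NEUTRON_DBPASS, and RABBIT_PASS are
# placeholders, not production values.
import configparser

cfg = configparser.ConfigParser()
cfg["DEFAULT"] = {
    "core_plugin": "ml2",
    "service_plugins": "",
    "rpc_backend": "rabbit",
    "auth_strategy": "keystone",
    "verbose": "True",
}
cfg["database"] = {
    "connection": "mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron",
}
cfg["oslo_messaging_rabbit"] = {
    "rabbit_host": "controller",
    "rabbit_userid": "openstack",
    "rabbit_password": "RABBIT_PASS",
}
cfg["oslo_concurrency"] = {"lock_path": "/var/lib/neutron/tmp"}

with open("neutron.conf", "w") as fh:
    cfg.write(fh)
```

Note that ``configparser`` treats ``[DEFAULT]`` as a defaults section, which matches how these options apply file-wide.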
Configure the Modular Layer 2 (ML2) plug-in
-------------------------------------------

@ -199,63 +199,63 @@ Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging
and switching) virtual networking infrastructure for instances.

* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the
  following actions:
  * In the ``[ml2]`` section, enable flat and VLAN networks:

    .. code-block:: ini

       [ml2]
       ...
       type_drivers = flat,vlan

  * In the ``[ml2]`` section, disable project (private) networks:

    .. code-block:: ini

       [ml2]
       ...
       tenant_network_types =

  * In the ``[ml2]`` section, enable the Linux bridge mechanism:

    .. code-block:: ini

       [ml2]
       ...
       mechanism_drivers = linuxbridge

    .. warning::

       After you configure the ML2 plug-in, removing values in the
       ``type_drivers`` option can lead to database inconsistency.

  * In the ``[ml2]`` section, enable the port security extension driver:

    .. code-block:: ini

       [ml2]
       ...
       extension_drivers = port_security

  * In the ``[ml2_type_flat]`` section, configure the public flat provider
    network:

    .. code-block:: ini

       [ml2_type_flat]
       ...
       flat_networks = public

  * In the ``[securitygroup]`` section, enable :term:`ipset` to increase
    efficiency of security group rules:

    .. code-block:: ini

       [securitygroup]
       ...
       enable_ipset = True
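The warning above implies an invariant worth checking: every tenant network type must stay listed in ``type_drivers``. A small illustrative helper (not part of neutron) can make that check explicit:

```python
# Sketch: consistency check for the [ml2] options above. Every tenant
# network type must also appear in type_drivers; this helper is
# illustrative, not neutron's own validation.
def check_ml2(type_drivers, tenant_network_types):
    drivers = [d.strip() for d in type_drivers.split(",") if d.strip()]
    tenants = [t.strip() for t in tenant_network_types.split(",") if t.strip()]
    # Return the tenant types that lack a matching type driver.
    return [t for t in tenants if t not in drivers]

# Values from the provider-network configuration above are consistent.
assert check_ml2("flat,vlan", "") == []
# A misconfiguration: vxlan tenant networks without a vxlan type driver.
assert check_ml2("flat,vlan", "vxlan") == ["vxlan"]
```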
Configure the Linux bridge agent
--------------------------------

@ -264,73 +264,73 @@ The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances including VXLAN tunnels for private
networks and handles security groups.

* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
  complete the following actions:
  * In the ``[linux_bridge]`` section, map the public virtual network to the
    public physical network interface:

    .. code-block:: ini

       [linux_bridge]
       physical_interface_mappings = public:PUBLIC_INTERFACE_NAME

    Replace ``PUBLIC_INTERFACE_NAME`` with the name of the underlying physical
    public network interface.

  * In the ``[vxlan]`` section, disable VXLAN overlay networks:

    .. code-block:: ini

       [vxlan]
       enable_vxlan = False

  * In the ``[agent]`` section, enable ARP spoofing protection:

    .. code-block:: ini

       [agent]
       ...
       prevent_arp_spoofing = True

  * In the ``[securitygroup]`` section, enable security groups and
    configure the Linux bridge :term:`iptables` firewall driver:

    .. code-block:: ini

       [securitygroup]
       ...
       enable_security_group = True
       firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
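The ``physical_interface_mappings`` value above is a comma-separated list of ``network:interface`` pairs. A small illustrative parser (not neutron's own code) shows how such a value decomposes; ``eth1`` stands in for the real ``PUBLIC_INTERFACE_NAME`` on your node:

```python
# Sketch: decompose a physical_interface_mappings value such as
# "public:PUBLIC_INTERFACE_NAME" into a network-to-interface map.
# Illustrative only; not neutron's parser.
def parse_mappings(value):
    mappings = {}
    for pair in value.split(","):
        network, _, interface = pair.strip().partition(":")
        mappings[network] = interface
    return mappings

# "eth1" is a hypothetical interface name for illustration.
assert parse_mappings("public:eth1") == {"public": "eth1"}
```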
Configure the DHCP agent
------------------------

The :term:`DHCP agent` provides DHCP services for virtual networks.

* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following
  actions:
  * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver,
    Dnsmasq DHCP driver, and enable isolated metadata so instances on public
    networks can access metadata over the network:

    .. code-block:: ini

       [DEFAULT]
       ...
       interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
       dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
       enable_isolated_metadata = True

  * (Optional) To assist with troubleshooting, enable verbose logging in the
    ``[DEFAULT]`` section:

    .. code-block:: ini

       [DEFAULT]
       ...
       verbose = True
Return to
:ref:`Networking controller node configuration

@ -19,7 +19,7 @@ Install the components
   .. code-block:: console

      # yum install openstack-neutron openstack-neutron-ml2 \
        openstack-neutron-linuxbridge python-neutronclient ebtables ipset

.. only:: obs
@ -63,130 +63,130 @@ Install the components

Configure the server component
------------------------------

* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
  actions:
  * In the ``[database]`` section, configure database access:

    .. only:: ubuntu or obs

       .. code-block:: ini

          [database]
          ...
          connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

    .. only:: rdo

       .. code-block:: ini

          [database]
          ...
          connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron

    Replace ``NEUTRON_DBPASS`` with the password you chose for the
    database.
  * In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2)
    plug-in, router service, and overlapping IP addresses:

    .. code-block:: ini

       [DEFAULT]
       ...
       core_plugin = ml2
       service_plugins = router
       allow_overlapping_ips = True
  * In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections,
    configure RabbitMQ message queue access:

    .. code-block:: ini

       [DEFAULT]
       ...
       rpc_backend = rabbit

       [oslo_messaging_rabbit]
       ...
       rabbit_host = controller
       rabbit_userid = openstack
       rabbit_password = RABBIT_PASS

    Replace ``RABBIT_PASS`` with the password you chose for the
    ``openstack`` account in RabbitMQ.
  * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
    Identity service access:

    .. code-block:: ini

       [DEFAULT]
       ...
       auth_strategy = keystone

       [keystone_authtoken]
       ...
       auth_uri = http://controller:5000
       auth_url = http://controller:35357
       auth_plugin = password
       project_domain_id = default
       user_domain_id = default
       project_name = service
       username = neutron
       password = NEUTRON_PASS

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

    .. note::

       Comment out or remove any other options in the
       ``[keystone_authtoken]`` section.
  * In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to
    notify Compute of network topology changes:

    .. code-block:: ini

       [DEFAULT]
       ...
       notify_nova_on_port_status_changes = True
       notify_nova_on_port_data_changes = True
       nova_url = http://controller:8774/v2

       [nova]
       ...
       auth_url = http://controller:35357
       auth_plugin = password
       project_domain_id = default
       user_domain_id = default
       region_name = RegionOne
       project_name = service
       username = nova
       password = NOVA_PASS

    Replace ``NOVA_PASS`` with the password you chose for the ``nova``
    user in the Identity service.
  .. only:: rdo

     * In the ``[oslo_concurrency]`` section, configure the lock path:

       .. code-block:: ini

          [oslo_concurrency]
          ...
          lock_path = /var/lib/neutron/tmp
  * (Optional) To assist with troubleshooting, enable verbose logging in
    the ``[DEFAULT]`` section:

    .. code-block:: ini

       [DEFAULT]
       ...
       verbose = True
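After completing the edits above, it can help to verify that the finished file actually carries the required options. A minimal sketch, run here against an inline sample rather than the real ``/etc/neutron/neutron.conf``:

```python
# Sketch: verify that a neutron.conf-style file contains the [DEFAULT]
# options the steps above require. SAMPLE stands in for the real file.
import configparser

SAMPLE = """
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
rpc_backend = rabbit
auth_strategy = keystone
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)

required = ["core_plugin", "service_plugins", "allow_overlapping_ips",
            "rpc_backend", "auth_strategy"]
missing = [opt for opt in required if not cfg.has_option("DEFAULT", opt)]
assert missing == []
# configparser understands True/False spellings used by these options.
assert cfg.getboolean("DEFAULT", "allow_overlapping_ips") is True
```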
Configure the Modular Layer 2 (ML2) plug-in
-------------------------------------------

@ -194,77 +194,77 @@ Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging
and switching) virtual networking infrastructure for instances.

* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the
  following actions:
  * In the ``[ml2]`` section, enable flat, VLAN, and VXLAN networks:

    .. code-block:: ini

       [ml2]
       ...
       type_drivers = flat,vlan,vxlan

  * In the ``[ml2]`` section, enable VXLAN project (private) networks:

    .. code-block:: ini

       [ml2]
       ...
       tenant_network_types = vxlan

  * In the ``[ml2]`` section, enable the Linux bridge and layer-2 population
    mechanisms:

    .. code-block:: ini

       [ml2]
       ...
       mechanism_drivers = linuxbridge,l2population

    .. warning::

       After you configure the ML2 plug-in, removing values in the
       ``type_drivers`` option can lead to database inconsistency.

    .. note::

       The Linux bridge agent only supports VXLAN overlay networks.

  * In the ``[ml2]`` section, enable the port security extension driver:

    .. code-block:: ini

       [ml2]
       ...
       extension_drivers = port_security

  * In the ``[ml2_type_flat]`` section, configure the public flat provider
    network:

    .. code-block:: ini

       [ml2_type_flat]
       ...
       flat_networks = public

  * In the ``[ml2_type_vxlan]`` section, configure the VXLAN network identifier
    range for private networks:

    .. code-block:: ini

       [ml2_type_vxlan]
       ...
       vni_ranges = 1:1000

  * In the ``[securitygroup]`` section, enable :term:`ipset` to increase
    efficiency of security group rules:

    .. code-block:: ini

       [securitygroup]
       ...
       enable_ipset = True
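The ``vni_ranges = 1:1000`` option above reserves a pool of VXLAN network identifiers for project networks. An illustrative parser (not neutron's allocator) shows what that value provides:

```python
# Sketch: interpret a vni_ranges value such as "1:1000". Ranges are
# inclusive start:end pairs; this helper is illustrative only.
def parse_vni_ranges(value):
    ranges = []
    for part in value.split(","):
        start, _, end = part.strip().partition(":")
        ranges.append((int(start), int(end)))
    return ranges

ranges = parse_vni_ranges("1:1000")
assert ranges == [(1, 1000)]
# Inclusive bounds give 1000 usable VXLAN network identifiers.
assert sum(end - start + 1 for start, end in ranges) == 1000
```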
Configure the Linux bridge agent
--------------------------------

@ -273,52 +273,52 @@ The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances including VXLAN tunnels for private
networks and handles security groups.

* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
  complete the following actions:
  * In the ``[linux_bridge]`` section, map the public virtual network to the
    public physical network interface:

    .. code-block:: ini

       [linux_bridge]
       physical_interface_mappings = public:PUBLIC_INTERFACE_NAME

    Replace ``PUBLIC_INTERFACE_NAME`` with the name of the underlying physical
    public network interface.

  * In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the
    IP address of the physical network interface that handles overlay
    networks, and enable layer-2 population:

    .. code-block:: ini

       [vxlan]
       enable_vxlan = True
       local_ip = OVERLAY_INTERFACE_IP_ADDRESS
       l2_population = True

    Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
    underlying physical network interface that handles overlay networks. The
    example architecture uses the management interface.

  * In the ``[agent]`` section, enable ARP spoofing protection:

    .. code-block:: ini

       [agent]
       ...
       prevent_arp_spoofing = True

  * In the ``[securitygroup]`` section, enable security groups and
    configure the Linux bridge :term:`iptables` firewall driver:

    .. code-block:: ini

       [securitygroup]
       ...
       enable_security_group = True
       firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure the layer-3 agent
---------------------------

@ -326,105 +326,105 @@ Configure the layer-3 agent
The :term:`Layer-3 (L3) agent` provides routing and NAT services for virtual
networks.

* Edit the ``/etc/neutron/l3_agent.ini`` file and complete the following
  actions:
  * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver
    and external network bridge:

    .. code-block:: ini

       [DEFAULT]
       ...
       interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
       external_network_bridge =

    .. note::

       The ``external_network_bridge`` option intentionally lacks a value
       to enable multiple external networks on a single agent.

  * (Optional) To assist with troubleshooting, enable verbose logging in the
    ``[DEFAULT]`` section:

    .. code-block:: ini

       [DEFAULT]
       ...
       verbose = True
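The empty ``external_network_bridge =`` line above is easy to mistake for a missing option. A short sketch with ``configparser`` (against an inline sample, not the real ``l3_agent.ini``) shows that the option is present but reads back as an empty string:

```python
# Sketch: an option written as "external_network_bridge =" is present
# with an empty value, which is what allows multiple external networks
# on a single agent. Inline sample for illustration only.
import configparser

cfg = configparser.ConfigParser()
cfg.read_string("""
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =
""")

assert cfg.has_option("DEFAULT", "external_network_bridge")
assert cfg.get("DEFAULT", "external_network_bridge") == ""
```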
Configure the DHCP agent
------------------------

The :term:`DHCP agent` provides DHCP services for virtual networks.

* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following
  actions:
  * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver,
    Dnsmasq DHCP driver, and enable isolated metadata so instances on public
    networks can access metadata over the network:

    .. code-block:: ini

       [DEFAULT]
       ...
       interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
       dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
       enable_isolated_metadata = True
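As a quick sanity check, the fragment above parses cleanly with Python's ``configparser`` (the literal ``...`` line in the guide is elided here, since it stands for omitted options rather than actual file content):

```python
import configparser

# The [DEFAULT] fragment from the dhcp_agent.ini example above,
# reproduced as a string purely for illustration.
FRAGMENT = """\
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
"""

cfg = configparser.ConfigParser()
cfg.read_string(FRAGMENT)
print(cfg["DEFAULT"]["dhcp_driver"])
print(cfg["DEFAULT"].getboolean("enable_isolated_metadata"))  # True
```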
  * (Optional) To assist with troubleshooting, enable verbose logging in the
    ``[DEFAULT]`` section:

    .. code-block:: ini

       [DEFAULT]
       ...
       verbose = True
  Overlay networks such as VXLAN include additional packet headers that
  increase overhead and decrease space available for the payload or user
  data. Without knowledge of the virtual network infrastructure, instances
  attempt to send packets using the default Ethernet :term:`maximum
  transmission unit (MTU)` of 1500 bytes. :term:`Internet protocol (IP)`
  networks contain the :term:`path MTU discovery (PMTUD)` mechanism to detect
  the end-to-end MTU and adjust packet size accordingly. However, some
  operating systems and networks block or otherwise lack support for PMTUD,
  causing performance degradation or connectivity failure.
  Ideally, you can prevent these problems by enabling :term:`jumbo frames
  <jumbo frame>` on the physical network that contains your tenant virtual
  networks. Jumbo frames support MTUs up to approximately 9000 bytes, which
  negates the impact of VXLAN overhead on virtual networks. However, many
  network devices lack support for jumbo frames and OpenStack administrators
  often lack control over network infrastructure. Given these complications,
  you can also prevent MTU problems by reducing the instance MTU to account
  for VXLAN overhead. Determining the proper MTU value often takes
  experimentation, but 1450 bytes works in most environments. You can
  configure the DHCP server that assigns IP addresses to your instances to
  also adjust the MTU.
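The 1450-byte value is simply the default Ethernet MTU minus the VXLAN encapsulation overhead; a back-of-the-envelope sketch, assuming VXLAN over IPv4 with untagged outer Ethernet frames:

```python
# Arithmetic behind the 1450-byte instance MTU used in this guide.
ETHERNET_HEADER = 14   # outer Ethernet header
IPV4_HEADER = 20       # outer IPv4 header, no options
UDP_HEADER = 8         # outer UDP header
VXLAN_HEADER = 8       # VXLAN header

overhead = ETHERNET_HEADER + IPV4_HEADER + UDP_HEADER + VXLAN_HEADER
instance_mtu = 1500 - overhead
print(overhead, instance_mtu)  # 50 1450
```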
  .. note::

     Some cloud images ignore the DHCP MTU option, in which case you
     should configure it using metadata, a script, or another suitable
     method.
  * In the ``[DEFAULT]`` section, enable the :term:`dnsmasq` configuration
    file:

    .. code-block:: ini

       [DEFAULT]
       ...
       dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
  * Create and edit the ``/etc/neutron/dnsmasq-neutron.conf`` file to
    enable the DHCP MTU option (26) and configure it to 1450 bytes:

    .. code-block:: ini

       dhcp-option-force=26,1450
Return to
:ref:`Networking controller node configuration

Configure the metadata agent
----------------------------
The :term:`metadata agent <Metadata agent>` provides configuration information
such as credentials to instances.
* Edit the ``/etc/neutron/metadata_agent.ini`` file and complete the following
  actions:
  * In the ``[DEFAULT]`` section, configure access parameters:

    .. code-block:: ini

       [DEFAULT]
       ...
       auth_uri = http://controller:5000
       auth_url = http://controller:35357
       auth_region = RegionOne
       auth_plugin = password
       project_domain_id = default
       user_domain_id = default
       project_name = service
       username = neutron
       password = NEUTRON_PASS

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.
  * In the ``[DEFAULT]`` section, configure the metadata host:

    .. code-block:: ini

       [DEFAULT]
       ...
       nova_metadata_ip = controller
  * In the ``[DEFAULT]`` section, configure the metadata proxy shared
    secret:

    .. code-block:: ini

       [DEFAULT]
       ...
       metadata_proxy_shared_secret = METADATA_SECRET

    Replace ``METADATA_SECRET`` with a suitable secret for the metadata proxy.
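The shared secret exists because the metadata proxy signs each instance ID with HMAC-SHA256 and the Compute metadata service recomputes the signature to verify the request; a minimal sketch of that signing scheme (the function name and sample values are illustrative, not part of either service's API):

```python
import hashlib
import hmac

def sign_instance_id(shared_secret, instance_id):
    # HMAC-SHA256 over the instance ID, keyed with the shared secret;
    # both services must be configured with the same secret for the
    # recomputed signature to match.
    return hmac.new(shared_secret.encode("utf-8"),
                    instance_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Illustrative values only; use your real METADATA_SECRET in production.
signature = sign_instance_id("METADATA_SECRET", "instance-uuid")
print(len(signature))  # 64 hex digits for SHA-256
```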
  * (Optional) To assist with troubleshooting, enable verbose logging in the
    ``[DEFAULT]`` section:

    .. code-block:: ini

       [DEFAULT]
       ...
       verbose = True
Configure Compute to use Networking
-----------------------------------
* Edit the ``/etc/nova/nova.conf`` file and perform the following actions:
  * In the ``[neutron]`` section, configure access parameters, enable the
    metadata proxy, and configure the secret:

    .. code-block:: ini

       [neutron]
       ...
       url = http://controller:9696
       auth_url = http://controller:35357
       auth_plugin = password
       project_domain_id = default
       user_domain_id = default
       region_name = RegionOne
       project_name = service
       username = neutron
       password = NEUTRON_PASS

       service_metadata_proxy = True
       metadata_proxy_shared_secret = METADATA_SECRET

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

    Replace ``METADATA_SECRET`` with the secret you chose for the metadata
    proxy.
Finalize installation
---------------------
Networking Option 1: Provider networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. todo:

   Cannot use bulleted list here due to the following bug:

   https://bugs.launchpad.net/openstack-manuals/+bug/1515377

List agents to verify successful launch of the neutron agents:

.. code-block:: console

   $ neutron agent-list
   +--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
   | id                                   | agent_type         | host       | alive | admin_state_up | binary                    |
   +--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
   | 08905043-5010-4b87-bba5-aedb1956e27a | Linux bridge agent | compute1   | :-)   | True           | neutron-linuxbridge-agent |
   | 27eee952-a748-467b-bf71-941e89846a92 | Linux bridge agent | controller | :-)   | True           | neutron-linuxbridge-agent |
   | dd3644c9-1a3a-435a-9282-eb306b4b0391 | DHCP agent         | controller | :-)   | True           | neutron-dhcp-agent        |
   | f49a4b81-afd6-4b3d-b923-66c8f0517099 | Metadata agent     | controller | :-)   | True           | neutron-metadata-agent    |
   +--------------------------------------+--------------------+------------+-------+----------------+---------------------------+

The output should indicate three agents on the controller node and one
agent on each compute node.
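If you want to script this check, the table reduces to per-host agent counts with a few lines of Python; this is a toy parser over sample output (IDs truncated for illustration), not an OpenStack client call:

```python
from collections import Counter

# Sample rows in ``neutron agent-list`` table format.
OUTPUT = """\
| 08905043 | Linux bridge agent | compute1   | :-) | True | neutron-linuxbridge-agent |
| 27eee952 | Linux bridge agent | controller | :-) | True | neutron-linuxbridge-agent |
| dd3644c9 | DHCP agent         | controller | :-) | True | neutron-dhcp-agent        |
| f49a4b81 | Metadata agent     | controller | :-) | True | neutron-metadata-agent    |
"""

agents_per_host = Counter(
    row.split("|")[3].strip()          # the host column
    for row in OUTPUT.splitlines()
    if row.startswith("|")
)
print(agents_per_host["controller"], agents_per_host["compute1"])  # 3 1
```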
Networking Option 2: Self-service networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. todo:

   Cannot use bulleted list here due to the following bug:

   https://bugs.launchpad.net/openstack-manuals/+bug/1515377

List agents to verify successful launch of the neutron agents:

.. code-block:: console

   $ neutron agent-list
   +--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
   | id                                   | agent_type         | host       | alive | admin_state_up | binary                    |
   +--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
   | 08905043-5010-4b87-bba5-aedb1956e27a | Linux bridge agent | compute1   | :-)   | True           | neutron-linuxbridge-agent |
   | 27eee952-a748-467b-bf71-941e89846a92 | Linux bridge agent | controller | :-)   | True           | neutron-linuxbridge-agent |
   | 830344ff-dc36-4956-84f4-067af667a0dc | L3 agent           | controller | :-)   | True           | neutron-l3-agent          |
   | dd3644c9-1a3a-435a-9282-eb306b4b0391 | DHCP agent         | controller | :-)   | True           | neutron-dhcp-agent        |
   | f49a4b81-afd6-4b3d-b923-66c8f0517099 | Metadata agent     | controller | :-)   | True           | neutron-metadata-agent    |
   +--------------------------------------+--------------------+------------+-------+----------------+---------------------------+

The output should indicate four agents on the controller node and one
agent on each compute node.
Distribute ring configuration files
-----------------------------------

* Copy the ``account.ring.gz``, ``container.ring.gz``, and
  ``object.ring.gz`` files to the ``/etc/swift`` directory
  on each storage node and any additional nodes running the
  proxy service.