Upgrade the rst convention of the Reference Guide

We upgrade the rst convention by following the Documentation Contributor
Guide [1].

[1] https://docs.openstack.org/doc-contrib-guide

Change-Id: I10660e2df0e57be0800e26aa4d320074084c3acf
Partially-Implements: blueprint optimize-the-documentation-format
parent 3342bc76fa
commit 47eeacdc7b
@@ -2,8 +2,7 @@
 Bifrost Guide
 =============

-From the bifrost developer documentation:
+From the ``Bifrost`` developer documentation:

 Bifrost (pronounced bye-frost) is a set of Ansible playbooks that automates
 the task of deploying a base image onto a set of known hardware using
 Ironic. It provides modular utility for one-off operating system
@@ -16,7 +15,7 @@ container, as well as building a base OS image and provisioning it onto the
 baremetal nodes.

 Hosts in the System
-===================
+~~~~~~~~~~~~~~~~~~~

 In a system deployed by bifrost we define a number of classes of hosts.

@@ -47,7 +46,7 @@ Bare metal compute hosts:
 OS images is currently out of scope.

 Cloud Deployment Procedure
-==========================
+~~~~~~~~~~~~~~~~~~~~~~~~~~

 Cloud deployment using kolla and bifrost follows the following high level
 steps:
@@ -59,7 +58,7 @@ steps:
 #. Deploy OpenStack services on the cloud hosts provisioned by bifrost.

 Preparation
-===========
+~~~~~~~~~~~

 Prepare the Control Host
 ------------------------
@@ -78,16 +77,22 @@ has been configured to use, which with bifrost will be ``127.0.0.1``. Bifrost
 will attempt to modify ``/etc/hosts`` on the deployment host to ensure that
 this is the case. Docker bind mounts ``/etc/hosts`` into the container from a
 volume. This prevents atomic renames which will prevent Ansible from fixing
-the
-``/etc/hosts`` file automatically.
+the ``/etc/hosts`` file automatically.

-To enable bifrost to be bootstrapped correctly add an entry to ``/etc/hosts``
-resolving the deployment host's hostname to ``127.0.0.1``, for example::
+To enable bifrost to be bootstrapped correctly, add an entry to ``/etc/hosts``
+resolving the deployment host's hostname to ``127.0.0.1``, for example:

-   ubuntu@bifrost:/repo/kolla$ cat /etc/hosts
+.. code-block:: console

+   cat /etc/hosts
    127.0.0.1       bifrost localhost

-   # The following lines are desirable for IPv6 capable hosts
+.. end

+The following lines are desirable for IPv6 capable hosts:

+.. code-block:: none

    ::1     ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
@@ -96,64 +101,72 @@ resolving the deployment host's hostname to ``127.0.0.1``, for example::
    ff02::3 ip6-allhosts
    192.168.100.15 bifrost

+.. end

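Not part of the commit, but for context: a quick way to confirm the entry
behaves as the guide intends is to check that the hostname now resolves to
``127.0.0.1``. A minimal sketch, assuming a standard Linux deployment host
(``getent`` reads ``/etc/hosts`` through NSS):

.. code-block:: console

   getent hosts "$(hostname)"
   127.0.0.1       bifrost localhost

.. end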
 Build a Bifrost Container Image
-===============================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 This section provides instructions on how to build a container image for
 bifrost using kolla.

-Enable Source Build Type
-------------------------
+Currently kolla only supports the ``source`` install type for the bifrost image.

-Currently kolla only supports the source install type for the bifrost image.
+#. To generate kolla-build.conf configuration File

-Configuration File
-~~~~~~~~~~~~~~~~~~

-If required, generate a default configuration file for ``kolla-build``::
+* If required, generate a default configuration file for :command:`kolla-build`:

+.. code-block:: console

    cd kolla
    tox -e genconfig

-Modify ``kolla-build.conf``, setting ``install_type`` to ``source``::
+.. end

+* Modify ``kolla-build.conf``, setting ``install_type`` to ``source``:

+.. path etc/kolla/kolla-build.conf
+.. code-block:: ini

    install_type = source

-Command line
-~~~~~~~~~~~~
+.. end

-Alternatively, instead of using ``kolla-build.conf``, a source build can be
-enabled by appending ``--type source`` to the ``kolla-build`` or
+Alternatively, instead of using ``kolla-build.conf``, a ``source`` build can
+be enabled by appending ``--type source`` to the :command:`kolla-build` or
 ``tools/build.py`` command.

-Build Container
----------------
+#. To build images, for Development:

-Development
-~~~~~~~~~~~
+.. code-block:: console

-::

    cd kolla
    tools/build.py bifrost-deploy

-Production
-~~~~~~~~~~
+.. end

-::
+For Production:

+.. code-block:: console

    kolla-build bifrost-deploy

+.. end

 .. note::

-   By default kolla-build will build all containers using CentOS as the base
-   image. To change this behavior, use the following parameter with
-   ``kolla-build`` or ``tools/build.py`` command::
+   By default :command:`kolla-build` will build all containers using CentOS as
+   the base image. To change this behavior, use the following parameter with
+   :command:`kolla-build` or ``tools/build.py`` command:

+   .. code-block:: console

       --base [ubuntu|centos|oraclelinux]

+   .. end

 Configure and Deploy a Bifrost Container
-========================================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 This section provides instructions for how to configure and deploy a container
 running bifrost services.
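As context for the two build paths above: the ``--type`` and ``--base`` flags
shown in this hunk can be combined on one command line. A sketch (choosing
``ubuntu`` as the base is an assumption; any of the listed bases works):

.. code-block:: console

   cd kolla
   tools/build.py --base ubuntu --type source bifrost-deploy

.. end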
@@ -166,8 +179,8 @@ group. In the ``all-in-one`` and ``multinode`` inventory files, a ``bifrost``
 group is defined which contains all hosts in the ``deployment`` group. This
 top level ``deployment`` group is intended to represent the host running the
 ``bifrost_deploy`` container. By default, this group contains ``localhost``.
-See :doc:`/user/multinode`
-for details on how to modify the Ansible inventory in a multinode deployment.
+See :doc:`/user/multinode` for details on how to modify the Ansible inventory
+in a multinode deployment.

 Bifrost does not currently support running on multiple hosts so the ``bifrost``
 group should contain only a single host, however this is not enforced by
@@ -189,6 +202,8 @@ different than ``network_interface``. For example to use ``eth1``:

    bifrost_network_interface: eth1

+.. end

 Note that this interface should typically have L2 network connectivity with the
 bare metal cloud hosts in order to provide DHCP leases with PXE boot options.

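To make the relationship between the two variables concrete, a hypothetical
``globals.yml`` fragment (the interface names are assumptions for this
sketch):

.. code-block:: yaml

   network_interface: "eth0"
   bifrost_network_interface: "eth1"

.. end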
@@ -199,6 +214,8 @@ reflected in ``globals.yml``

    kolla_install_type: source

+.. end

 Prepare Bifrost Configuration
 -----------------------------

@@ -247,6 +264,8 @@ properties and a logical name.
    cpus: "16"
    name: "cloud1"

+.. end

 The required inventory will be specific to the hardware and environment in use.

 Create Bifrost Configuration
@@ -254,9 +273,7 @@ Create Bifrost Configuration

 The file ``bifrost.yml`` provides global configuration for the bifrost
 playbooks. By default kolla mostly uses bifrost's default variable values.
-For details on bifrost's variables see the bifrost documentation.
+For details on bifrost's variables see the bifrost documentation. For example:

-For example:

 .. code-block:: yaml

@@ -269,6 +286,8 @@ For example:
    # dhcp_lease_time: 12h
    # dhcp_static_mask: 255.255.255.0

+.. end

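For illustration, what ``bifrost.yml`` might look like with the commented
defaults above enabled (the values are taken from the comments in the hunk,
not recommendations):

.. code-block:: yaml

   dhcp_lease_time: 12h
   dhcp_static_mask: 255.255.255.0

.. end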
 Create Disk Image Builder Configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -278,52 +297,56 @@ building the baremetal OS and deployment images, and will build an
 **Ubuntu-based** image for deployment to nodes. For details on bifrost's
 variables see the bifrost documentation.

-For example to use the ``debian`` Disk Image Builder OS element:
+For example, to use the ``debian`` Disk Image Builder OS element:

 .. code-block:: yaml

    dib_os_element: debian

+.. end

 See the `diskimage-builder documentation
 <https://docs.openstack.org/diskimage-builder/latest/>`__ for more details.

 Deploy Bifrost
---------------
+~~~~~~~~~~~~~~

 The bifrost container can be deployed either using kolla-ansible or manually.

-Kolla-Ansible
-~~~~~~~~~~~~~
+Deploy Bifrost using Kolla-Ansible
+----------------------------------

-Development
-___________
+For development:

-::
+.. code-block:: console

    cd kolla-ansible
    tools/kolla-ansible deploy-bifrost

-Production
-__________
+.. end

-::
+For Production:

+.. code-block:: console

    kolla-ansible deploy-bifrost

-Manual
-~~~~~~
+.. end

-Start Bifrost Container
-_______________________
+Deploy Bifrost manually
+-----------------------

-::
+#. Start Bifrost Container

+.. code-block:: console

    docker run -it --net=host -v /dev:/dev -d \
       --privileged --name bifrost_deploy \
       kolla/ubuntu-source-bifrost-deploy:3.0.1

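Not part of the commit: after the ``docker run`` above it is worth confirming
that the container stayed up before copying configuration into it. A minimal
check using the standard Docker CLI:

.. code-block:: console

   docker ps --filter name=bifrost_deploy

.. end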
-Copy Configuration Files
-________________________
+.. end

+#. Copy Configuration Files

 .. code-block:: console

@@ -332,22 +355,25 @@ ________________________
    docker cp /etc/kolla/config/bifrost/bifrost.yml bifrost_deploy:/etc/bifrost/bifrost.yml
    docker cp /etc/kolla/config/bifrost/dib.yml bifrost_deploy:/etc/bifrost/dib.yml

-Bootstrap Bifrost
-_________________
+.. end

-::
+#. Bootstrap Bifrost

+.. code-block:: console

    docker exec -it bifrost_deploy bash

-Generate an SSH Key
-___________________
+.. end

-::
+#. Generate an SSH Key

+.. code-block:: console

    ssh-keygen

-Bootstrap and Start Services
-____________________________
+.. end

+#. Bootstrap and Start Services

 .. code-block:: console

@@ -364,8 +390,10 @@ ____________________________
       -e @/etc/bifrost/dib.yml \
       -e skip_package_install=true

+.. end

 Validate the Deployed Container
-===============================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 .. code-block:: console

@@ -373,6 +401,8 @@ Validate the Deployed Container
    cd /bifrost
    . env-vars

+.. end

 Running "ironic node-list" should return with no nodes, for example

 .. code-block:: console
@@ -383,32 +413,37 @@ Running "ironic node-list" should return with no nodes, for example
    +------+------+---------------+-------------+--------------------+-------------+
    +------+------+---------------+-------------+--------------------+-------------+

+.. end

 Enroll and Deploy Physical Nodes
-================================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 Once we have deployed a bifrost container we can use it to provision the bare
 metal cloud hosts specified in the inventory file. Again, this can be done
 either using kolla-ansible or manually.

-Kolla-Ansible
--------------
+By Kolla-Ansible
+----------------

-Development
-~~~~~~~~~~~
+For Development:

-::
+.. code-block:: console

    tools/kolla-ansible deploy-servers

-Production
-~~~~~~~~~~
+.. end

-::
+For Production:

+.. code-block:: console

    kolla-ansible deploy-servers

-Manual
-------
+.. end

+Manually
+--------

 .. code-block:: console

@@ -432,11 +467,13 @@ Manual
       -e "ansible_python_interpreter=/var/lib/kolla/venv/bin/python" \
       -e @/etc/bifrost/bifrost.yml

+.. end

 At this point Ironic should clean down the nodes and install the default
 OS image.

 Advanced Configuration
-======================
+~~~~~~~~~~~~~~~~~~~~~~

 Bring Your Own Image
 --------------------
@@ -450,7 +487,7 @@ To use your own SSH key after you have generated the ``passwords.yml`` file
 update the private and public keys under ``bifrost_ssh_key``.

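As a side note, the ``passwords.yml`` file referenced here is normally
generated with the standard kolla password tooling before being edited. A
sketch, assuming a default install provides the :command:`kolla-genpwd`
entry point:

.. code-block:: console

   kolla-genpwd

.. end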
 Known issues
-============
+~~~~~~~~~~~~

 SSH daemon not running
 ----------------------
@@ -458,18 +495,20 @@ SSH daemon not running
 By default ``sshd`` is installed in the image but may not be enabled. If you
 encounter this issue you will have to access the server physically in recovery
 mode to enable the ``sshd`` service. If your hardware supports it, this can be
-done remotely with ``ipmitool`` and Serial Over LAN. For example
+done remotely with :command:`ipmitool` and Serial Over LAN. For example

 .. code-block:: console

    ipmitool -I lanplus -H 192.168.1.30 -U admin -P root sol activate

+.. end

 References
-==========
+~~~~~~~~~~

-Bifrost documentation: https://docs.openstack.org/bifrost/latest/
+* `Bifrost documentation <https://docs.openstack.org/bifrost/latest/>`__

-Bifrost troubleshooting guide: https://docs.openstack.org/bifrost/latest/user/troubleshooting.html
+* `Bifrost troubleshooting guide <https://docs.openstack.org/bifrost/latest/user/troubleshooting.html>`__

-Bifrost code repository: https://github.com/openstack/bifrost
+* `Bifrost code repository <https://github.com/openstack/bifrost>`__

@@ -9,17 +9,19 @@ successfully monitor this and use it to diagnose problems, the standard "ssh
 and grep" solution quickly becomes unmanageable.

 Preparation and deployment
-==========================
+~~~~~~~~~~~~~~~~~~~~~~~~~~

 Modify the configuration file ``/etc/kolla/globals.yml`` and change
 the following:

-::
+.. code-block:: yaml

    enable_central_logging: "yes"

+.. end

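For context (not in the diff): on an already-running deployment, a change to
``globals.yml`` such as this one is typically applied with a reconfigure run,
for example:

.. code-block:: console

   kolla-ansible reconfigure

.. end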
 Elasticsearch
-=============
+~~~~~~~~~~~~~

 Kolla deploys Elasticsearch as part of the E*K stack to store, organize
 and make logs easily accessible.
@@ -31,9 +33,8 @@ By default Elasticsearch is deployed on port ``9200``.
 Elasticsearch stores a lot of logs, so if you are running centralized logging,
 remember to give ``/var/lib/docker`` an adequate space.

-
 Kibana
-======
+~~~~~~

 Kolla deploys Kibana as part of the E*K stack in order to allow operators to
 search and visualise logs in a centralised manner.
@@ -82,20 +83,24 @@ host was found'.

 First, re-run the server creation with ``--debug``:

-::
+.. code-block:: console

    openstack --debug server create --image cirros --flavor m1.tiny \
      --key-name mykey --nic net-id=00af016f-dffe-4e3c-a9b8-ec52ccd8ea65 \
      demo1

+.. end

 In this output, look for the key ``X-Compute-Request-Id``. This is a unique
 identifier that can be used to track the request through the system. An
 example ID looks like this:

-::
+.. code-block:: none

    X-Compute-Request-Id: req-c076b50a-6a22-48bf-8810-b9f41176a6d5

+.. end

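As an aside, the identifier can also be pulled out of the debug output
directly. A sketch assuming a POSIX shell and GNU grep (the pattern is
illustrative only):

.. code-block:: console

   openstack --debug server create --image cirros --flavor m1.tiny \
     --key-name mykey --nic net-id=00af016f-dffe-4e3c-a9b8-ec52ccd8ea65 \
     demo1 2>&1 | grep -o 'req-[0-9a-f-]*' | head -1

.. end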
 Taking the value of ``X-Compute-Request-Id``, enter the value into the Kibana
 search bar, minus the leading ``req-``. Assuming some basic filters have been
 added as shown in the previous section, Kibana should now show the path this
@@ -124,7 +129,9 @@ generated and previewed. In the menu on the left, metrics for a chart can
 be chosen. The chart can be generated by pressing a green arrow on the top
 of the left-side menu.

-.. note:: After creating a visualization, it can be saved by choosing *save
+.. note::

+   After creating a visualization, it can be saved by choosing *save
    visualization* option in the menu on the right. If it is not saved, it
    will be lost after leaving a page or creating another visualization.

@@ -138,7 +145,9 @@ from all saved ones. The order and size of elements can be changed directly
 in this place by moving them or resizing. The color of charts can also be
 changed by checking a colorful dots on the legend near each visualization.

-.. note:: After creating a dashboard, it can be saved by choosing *save dashboard*
+.. note::

+   After creating a dashboard, it can be saved by choosing *save dashboard*
    option in the menu on the right. If it is not saved, it will be lost after
    leaving a page or creating another dashboard.

@@ -156,7 +165,7 @@ In the same tab (Settings - Objects) one can also import saved items by
 choosing *import* option.

 Custom log forwarding
-=====================
+~~~~~~~~~~~~~~~~~~~~~

 In some scenarios it may be useful to forward logs to a logging service other
 than elasticsearch. This can be done by configuring custom fluentd outputs.
@@ -10,13 +10,13 @@ tweaks to the Ceph cluster you can deploy a **healthy** cluster with a single
 host and a single block device.

 Requirements
-============
+~~~~~~~~~~~~

 * A minimum of 3 hosts for a vanilla deploy
 * A minimum of 1 block device per host

 Preparation
-===========
+~~~~~~~~~~~

 To prepare a disk for use as a
 `Ceph OSD <http://docs.ceph.com/docs/master/man/8/ceph-osd/>`_ you must add a
@@ -26,16 +26,20 @@ will be reformatted so use caution.

 To prepare an OSD as a storage drive, execute the following operations:

-::
+.. warning::

+   ALL DATA ON $DISK will be LOST! Where $DISK is /dev/sdb or something similar.

+.. code-block:: console

-   # <WARNING ALL DATA ON $DISK will be LOST!>
-   # where $DISK is /dev/sdb or something similar
    parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1

+.. end

 The following shows an example of using parted to configure ``/dev/sdb`` for
 usage with Kolla.

-::
+.. code-block:: console

    parted /dev/sdb -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
    parted /dev/sdb print
@@ -46,6 +50,7 @@ usage with Kolla.
    Number  Start   End     Size    File system  Name                      Flags
     1      1049kB  10.7GB  10.7GB               KOLLA_CEPH_OSD_BOOTSTRAP

+.. end

 Using an external journal drive
 -------------------------------
@@ -59,20 +64,24 @@ journal drive. This section documents how to use an external journal drive.

 Prepare the storage drive in the same way as documented above:

-::
+.. warning::

+   ALL DATA ON $DISK will be LOST! Where $DISK is /dev/sdb or something similar.

+.. code-block:: console

-   # <WARNING ALL DATA ON $DISK will be LOST!>
-   # where $DISK is /dev/sdb or something similar
    parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FOO 1 -1

+.. end

 To prepare the journal external drive execute the following command:

-::
+.. code-block:: console

-   # <WARNING ALL DATA ON $DISK will be LOST!>
-   # where $DISK is /dev/sdc or something similar
    parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FOO_J 1 -1

+.. end

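To make the suffix pairing concrete, a hypothetical layout with the data
partition on ``/dev/sdb`` and its journal on ``/dev/sdc`` (the device names
are assumptions; only the matching ``_FOO``/``_FOO_J`` suffixes matter):

.. code-block:: console

   parted /dev/sdb -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FOO 1 -1
   parted /dev/sdc -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FOO_J 1 -1

.. end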
 .. note::

    Use different suffixes (``_42``, ``_FOO``, ``_FOO42``, ..) to use different external
@@ -88,47 +97,57 @@ To prepare the journal external drive execute the following command:


 Configuration
-=============
+~~~~~~~~~~~~~

-Edit the [storage] group in the inventory which contains the hostname of the
+Edit the ``[storage]`` group in the inventory which contains the hostname of the
 hosts that have the block devices you have prepped as shown above.

-::
+.. code-block:: none

    [storage]
    controller
    compute1

+.. end

 Enable Ceph in ``/etc/kolla/globals.yml``:

-::
+.. code-block:: yaml

    enable_ceph: "yes"

+.. end

 RadosGW is optional, enable it in ``/etc/kolla/globals.yml``:

-::
+.. code-block:: yaml

    enable_ceph_rgw: "yes"

+.. end

 RGW requires a healthy cluster in order to be successfully deployed. On initial
 start up, RGW will create several pools. The first pool should be in an
 operational state to proceed with the second one, and so on. So, in the case of
 an **all-in-one** deployment, it is necessary to change the default number of
 copies for the pools before deployment. Modify the file
-``/etc/kolla/config/ceph.conf`` and add the contents::
+``/etc/kolla/config/ceph.conf`` and add the contents:

+.. path /etc/kolla/config/ceph.conf
+.. code-block:: ini

    [global]
    osd pool default size = 1
    osd pool default min size = 1

+.. end

 To build a high performance and secure Ceph Storage Cluster, the Ceph community
 recommend the use of two separate networks: public network and cluster network.
 Edit the ``/etc/kolla/globals.yml`` and configure the ``cluster_interface``:

-.. code-block:: ini
+.. path /etc/kolla/globals.yml
+.. code-block:: yaml

    cluster_interface: "eth2"

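Putting the options from this section together, a minimal ``globals.yml``
sketch for a Ceph-enabled deployment with a dedicated cluster network (the
interface name is an assumption):

.. code-block:: yaml

   enable_ceph: "yes"
   enable_ceph_rgw: "yes"
   cluster_interface: "eth2"

.. end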
@@ -139,46 +158,52 @@ For more details, see `NETWORK CONFIGURATION REFERENCE
 of Ceph Documentation.

 Deployment
-==========
+~~~~~~~~~~

 Finally deploy the Ceph-enabled OpenStack:

-::
+.. code-block:: console

    kolla-ansible deploy -i path/to/inventory

-Using a Cache Tier
-==================
+.. end

-An optional `cache tier <http://docs.ceph.com/docs/jewel/rados/operations/cache-tiering/>`_
+Using a Cache Tiering
+~~~~~~~~~~~~~~~~~~~~~

+An optional `cache tiering <http://docs.ceph.com/docs/jewel/rados/operations/cache-tiering/>`_
 can be deployed by formatting at least one cache device and enabling cache.
 tiering in the globals.yml configuration file.

 To prepare an OSD as a cache device, execute the following operations:

-::
+.. code-block:: console

-   # <WARNING ALL DATA ON $DISK will be LOST!>
-   # where $DISK is /dev/sdb or something similar
    parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_CACHE_BOOTSTRAP 1 -1

+.. end

 Enable the Ceph cache tier in ``/etc/kolla/globals.yml``:

-::
+.. code-block:: yaml

    enable_ceph: "yes"
    ceph_enable_cache: "yes"
    # Valid options are [ forward, none, writeback ]
    ceph_cache_mode: "writeback"

-After this run the playbooks as you normally would. For example:
+.. end

-::
+After this run the playbooks as you normally would, for example:

+.. code-block:: console

    kolla-ansible deploy -i path/to/inventory

+.. end

 Setting up an Erasure Coded Pool
-================================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 `Erasure code <http://docs.ceph.com/docs/jewel/rados/operations/erasure-code/>`_
 is the new big thing from Ceph. Kolla has the ability to setup your Ceph pools
@@ -191,7 +216,7 @@ completely removing the pool and recreating it.
 To enable erasure coded pools add the following options to your
 ``/etc/kolla/globals.yml`` configuration file:

-::
+.. code-block:: yaml

    # A requirement for using the erasure-coded pools is you must setup a cache tier
    # Valid options are [ erasure, replicated ]
@@ -199,15 +224,18 @@ To enable erasure coded pools add the following options to your
    # Optionally, you can change the profile
    #ceph_erasure_profile: "k=4 m=2 ruleset-failure-domain=host"

+.. end

 Managing Ceph
-=============
+~~~~~~~~~~~~~

 Check the Ceph status for more diagnostic information. The sample output below
 indicates a healthy cluster:

-::
+.. code-block:: console

    docker exec ceph_mon ceph -s

    cluster 5fba2fbc-551d-11e5-a8ce-01ef4c5cf93c
    health HEALTH_OK
    monmap e1: 1 mons at {controller=10.0.0.128:6789/0}
@@ -222,31 +250,39 @@ nodes, further configuration is required. It is necessary to change the default
 number of copies for the pool. The following example demonstrates how to change
 the number of copies for the pool to 1:

-::
+.. code-block:: console

    docker exec ceph_mon ceph osd pool set rbd size 1

+.. end

 All the pools must be modified if Glance, Nova, and Cinder have been deployed.
 An example of modifying the pools to have 2 copies:

-::
+.. code-block:: console

    for p in images vms volumes backups; do docker exec ceph_mon ceph osd pool set ${p} size 2; done

+.. end

 If using a cache tier, these changes must be made as well:

-::
+.. code-block:: console

    for p in images vms volumes backups; do docker exec ceph_mon ceph osd pool set ${p}-cache size 2; done

+.. end

 The default pool Ceph creates is named **rbd**. It is safe to remove this pool:

-::
+.. code-block:: console

    docker exec ceph_mon ceph osd pool delete rbd rbd --yes-i-really-really-mean-it

+.. end

 Troubleshooting
-===============
+~~~~~~~~~~~~~~~

 Deploy fails with 'Fetching Ceph keyrings ... No JSON object could be decoded'
 ------------------------------------------------------------------------------
@@ -258,16 +294,14 @@ successful deploy.
 In order to do this the operator should remove the `ceph_mon_config` volume
 from each Ceph monitor node:

-::
+.. code-block:: console

-   ansible \
-      -i ansible/inventory/multinode \
+   ansible -i ansible/inventory/multinode \
       -a 'docker volume rm ceph_mon_config' \
       ceph-mon

-=====================
 Simple 3 Node Example
-=====================
+~~~~~~~~~~~~~~~~~~~~~

 This example will show how to deploy Ceph in a very simple setup using 3
 storage nodes. 2 of those nodes (kolla1 and kolla2) will also provide other
@@ -288,7 +322,7 @@ implement caching.
 Here is the top part of the multinode inventory file used in the example
 environment before adding the 3rd node for Ceph:

-::
+.. code-block:: none

    [control]
    # These hostname must be resolvable from your deployment host
@@ -311,27 +345,28 @@ environment before adding the 3rd node for Ceph:
    kolla1.ducourrier.com
    kolla2.ducourrier.com

+.. end

 Configuration
-=============
+-------------

 To prepare the 2nd disk (/dev/sdb) of each nodes for use by Ceph you will need
 to add a partition label to it as shown below:

-::
+.. code-block:: console

-   # <WARNING ALL DATA ON /dev/sdb will be LOST!>
    parted /dev/sdb -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1

+.. end

 Make sure to run this command on each of the 3 nodes or the deployment will
 fail.

 Next, edit the multinode inventory file and make sure the 3 nodes are listed
-under [storage]. In this example I will add kolla3.ducourrier.com to the
+under ``[storage]``. In this example I will add kolla3.ducourrier.com to the
 existing inventory file:

-::
+.. code-block:: none

    [control]
    # These hostname must be resolvable from your deployment host
@@ -355,10 +390,12 @@ existing inventory file:
    kolla2.ducourrier.com
    kolla3.ducourrier.com

+.. end

 It is now time to enable Ceph in the environment by editing the
 ``/etc/kolla/globals.yml`` file:

-::
+.. code-block:: yaml

    enable_ceph: "yes"
    enable_ceph_rgw: "yes"
@@ -366,8 +403,15 @@ It is now time to enable Ceph in the environment by editing the
    glance_backend_file: "no"
    glance_backend_ceph: "yes"

+.. end

+Deployment
+----------

 Finally deploy the Ceph-enabled configuration:

-::
+.. code-block:: console

    kolla-ansible deploy -i path/to/inventory-file

+.. end
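Not part of the commit: once the three-node deploy finishes, the OSD layout
can be inspected from the monitor container, in the same style as the
``ceph -s`` check earlier in this guide:

.. code-block:: console

   docker exec ceph_mon ceph osd tree

.. end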
@@ -5,7 +5,8 @@ Hitachi NAS Platform iSCSI and NFS drives for OpenStack
 ========================================================

 Overview
-========
+~~~~~~~~

 The Block Storage service provides persistent block storage resources that
 Compute instances can consume. This includes secondary attached storage similar
 to the Amazon Elastic Block Storage (EBS) offering. In addition, you can write
@@ -14,6 +15,7 @@ instance.

 Requirements
 ------------

 - Hitachi NAS Platform Models 3080, 3090, 4040, 4060, 4080, and 4100.

 - HNAS/SMU software version is 12.2 or higher.
@@ -53,21 +55,22 @@ The NFS and iSCSI drivers support these operations:
 - Manage and unmanage snapshots (HNAS NFS only).

 Configuration example for Hitachi NAS Platform iSCSI and NFS
-============================================================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 iSCSI backend
 -------------

 Enable cinder hnas backend iscsi in ``/etc/kolla/globals.yml``

-.. code-block:: console
+.. code-block:: yaml

    enable_cinder_backend_hnas_iscsi: "yes"

 Create or modify the file ``/etc/kolla/config/cinder.conf`` and add the
 contents:

-.. code-block:: console
+.. path /etc/kolla/config/cinder.conf
+.. code-block:: ini

    [DEFAULT]
    enabled_backends = hnas-iscsi
@@ -83,25 +86,32 @@ contents:
    hnas_iscsi_svc0_hdp = FS-Baremetal1
    hnas_iscsi_svc0_iscsi_ip = <svc0_ip>

+.. end

 Then set password for the backend in ``/etc/kolla/passwords.yml``:

-.. code-block:: console
+.. code-block:: yaml

    hnas_iscsi_password: supervisor

+.. end

 NFS backend
 -----------

 Enable cinder hnas backend nfs in ``/etc/kolla/globals.yml``

-.. code-block:: console
+.. code-block:: yaml

    enable_cinder_backend_hnas_nfs: "yes"

+.. end

 Create or modify the file ``/etc/kolla/config/cinder.conf`` and
 add the contents:

-.. code-block:: console
+.. path /etc/kolla/config/cinder.conf
+.. code-block:: ini

    [DEFAULT]
    enabled_backends = hnas-nfs
@@ -116,22 +126,28 @@ add the contents:
    hnas_nfs_svc0_volume_type = nfs_gold
    hnas_nfs_svc0_hdp = <svc0_ip>/<export_name>

+.. end

 Then set password for the backend in ``/etc/kolla/passwords.yml``:

-.. code-block:: console
+.. code-block:: yaml

    hnas_nfs_password: supervisor

+.. end

 Configuration on Kolla deployment
 ---------------------------------

 Enable Shared File Systems service and HNAS driver in
 ``/etc/kolla/globals.yml``

-.. code-block:: console
+.. code-block:: yaml

    enable_cinder: "yes"

+.. end

 Configuration on HNAS
 ---------------------

@@ -141,7 +157,9 @@ List the available tenants:

 .. code-block:: console

-   $ openstack project list
+   openstack project list

+.. end

 Create a network to the given tenant (service), providing the tenant ID,
 a name for the network, the name of the physical network over which the
@@ -150,39 +168,47 @@ which the virtual network is implemented:

 .. code-block:: console

-   $ neutron net-create --tenant-id <SERVICE_ID> hnas_network \
+   neutron net-create --tenant-id <SERVICE_ID> hnas_network \
      --provider:physical_network=physnet2 --provider:network_type=flat

+.. end

 Create a subnet to the same tenant (service), the gateway IP of this subnet,
 a name for the subnet, the network ID created before, and the CIDR of
 subnet:

 .. code-block:: console

-   $ neutron subnet-create --tenant-id <SERVICE_ID> --gateway <GATEWAY> \
+   neutron subnet-create --tenant-id <SERVICE_ID> --gateway <GATEWAY> \
      --name hnas_subnet <NETWORK_ID> <SUBNET_CIDR>

+.. end

 Add the subnet interface to a router, providing the router ID and subnet
 ID created before:

 .. code-block:: console

-   $ neutron router-interface-add <ROUTER_ID> <SUBNET_ID>
+   neutron router-interface-add <ROUTER_ID> <SUBNET_ID>

+.. end

 Create volume
-=============
+~~~~~~~~~~~~~

 Create a non-bootable volume.

 .. code-block:: console

-   $ openstack volume create --size 1 my-volume
+   openstack volume create --size 1 my-volume

+.. end

 Verify Operation.

 .. code-block:: console

-   $ cinder show my-volume
+   cinder show my-volume

    +--------------------------------+--------------------------------------+
    | Property                       | Value                                |
@@ -213,7 +239,7 @@ Verify Operation.
    | volume_type                    | None                                 |
    +--------------------------------+--------------------------------------+

-   $ nova volume-attach INSTANCE_ID VOLUME_ID auto
+   nova volume-attach INSTANCE_ID VOLUME_ID auto

    +----------+--------------------------------------+
    | Property | Value                                |
@@ -224,7 +250,7 @@ Verify Operation.
    | volumeId | 4f5b8ae8-9781-411e-8ced-de616ae64cfd |
    +----------+--------------------------------------+

-   $ openstack volume list
+   openstack volume list

    +--------------------------------------+---------------+----------------+------+-------------------------------------------+
    | ID                                   | Display Name  | Status         | Size | Attached to                               |
@@ -232,6 +258,8 @@ Verify Operation.
    | 4f5b8ae8-9781-411e-8ced-de616ae64cfd | my-volume     | in-use         | 1    | Attached to private-instance on /dev/vdb  |
    +--------------------------------------+---------------+----------------+------+-------------------------------------------+

+.. end

 For more information about how to manage volumes, see the
 `Manage volumes
 <https://docs.openstack.org/cinder/latest/cli/cli-manage-volumes.html>`__.
@@ -5,7 +5,7 @@ Cinder in Kolla
 ===============

 Overview
-========
+~~~~~~~~

 Cinder can be deploying using Kolla and supports the following storage
 backends:
@@ -18,24 +18,27 @@ backends:
 * nfs

 LVM
-===
+~~~

 When using the ``lvm`` backend, a volume group will need to be created on each
 storage node. This can either be a real physical volume or a loopback mounted
 file for development. Use ``pvcreate`` and ``vgcreate`` to create the volume
 group. For example with the devices ``/dev/sdb`` and ``/dev/sdc``:

-::
+.. code-block:: console

    <WARNING ALL DATA ON /dev/sdb and /dev/sdc will be LOST!>

    pvcreate /dev/sdb /dev/sdc
    vgcreate cinder-volumes /dev/sdb /dev/sdc

+.. end

 During development, it may be desirable to use file backed block storage. It
 is possible to use a file and mount it as a block device via the loopback
-system. ::
+system.

+.. code-block:: none

    free_device=$(losetup -f)
    fallocate -l 20G /var/lib/cinder_data.img
@ -43,81 +46,113 @@ system. ::

   pvcreate $free_device
   vgcreate cinder-volumes $free_device

.. end

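Note that a loop device attached this way does not persist across reboots.
One possible approach, sketched here rather than prescribed by this guide, is
to reattach the file at boot (for example from ``/etc/rc.local``) and
reactivate the volume group before the cinder services start:

.. code-block:: console

   losetup -f /var/lib/cinder_data.img
   vgchange -ay cinder-volumes

.. end
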
Enable the ``lvm`` backend in ``/etc/kolla/globals.yml``:

.. code-block:: yaml

   enable_cinder_backend_lvm: "yes"

.. end

.. note::

   There are currently issues using the LVM backend in a multi-controller
   setup; see `bug 1571211 <https://launchpad.net/bugs/1571211>`__ for more
   information.

NFS
~~~

To use the ``nfs`` backend, configure ``/etc/exports`` to contain the mount
where the volumes are to be stored:

.. code-block:: none

   /kolla_nfs 192.168.5.0/24(rw,sync,no_root_squash)

.. end

In this example, ``/kolla_nfs`` is the directory on the storage node which
will be ``nfs`` mounted, ``192.168.5.0/24`` is the storage network, and
``rw,sync,no_root_squash`` makes the share read-write and synchronous, and
prevents remote root users from accessing all files.

Then start ``nfsd``:

.. code-block:: console

   systemctl start nfs

.. end

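If ``/etc/exports`` is changed later, while the NFS server is already
running, the export table can be refreshed without a restart:

.. code-block:: console

   exportfs -ra

.. end
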
On the deploy node, create ``/etc/kolla/config/nfs_shares`` with an entry for
each storage node:

.. code-block:: none

   storage01:/kolla_nfs
   storage02:/kolla_nfs

.. end

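Before deploying, it can be useful to confirm that each storage node really
exports the share. ``showmount`` from the NFS client utilities queries the
export list; the host name and output below are illustrative, matching the
example entries above:

.. code-block:: console

   showmount -e storage01

   Export list for storage01:
   /kolla_nfs 192.168.5.0/24

.. end
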
Finally, enable the ``nfs`` backend in ``/etc/kolla/globals.yml``:

.. code-block:: yaml

   enable_cinder_backend_nfs: "yes"

.. end

Validation
~~~~~~~~~~

Create a volume as follows:

.. code-block:: console

   openstack volume create --size 1 steak_volume
   <bunch of stuff printed>

.. end

Verify it is available. If it says "error" here, something went wrong during
the LVM creation of the volume.

.. code-block:: console

   openstack volume list

   +--------------------------------------+--------------+-----------+------+-------------+
   | ID                                   | Display Name | Status    | Size | Attached to |
   +--------------------------------------+--------------+-----------+------+-------------+
   | 0069c17e-8a60-445a-b7f0-383a8b89f87e | steak_volume | available | 1    |             |
   +--------------------------------------+--------------+-----------+------+-------------+

.. end

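When the ``lvm`` backend is in use, the new volume should also appear as a
logical volume in the ``cinder-volumes`` group on the storage node. A quick
check, sketched here with illustrative output (cinder names the logical
volume ``volume-<volume id>``):

.. code-block:: console

   lvs cinder-volumes

   LV                                          VG             Attr       LSize
   volume-0069c17e-8a60-445a-b7f0-383a8b89f87e cinder-volumes -wi-a----- 1.00g

.. end
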
Attach the volume to a server using:

.. code-block:: console

   openstack server add volume steak_server 0069c17e-8a60-445a-b7f0-383a8b89f87e

.. end

Check the console log to verify that the disk was added:

.. code-block:: console

   openstack console log show steak_server

.. end

A ``/dev/vdb`` should appear in the console log, at least when booting cirros.
If the disk stays in the available state, something went wrong during the
iSCSI mounting of the volume to the guest VM.

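Once the device appears, it can be formatted and mounted inside the guest
like any other disk. For example, run the following in the instance, assuming
the new device really is ``/dev/vdb``:

.. code-block:: console

   mkfs.ext4 /dev/vdb
   mkdir -p /mnt/volume
   mount /dev/vdb /mnt/volume

.. end
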
Cinder LVM2 back end with iSCSI
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As of the Newton-1 milestone, Kolla supports LVM2 as a cinder back end. This
is accomplished by introducing two new containers, ``tgtd`` and ``iscsid``.

@ -127,12 +162,16 @@ a bridge between nova-compute process and the server hosting LVG.

In order to use Cinder's LVM back end, an LVM volume group named
``cinder-volumes`` should exist on the server, and the following parameter
must be specified in ``globals.yml``:

.. code-block:: yaml

   enable_cinder_backend_lvm: "yes"

.. end

For Ubuntu and LVM2/iSCSI
-------------------------

The ``iscsid`` process uses configfs, which is normally mounted at
``/sys/kernel/config``, to store discovered target information; on centos/rhel

@ -151,16 +190,23 @@ targeted for nova compute role.

- Make sure configfs gets mounted during the server boot-up process. There
  are multiple ways to accomplish this; one example is to add the following
  to ``/etc/rc.local``:

  .. code-block:: console

     mount -t configfs none /sys/kernel/config

  .. end

Cinder back end with external iSCSI storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In order to use an external storage system (such as one from EMC or NetApp),
the following parameter must be specified in ``globals.yml``:

.. code-block:: yaml

   enable_cinder_backend_iscsi: "yes"

.. end

Also, ``enable_cinder_backend_lvm`` should be set to ``no`` in this case.

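The external backend still needs driver-specific configuration. Kolla merges
operator-supplied files from ``/etc/kolla/config`` into the generated service
configuration, so one possible sketch is an overlay such as the following,
where the backend name and driver class are illustrative placeholders to be
replaced with the values from your vendor's documentation:

.. code-block:: ini

   # /etc/kolla/config/cinder.conf
   [DEFAULT]
   enabled_backends = external-iscsi

   [external-iscsi]
   volume_backend_name = external-iscsi
   volume_driver = <vendor.provided.DriverClass>

.. end
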
@ -5,7 +5,8 @@ Designate in Kolla

==================

Overview
~~~~~~~~

Designate provides DNSaaS services for OpenStack:

- REST API for domain/record management

@ -16,14 +17,16 @@ Designate provides DNSaaS services for OpenStack:

- Support for PowerDNS and Bind9 out of the box

Configuration on Kolla deployment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Enable the Designate service in ``/etc/kolla/globals.yml``:

.. code-block:: yaml

   enable_designate: "yes"

.. end

Configure Designate options in ``/etc/kolla/globals.yml``:

.. important::

@ -31,26 +34,32 @@ Configure Designate options in ``/etc/kolla/globals.yml``

   Designate MDNS node requires the ``dns_interface`` to be reachable from
   the public network.

.. code-block:: yaml

   dns_interface: "eth1"
   designate_backend: "bind9"
   designate_ns_record: "sample.openstack.org"

.. end

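After adjusting ``globals.yml``, apply the change by running a deployment;
one possible invocation, assuming an existing inventory file:

.. code-block:: console

   kolla-ansible deploy -i <INVENTORY_FILE>

.. end
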
Neutron and Nova Integration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Create a default Designate zone for Neutron:

.. code-block:: console

   openstack zone create --email admin@sample.openstack.org sample.openstack.org.

.. end

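The zone ID required for the ``designate-sink`` configuration below can be
read back once the zone exists; for example:

.. code-block:: console

   openstack zone show sample.openstack.org. -f value -c id

.. end
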
Create the designate-sink custom configuration folder:

.. code-block:: console

   mkdir -p /etc/kolla/config/designate/

.. end

Append the Designate zone ID to
``/etc/kolla/config/designate/designate-sink.conf``:

@ -61,43 +70,54 @@ Append Designate Zone ID in ``/etc/kolla/config/designate/designate-sink.conf``

   [handler:neutron_floatingip]
   zone_id = <ZONE_ID>

.. end

Reconfigure Designate:

.. code-block:: console

   kolla-ansible reconfigure -i <INVENTORY_FILE> --tags designate

.. end

Verify operation
~~~~~~~~~~~~~~~~

List available networks:

.. code-block:: console

   openstack network list

.. end

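The ``<NETWORK_ID>`` used in the following steps can be taken from that
listing; as a sketch, assuming a network named ``demo-net``:

.. code-block:: console

   NETWORK_ID=$(openstack network list --name demo-net -f value -c ID)

.. end
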
Associate a domain to a network:

.. code-block:: console

   neutron net-update <NETWORK_ID> --dns_domain sample.openstack.org.

.. end

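The ``neutron`` CLI used above is deprecated; where only the unified
``openstack`` client is available, the equivalent association should be
possible with:

.. code-block:: console

   openstack network set --dns-domain sample.openstack.org. <NETWORK_ID>

.. end
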
Start an instance:

.. code-block:: console

   openstack server create \
     --image cirros \
     --flavor m1.tiny \
     --key-name mykey \
     --nic net-id=${NETWORK_ID} \
     my-vm

.. end

Check DNS records in Designate:

.. code-block:: console

   openstack recordset list sample.openstack.org.

   +--------------------------------------+---------------------------------------+------+---------------------------------------------+--------+--------+
   | id                                   | name                                  | type | records                                     | status | action |
   +--------------------------------------+---------------------------------------+------+---------------------------------------------+--------+--------+

@ -110,13 +130,17 @@ Check DNS records in Designate:

   | e5623d73-4f9f-4b54-9045-b148e0c3342d | my-vm.sample.openstack.org.           | A    | 192.168.190.232                             | ACTIVE | NONE   |
   +--------------------------------------+---------------------------------------+------+---------------------------------------------+--------+--------+

.. end

Query the instance's DNS record against the Designate ``dns_interface`` IP
address:

.. code-block:: console

   dig +short -p 5354 @<DNS_INTERFACE_IP> my-vm.sample.openstack.org. A
   192.168.190.232

.. end

For more information about how Designate works, see
`Designate, a DNSaaS component for OpenStack
<https://docs.openstack.org/designate/latest/>`__.