Upgrade the rst convention of the Reference Guide

We upgrade the rst convention by following the Documentation
Contributor Guide [1].

[1] https://docs.openstack.org/doc-contrib-guide

Change-Id: I10660e2df0e57be0800e26aa4d320074084c3acf
Partially-Implements: blueprint optimize-the-documentation-format

parent 3342bc76fa
commit 47eeacdc7b

Bifrost Guide
=============

From the ``Bifrost`` developer documentation:

    Bifrost (pronounced bye-frost) is a set of Ansible playbooks that automates
    the task of deploying a base image onto a set of known hardware using
    Ironic. It provides modular utility for one-off operating system

baremetal nodes.

Hosts in the System
~~~~~~~~~~~~~~~~~~~

In a system deployed by bifrost we define a number of classes of hosts.

OS images is currently out of scope.

Cloud Deployment Procedure
~~~~~~~~~~~~~~~~~~~~~~~~~~

Cloud deployment using kolla and bifrost follows these high-level
steps:

#. Deploy OpenStack services on the cloud hosts provisioned by bifrost.

Preparation
~~~~~~~~~~~

Prepare the Control Host
------------------------

will attempt to modify ``/etc/hosts`` on the deployment host to ensure that
this is the case. Docker bind mounts ``/etc/hosts`` into the container from a
volume. This prevents atomic renames which will prevent Ansible from fixing
the ``/etc/hosts`` file automatically.

To enable bifrost to be bootstrapped correctly, add an entry to ``/etc/hosts``
resolving the deployment host's hostname to ``127.0.0.1``, for example:

.. code-block:: console

   cat /etc/hosts
   127.0.0.1 bifrost localhost

.. end

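If in doubt, you can confirm that the entry behaves as expected. This quick
check is an addition to the guide and assumes the example ``bifrost``
hostname above:

.. code-block:: console

   getent hosts bifrost

.. end

This should print ``127.0.0.1`` followed by the hostname.
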
The following lines are desirable for IPv6 capable hosts:

.. code-block:: none

   ::1 ip6-localhost ip6-loopback
   fe00::0 ip6-localnet
   ff00::0 ip6-mcastprefix
   ff02::1 ip6-allnodes
   ff02::2 ip6-allrouters
   ff02::3 ip6-allhosts
   192.168.100.15 bifrost

.. end

Build a Bifrost Container Image
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section provides instructions on how to build a container image for
bifrost using kolla.

Currently kolla only supports the ``source`` install type for the bifrost image.

#. To generate the ``kolla-build.conf`` configuration file:

   * If required, generate a default configuration file for :command:`kolla-build`:

     .. code-block:: console

        cd kolla
        tox -e genconfig

     .. end

   * Modify ``kolla-build.conf``, setting ``install_type`` to ``source``:

     .. path etc/kolla/kolla-build.conf
     .. code-block:: ini

        install_type = source

     .. end

   Alternatively, instead of using ``kolla-build.conf``, a ``source`` build can
   be enabled by appending ``--type source`` to the :command:`kolla-build` or
   ``tools/build.py`` command.

#. To build images, for development:

   .. code-block:: console

      cd kolla
      tools/build.py bifrost-deploy

   .. end

   For production:

   .. code-block:: console

      kolla-build bifrost-deploy

   .. end

.. note::

   By default :command:`kolla-build` will build all containers using CentOS as
   the base image. To change this behavior, use the following parameter with
   the :command:`kolla-build` or ``tools/build.py`` command:

   .. code-block:: console

      --base [ubuntu|centos|oraclelinux]

   .. end

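As a sanity check (an addition, not part of the original text), you can
confirm that the image was built and tagged, assuming kolla's default image
naming:

.. code-block:: console

   docker images | grep bifrost-deploy

.. end
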
Configure and Deploy a Bifrost Container
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section provides instructions for how to configure and deploy a container
running bifrost services.

group is defined which contains all hosts in the ``deployment`` group. This
top level ``deployment`` group is intended to represent the host running the
``bifrost_deploy`` container. By default, this group contains ``localhost``.
See :doc:`/user/multinode` for details on how to modify the Ansible inventory
in a multinode deployment.

Bifrost does not currently support running on multiple hosts, so the ``bifrost``
group should contain only a single host; however, this is not enforced by

   bifrost_network_interface: eth1

.. end

Note that this interface should typically have L2 network connectivity with the
bare metal cloud hosts in order to provide DHCP leases with PXE boot options.

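A quick way to confirm that the chosen interface exists and is up is the
standard iproute2 tooling; ``eth1`` is simply the example name from above:

.. code-block:: console

   ip -br link show eth1

.. end
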
   kolla_install_type: source

.. end

Prepare Bifrost Configuration
-----------------------------

.. code-block:: yaml

   ---
   cloud1:
     uuid: "31303735-3934-4247-3830-333132535336"
     driver_info:
       power:
         ipmi_username: "admin"
         ipmi_address: "192.168.1.30"
         ipmi_password: "root"
     nics:
       -
         mac: "1c:c1:de:1c:aa:53"
       -
         mac: "1c:c1:de:1c:aa:52"
     driver: "agent_ipmitool"
     ipv4_address: "192.168.1.10"
     properties:
       cpu_arch: "x86_64"
       ram: "24576"
       disk_size: "120"
       cpus: "16"
     name: "cloud1"

.. end

The required inventory will be specific to the hardware and environment in use.

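Since a malformed ``servers.yml`` only fails later inside the container, it
can be worth validating the YAML up front. A minimal sketch, assuming Python
with PyYAML is available on the deployment host:

.. code-block:: console

   python -c 'import yaml; yaml.safe_load(open("/etc/kolla/config/bifrost/servers.yml"))'

.. end
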
The file ``bifrost.yml`` provides global configuration for the bifrost
playbooks. By default kolla mostly uses bifrost's default variable values.
For details on bifrost's variables see the bifrost documentation. For example:

.. code-block:: yaml

   # dhcp_lease_time: 12h
   # dhcp_static_mask: 255.255.255.0

.. end

Create Disk Image Builder Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**Ubuntu-based** image for deployment to nodes. For details on bifrost's
variables see the bifrost documentation.

For example, to use the ``debian`` Disk Image Builder OS element:

.. code-block:: yaml

   dib_os_element: debian

.. end

See the `diskimage-builder documentation
<https://docs.openstack.org/diskimage-builder/latest/>`__ for more details.

Deploy Bifrost
~~~~~~~~~~~~~~

The bifrost container can be deployed either using kolla-ansible or manually.

Deploy Bifrost using Kolla-Ansible
----------------------------------

For development:

.. code-block:: console

   cd kolla-ansible
   tools/kolla-ansible deploy-bifrost

.. end

For production:

.. code-block:: console

   kolla-ansible deploy-bifrost

.. end

Deploy Bifrost manually
-----------------------

#. Start Bifrost Container

   .. code-block:: console

      docker run -it --net=host -v /dev:/dev -d \
        --privileged --name bifrost_deploy \
        kolla/ubuntu-source-bifrost-deploy:3.0.1

   .. end

#. Copy Configuration Files

   .. code-block:: console

      docker exec -it bifrost_deploy mkdir /etc/bifrost
      docker cp /etc/kolla/config/bifrost/servers.yml bifrost_deploy:/etc/bifrost/servers.yml
      docker cp /etc/kolla/config/bifrost/bifrost.yml bifrost_deploy:/etc/bifrost/bifrost.yml
      docker cp /etc/kolla/config/bifrost/dib.yml bifrost_deploy:/etc/bifrost/dib.yml

   .. end

#. Bootstrap Bifrost

   .. code-block:: console

      docker exec -it bifrost_deploy bash

   .. end

#. Generate an SSH Key

   .. code-block:: console

      ssh-keygen

   .. end

#. Bootstrap and Start Services

   .. code-block:: console

      cd /bifrost
      ./scripts/env-setup.sh
      . env-vars
      cat > /etc/rabbitmq/rabbitmq-env.conf << EOF
      HOME=/var/lib/rabbitmq
      EOF
      ansible-playbook -vvvv \
        -i /bifrost/playbooks/inventory/target \
        /bifrost/playbooks/install.yaml \
        -e @/etc/bifrost/bifrost.yml \
        -e @/etc/bifrost/dib.yml \
        -e skip_package_install=true

   .. end

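Before validating the services themselves, it can help to confirm that the
container is still running; this check is an addition and uses only the
standard Docker CLI:

.. code-block:: console

   docker ps --filter name=bifrost_deploy

.. end
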
Validate the Deployed Container
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: console

   docker exec -it bifrost_deploy bash
   cd /bifrost
   . env-vars

.. end

Running ``ironic node-list`` should return with no nodes, for example:

.. code-block:: console

   (bifrost-deploy)[root@bifrost bifrost]# ironic node-list
   +------+------+---------------+-------------+--------------------+-------------+
   | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
   +------+------+---------------+-------------+--------------------+-------------+
   +------+------+---------------+-------------+--------------------+-------------+

.. end

Enroll and Deploy Physical Nodes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Once we have deployed a bifrost container we can use it to provision the bare
metal cloud hosts specified in the inventory file. Again, this can be done
either using kolla-ansible or manually.

By Kolla-Ansible
----------------

For development:

.. code-block:: console

   tools/kolla-ansible deploy-servers

.. end

For production:

.. code-block:: console

   kolla-ansible deploy-servers

.. end

Manually
--------

.. code-block:: console

   docker exec -it bifrost_deploy bash
   cd /bifrost
   . env-vars
   export BIFROST_INVENTORY_SOURCE=/etc/bifrost/servers.yml
   ansible-playbook -vvvv \
     -i /bifrost/playbooks/inventory/bifrost_inventory.py \
     /bifrost/playbooks/enroll-dynamic.yaml \
     -e "ansible_python_interpreter=/var/lib/kolla/venv/bin/python" \
     -e @/etc/bifrost/bifrost.yml

   docker exec -it bifrost_deploy bash
   cd /bifrost
   . env-vars
   export BIFROST_INVENTORY_SOURCE=/etc/bifrost/servers.yml
   ansible-playbook -vvvv \
     -i /bifrost/playbooks/inventory/bifrost_inventory.py \
     /bifrost/playbooks/deploy-dynamic.yaml \
     -e "ansible_python_interpreter=/var/lib/kolla/venv/bin/python" \
     -e @/etc/bifrost/bifrost.yml

.. end

At this point Ironic should clean down the nodes and install the default
OS image.

Advanced Configuration
~~~~~~~~~~~~~~~~~~~~~~

Bring Your Own Image
--------------------

update the private and public keys under ``bifrost_ssh_key``.

Known issues
~~~~~~~~~~~~

SSH daemon not running
----------------------

By default ``sshd`` is installed in the image but may not be enabled. If you
encounter this issue you will have to access the server physically in recovery
mode to enable the ``sshd`` service. If your hardware supports it, this can be
done remotely with :command:`ipmitool` and Serial Over LAN. For example:

.. code-block:: console

   ipmitool -I lanplus -H 192.168.1.30 -U admin -P root sol activate

.. end

References
~~~~~~~~~~

* `Bifrost documentation <https://docs.openstack.org/bifrost/latest/>`__
* `Bifrost troubleshooting guide <https://docs.openstack.org/bifrost/latest/user/troubleshooting.html>`__
* `Bifrost code repository <https://github.com/openstack/bifrost>`__

and grep" solution quickly becomes unmanageable.

Preparation and deployment
~~~~~~~~~~~~~~~~~~~~~~~~~~

Modify the configuration file ``/etc/kolla/globals.yml`` and change
the following:

.. code-block:: yaml

   enable_central_logging: "yes"

.. end

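Editing ``globals.yml`` alone does not restart anything; the setting takes
effect on the next deploy or reconfigure run. A sketch, assuming a standard
multinode inventory:

.. code-block:: console

   kolla-ansible -i multinode reconfigure

.. end
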
Elasticsearch
~~~~~~~~~~~~~

Kolla deploys Elasticsearch as part of the E*K stack to store, organize
and make logs easily accessible.

By default Elasticsearch is deployed on port ``9200``.

.. note::

   Elasticsearch stores a lot of logs, so if you are running centralized logging,
   remember to give ``/var/lib/docker`` adequate space.

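To check that Elasticsearch is up once deployed, you can query its cluster
health API; ``<elasticsearch_host>`` is a placeholder for the address the
service listens on:

.. code-block:: console

   curl http://<elasticsearch_host>:9200/_cluster/health?pretty

.. end
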
Kibana
~~~~~~

Kolla deploys Kibana as part of the E*K stack in order to allow operators to
search and visualise logs in a centralised manner.

First, re-run the server creation with ``--debug``:

.. code-block:: console

   openstack --debug server create --image cirros --flavor m1.tiny \
     --key-name mykey --nic net-id=00af016f-dffe-4e3c-a9b8-ec52ccd8ea65 \
     demo1

.. end

In this output, look for the key ``X-Compute-Request-Id``. This is a unique
identifier that can be used to track the request through the system. An
example ID looks like this:

.. code-block:: none

   X-Compute-Request-Id: req-c076b50a-6a22-48bf-8810-b9f41176a6d5

.. end

Taking the value of ``X-Compute-Request-Id``, enter the value into the Kibana
search bar, minus the leading ``req-``. Assuming some basic filters have been

be chosen. The chart can be generated by pressing a green arrow on the top
of the left-side menu.

.. note::

   After creating a visualization, it can be saved by choosing the *save
   visualization* option in the menu on the right. If it is not saved, it
   will be lost after leaving a page or creating another visualization.

in this place by moving them or resizing. The color of charts can also be
changed by checking the colorful dots on the legend near each visualization.

.. note::

   After creating a dashboard, it can be saved by choosing the *save dashboard*
   option in the menu on the right. If it is not saved, it will be lost after
   leaving a page or creating another dashboard.

choosing the *import* option.

Custom log forwarding
~~~~~~~~~~~~~~~~~~~~~

In some scenarios it may be useful to forward logs to a logging service other
than elasticsearch. This can be done by configuring custom fluentd outputs.

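For example, a custom output forwarding all logs to a remote fluentd
aggregator might look like the following sketch; the drop-in path and the
aggregator address are assumptions for illustration:

.. code-block:: none

   # /etc/kolla/config/fluentd/output/10-forward.conf (assumed path)
   <match **>
     @type forward
     <server>
       host 192.168.1.100
       port 24224
     </server>
   </match>

.. end
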
host and a single block device.

Requirements
~~~~~~~~~~~~

* A minimum of 3 hosts for a vanilla deploy
* A minimum of 1 block device per host

Preparation
~~~~~~~~~~~

To prepare a disk for use as a
`Ceph OSD <http://docs.ceph.com/docs/master/man/8/ceph-osd/>`_ you must add a

To prepare an OSD as a storage drive, execute the following operations:

.. warning::

   ALL DATA ON $DISK will be LOST! Here, $DISK is /dev/sdb or something similar.

.. code-block:: console

   parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1

.. end

The following shows an example of using parted to configure ``/dev/sdb`` for
usage with Kolla.

.. code-block:: console

   parted /dev/sdb -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
   parted /dev/sdb print
   Model: VMware, VMware Virtual S (scsi)
   Disk /dev/sdb: 10.7GB
   Sector size (logical/physical): 512B/512B
   Partition Table: gpt
   Number  Start   End     Size    File system  Name                       Flags
    1      1049kB  10.7GB  10.7GB               KOLLA_CEPH_OSD_BOOTSTRAP

.. end

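You can verify that the bootstrap label was applied before deploying; a quick
check with ``lsblk`` (an addition to the guide):

.. code-block:: console

   lsblk -o NAME,SIZE,PARTLABEL /dev/sdb

.. end
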
Using an external journal drive
-------------------------------

Prepare the storage drive in the same way as documented above:

.. warning::

   ALL DATA ON $DISK will be LOST! Here, $DISK is /dev/sdb or something similar.

.. code-block:: console

   parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FOO 1 -1

.. end

To prepare the journal external drive execute the following command:

.. code-block:: console

   parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FOO_J 1 -1

.. end

.. note::

Configuration
~~~~~~~~~~~~~

Edit the ``[storage]`` group in the inventory which contains the hostnames of
the hosts that have the block devices you have prepped as shown above.

.. code-block:: none

   [storage]
   controller
   compute1

.. end

Enable Ceph in ``/etc/kolla/globals.yml``:

.. code-block:: yaml

   enable_ceph: "yes"

.. end

RadosGW is optional, enable it in ``/etc/kolla/globals.yml``:

.. code-block:: yaml

   enable_ceph_rgw: "yes"

.. end

RGW requires a healthy cluster in order to be successfully deployed. On initial
start up, RGW will create several pools. The first pool should be in an
operational state to proceed with the second one, and so on. So, in the case of
an **all-in-one** deployment, it is necessary to change the default number of
copies for the pools before deployment. Modify the file
``/etc/kolla/config/ceph.conf`` and add the contents:

.. path /etc/kolla/config/ceph.conf
.. code-block:: ini

   [global]
   osd pool default size = 1
   osd pool default min size = 1

.. end

To build a high performance and secure Ceph Storage Cluster, the Ceph community
recommends the use of two separate networks: public network and cluster network.
Edit the ``/etc/kolla/globals.yml`` and configure the ``cluster_interface``:

.. path /etc/kolla/globals.yml
.. code-block:: yaml

   cluster_interface: "eth2"

of Ceph Documentation.

Deployment
~~~~~~~~~~

Finally deploy the Ceph-enabled OpenStack:

.. code-block:: console

   kolla-ansible deploy -i path/to/inventory

.. end

Using Cache Tiering
~~~~~~~~~~~~~~~~~~~

An optional `cache tier <http://docs.ceph.com/docs/jewel/rados/operations/cache-tiering/>`_
can be deployed by formatting at least one cache device and enabling cache
tiering in the ``globals.yml`` configuration file.

To prepare an OSD as a cache device, execute the following operations:

.. code-block:: console

   parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_CACHE_BOOTSTRAP 1 -1

.. end

Enable the Ceph cache tier in ``/etc/kolla/globals.yml``:

.. code-block:: yaml

   enable_ceph: "yes"
   ceph_enable_cache: "yes"
   # Valid options are [ forward, none, writeback ]
   ceph_cache_mode: "writeback"

.. end

After this run the playbooks as you normally would, for example:

.. code-block:: console

   kolla-ansible deploy -i path/to/inventory

.. end

Setting up an Erasure Coded Pool
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

`Erasure code <http://docs.ceph.com/docs/jewel/rados/operations/erasure-code/>`_
is the new big thing from Ceph. Kolla has the ability to setup your Ceph pools

To enable erasure coded pools add the following options to your
``/etc/kolla/globals.yml`` configuration file:

.. code-block:: yaml

   # A requirement for using the erasure-coded pools is you must setup a cache tier
   # Valid options are [ erasure, replicated ]
   ceph_pool_type: "erasure"
   # Optionally, you can change the profile
   #ceph_erasure_profile: "k=4 m=2 ruleset-failure-domain=host"

.. end

Managing Ceph
~~~~~~~~~~~~~

Check the Ceph status for more diagnostic information. The sample output below
indicates a healthy cluster:

.. code-block:: console

   docker exec ceph_mon ceph -s

   cluster 5fba2fbc-551d-11e5-a8ce-01ef4c5cf93c
   health HEALTH_OK
   monmap e1: 1 mons at {controller=10.0.0.128:6789/0}
   election epoch 2, quorum 0 controller
   osdmap e18: 2 osds: 2 up, 2 in
   pgmap v27: 64 pgs, 1 pools, 0 bytes data, 0 objects
   68676 kB used, 20390 MB / 20457 MB avail
   64 active+clean

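If the status reports warnings instead, ``ceph health detail`` breaks the
state down per check. This extra command is standard Ceph CLI, not part of
the original text:

.. code-block:: console

   docker exec ceph_mon ceph health detail

.. end
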
If Ceph is run in an **all-in-one** deployment or with less than three storage
nodes, further configuration is required. It is necessary to change the default
number of copies for the pool. The following example demonstrates how to change
the number of copies for the pool to 1:

.. code-block:: console

   docker exec ceph_mon ceph osd pool set rbd size 1

.. end

All the pools must be modified if Glance, Nova, and Cinder have been deployed.
An example of modifying the pools to have 2 copies:

.. code-block:: console

   for p in images vms volumes backups; do docker exec ceph_mon ceph osd pool set ${p} size 2; done

.. end

If using a cache tier, these changes must be made as well:

.. code-block:: console

   for p in images vms volumes backups; do docker exec ceph_mon ceph osd pool set ${p}-cache size 2; done

.. end

The default pool Ceph creates is named **rbd**. It is safe to remove this pool:

.. code-block:: console

   docker exec ceph_mon ceph osd pool delete rbd rbd --yes-i-really-really-mean-it

.. end

Troubleshooting
~~~~~~~~~~~~~~~

Deploy fails with 'Fetching Ceph keyrings ... No JSON object could be decoded'
------------------------------------------------------------------------------

In order to do this the operator should remove the ``ceph_mon_config`` volume
from each Ceph monitor node:

.. code-block:: console

   ansible -i ansible/inventory/multinode \
     -a 'docker volume rm ceph_mon_config' \
     ceph-mon

Simple 3 Node Example
~~~~~~~~~~~~~~~~~~~~~

This example will show how to deploy Ceph in a very simple setup using 3
storage nodes. 2 of those nodes (kolla1 and kolla2) will also provide other

Here is the top part of the multinode inventory file used in the example
environment before adding the 3rd node for Ceph:

.. code-block:: none

   [control]
   # These hostnames must be resolvable from your deployment host
   kolla1.ducourrier.com
   kolla2.ducourrier.com

   [network]
   kolla1.ducourrier.com
   kolla2.ducourrier.com

   [compute]
   kolla1.ducourrier.com
   kolla2.ducourrier.com

   [monitoring]
   kolla1.ducourrier.com
   kolla2.ducourrier.com

   [storage]
   kolla1.ducourrier.com
   kolla2.ducourrier.com

.. end

Configuration
-------------

To prepare the 2nd disk (/dev/sdb) of each node for use by Ceph you will need
to add a partition label to it as shown below:

.. code-block:: console

   parted /dev/sdb -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1

.. end

Make sure to run this command on each of the 3 nodes or the deployment will
fail.

Next, edit the multinode inventory file and make sure the 3 nodes are listed
under ``[storage]``. In this example I will add kolla3.ducourrier.com to the
existing inventory file:

.. code-block:: none

   [control]
   # These hostnames must be resolvable from your deployment host
   kolla1.ducourrier.com
   kolla2.ducourrier.com

   [network]
   kolla1.ducourrier.com
   kolla2.ducourrier.com

   [compute]
   kolla1.ducourrier.com
   kolla2.ducourrier.com

   [monitoring]
   kolla1.ducourrier.com
   kolla2.ducourrier.com

   [storage]
   kolla1.ducourrier.com
   kolla2.ducourrier.com
   kolla3.ducourrier.com

.. end

It is now time to enable Ceph in the environment by editing the
``/etc/kolla/globals.yml`` file:

.. code-block:: yaml

   enable_ceph: "yes"
   enable_ceph_rgw: "yes"
   enable_cinder: "yes"
   glance_backend_file: "no"
   glance_backend_ceph: "yes"

.. end

Deployment
----------

Finally deploy the Ceph-enabled configuration:

.. code-block:: console

   kolla-ansible deploy -i path/to/inventory-file

.. end

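Once the deploy finishes, one way to confirm that all three nodes contributed
an OSD is to inspect the OSD tree (an added check using the standard Ceph
CLI):

.. code-block:: console

   docker exec ceph_mon ceph osd tree

.. end
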
|
@ -5,7 +5,8 @@ Hitachi NAS Platform iSCSI and NFS drives for OpenStack
|
|||||||
========================================================
|
========================================================
|
||||||
|
|
||||||
Overview
|
Overview
|
||||||
========
|
~~~~~~~~
|
||||||
|
|
||||||
The Block Storage service provides persistent block storage resources that
|
The Block Storage service provides persistent block storage resources that
|
||||||
Compute instances can consume. This includes secondary attached storage similar
|
Compute instances can consume. This includes secondary attached storage similar
|
||||||
to the Amazon Elastic Block Storage (EBS) offering. In addition, you can write
|
to the Amazon Elastic Block Storage (EBS) offering. In addition, you can write
|
||||||
Requirements
------------

- Hitachi NAS Platform Models 3080, 3090, 4040, 4060, 4080, and 4100.

- HNAS/SMU software version is 12.2 or higher.

- Manage and unmanage snapshots (HNAS NFS only).

Configuration example for Hitachi NAS Platform iSCSI and NFS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

iSCSI backend
-------------

Enable cinder hnas backend iscsi in ``/etc/kolla/globals.yml``:

.. code-block:: yaml

   enable_cinder_backend_hnas_iscsi: "yes"

Create or modify the file ``/etc/kolla/config/cinder.conf`` and add the
contents:

.. path /etc/kolla/config/cinder.conf
.. code-block:: ini

   [DEFAULT]
   enabled_backends = hnas-iscsi

   [hnas-iscsi]
   volume_driver = cinder.volume.drivers.hitachi.hnas_iscsi.HNASISCSIDriver
   volume_iscsi_backend = hnas_iscsi_backend
   hnas_iscsi_username = supervisor
   hnas_iscsi_mgmt_ip0 = <hnas_ip>
   hnas_chap_enabled = True

   hnas_iscsi_svc0_volume_type = iscsi_gold
   hnas_iscsi_svc0_hdp = FS-Baremetal1
   hnas_iscsi_svc0_iscsi_ip = <svc0_ip>

.. end

Then set password for the backend in ``/etc/kolla/passwords.yml``:

.. code-block:: yaml

   hnas_iscsi_password: supervisor

.. end

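After editing ``cinder.conf`` and ``passwords.yml``, the cinder containers
need to pick up the change; a sketch using kolla-ansible's reconfigure
action, limited to cinder via ``--tags``:

.. code-block:: console

   kolla-ansible -i <inventory> reconfigure --tags cinder

.. end
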
NFS backend
-----------

Enable cinder hnas backend nfs in ``/etc/kolla/globals.yml``:

.. code-block:: yaml

   enable_cinder_backend_hnas_nfs: "yes"

.. end

Create or modify the file ``/etc/kolla/config/cinder.conf`` and
add the contents:

.. path /etc/kolla/config/cinder.conf
.. code-block:: ini

   [DEFAULT]
   enabled_backends = hnas-nfs

   [hnas-nfs]
   volume_driver = cinder.volume.drivers.hitachi.hnas_nfs.HNASNFSDriver
   volume_nfs_backend = hnas_nfs_backend
   hnas_nfs_username = supervisor
   hnas_nfs_mgmt_ip0 = <hnas_ip>
   hnas_chap_enabled = True

   hnas_nfs_svc0_volume_type = nfs_gold
   hnas_nfs_svc0_hdp = <svc0_ip>/<export_name>

.. end

Then set password for the backend in ``/etc/kolla/passwords.yml``:

.. code-block:: yaml

   hnas_nfs_password: supervisor

.. end

Configuration on Kolla deployment
---------------------------------

Enable Shared File Systems service and HNAS driver in
``/etc/kolla/globals.yml``:

.. code-block:: yaml

   enable_cinder: "yes"

.. end

Configuration on HNAS
---------------------

@ -141,7 +157,9 @@ List the available tenants:
|
|||||||
|
|
||||||
.. code-block:: console
|
.. code-block:: console
|
||||||
|
|
||||||
$ openstack project list
|
openstack project list
|
||||||
|
|
||||||
|
.. end
|
||||||
|
|
||||||
Create a network to the given tenant (service), providing the tenant ID,
|
Create a network to the given tenant (service), providing the tenant ID,
|
||||||
a name for the network, the name of the physical network over which the
|
a name for the network, the name of the physical network over which the
|
||||||
@ -150,8 +168,10 @@ which the virtual network is implemented:
|
|||||||
|
|
||||||
.. code-block:: console
|
.. code-block:: console
|
||||||
|
|
||||||
$ neutron net-create --tenant-id <SERVICE_ID> hnas_network \
|
neutron net-create --tenant-id <SERVICE_ID> hnas_network \
|
||||||
--provider:physical_network=physnet2 --provider:network_type=flat
|
--provider:physical_network=physnet2 --provider:network_type=flat
|
||||||
|
|
||||||
|
.. end
|
||||||
|
|
||||||
Create a subnet to the same tenant (service), the gateway IP of this subnet,
|
Create a subnet to the same tenant (service), the gateway IP of this subnet,
|
||||||
a name for the subnet, the network ID created before, and the CIDR of
|
a name for the subnet, the network ID created before, and the CIDR of
|
||||||
@ -159,78 +179,86 @@ subnet:
|
|||||||
|
|
||||||
.. code-block:: console
|
.. code-block:: console
|
||||||
|
|
||||||
$ neutron subnet-create --tenant-id <SERVICE_ID> --gateway <GATEWAY> \
|
neutron subnet-create --tenant-id <SERVICE_ID> --gateway <GATEWAY> \
|
||||||
--name hnas_subnet <NETWORK_ID> <SUBNET_CIDR>
|
--name hnas_subnet <NETWORK_ID> <SUBNET_CIDR>
|
||||||
|
|
||||||
|
.. end
|
||||||
|
|
||||||
Add the subnet interface to a router, providing the router ID and subnet
|
Add the subnet interface to a router, providing the router ID and subnet
|
||||||
ID created before:
|
ID created before:
|
||||||
|
|
||||||
.. code-block:: console
|
.. code-block:: console
|
||||||
|
|
||||||
$ neutron router-interface-add <ROUTER_ID> <SUBNET_ID>
|
neutron router-interface-add <ROUTER_ID> <SUBNET_ID>
|
||||||
|
|
||||||
|
.. end
|
||||||
|
|
||||||

Create volume
=============
~~~~~~~~~~~~~

Create a non-bootable volume.

.. code-block:: console

$ openstack volume create --size 1 my-volume
openstack volume create --size 1 my-volume

.. end

Verify the operation:

.. code-block:: console

$ cinder show my-volume
cinder show my-volume

+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2017-01-17T19:02:45.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 4f5b8ae8-9781-411e-8ced-de616ae64cfd |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | my-volume                            |
| os-vol-host-attr:host          | compute@hnas-iscsi#iscsi_gold        |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 16def9176bc64bd283d419ac2651e299     |
| replication_status             | disabled                             |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | available                            |
| updated_at                     | 2017-01-17T19:02:46.000000           |
| user_id                        | fb318b96929c41c6949360c4ccdbf8c0     |
| volume_type                    | None                                 |
+--------------------------------+--------------------------------------+

$ nova volume-attach INSTANCE_ID VOLUME_ID auto
nova volume-attach INSTANCE_ID VOLUME_ID auto

+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdc                             |
| id       | 4f5b8ae8-9781-411e-8ced-de616ae64cfd |
| serverId | 3bf5e176-be05-4634-8cbd-e5fe491f5f9c |
| volumeId | 4f5b8ae8-9781-411e-8ced-de616ae64cfd |
+----------+--------------------------------------+

$ openstack volume list
openstack volume list

+--------------------------------------+---------------+----------------+------+-------------------------------------------+
| ID                                   | Display Name  | Status         | Size | Attached to                               |
+--------------------------------------+---------------+----------------+------+-------------------------------------------+
| 4f5b8ae8-9781-411e-8ced-de616ae64cfd | my-volume     | in-use         | 1    | Attached to private-instance on /dev/vdb  |
+--------------------------------------+---------------+----------------+------+-------------------------------------------+

.. end

For more information about how to manage volumes, see the
`Manage volumes

@ -5,7 +5,7 @@ Cinder in Kolla
===============

Overview
========
~~~~~~~~

Cinder can be deployed using Kolla and supports the following storage
backends:

@ -18,106 +18,141 @@ backends:

* nfs

LVM
===
~~~

When using the ``lvm`` backend, a volume group will need to be created on each
storage node. This can either be a real physical volume or a loopback mounted
file for development. Use ``pvcreate`` and ``vgcreate`` to create the volume
group. For example, with the devices ``/dev/sdb`` and ``/dev/sdc``:

::
.. code-block:: console

<WARNING ALL DATA ON /dev/sdb and /dev/sdc will be LOST!>

pvcreate /dev/sdb /dev/sdc
vgcreate cinder-volumes /dev/sdb /dev/sdc

.. end
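
To confirm the volume group exists before deploying, the standard LVM2 tools
can be used, for example:

.. code-block:: console

vgs cinder-volumes

.. end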

During development, it may be desirable to use file backed block storage. It
is possible to use a file and mount it as a block device via the loopback
system. ::
system.

.. code-block:: none

free_device=$(losetup -f)
fallocate -l 20G /var/lib/cinder_data.img
losetup $free_device /var/lib/cinder_data.img
pvcreate $free_device
vgcreate cinder-volumes $free_device

.. end
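
When the file backed volume group is no longer needed, it can be torn down
again; a minimal sketch, assuming no cinder volumes remain in the group:

.. code-block:: none

vgremove cinder-volumes
losetup -d $free_device
rm /var/lib/cinder_data.img

.. end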

Enable the ``lvm`` backend in ``/etc/kolla/globals.yml``:

::
.. code-block:: yaml

enable_cinder_backend_lvm: "yes"

.. end

.. note::

There are currently issues using the LVM backend in a multi-controller setup,
see `bug 1571211 <https://launchpad.net/bugs/1571211>`__ for more info.

NFS
===
~~~

To use the ``nfs`` backend, configure ``/etc/exports`` to contain the mount
where the volumes are to be stored::
where the volumes are to be stored:

.. code-block:: none

/kolla_nfs 192.168.5.0/24(rw,sync,no_root_squash)

.. end

In this example, ``/kolla_nfs`` is the directory on the storage node which will
be ``nfs`` mounted, ``192.168.5.0/24`` is the storage network, and
``rw,sync,no_root_squash`` means make the share read-write, synchronous, and
prevent remote root users from having access to all files.

Then start ``nfsd``::
Then start ``nfsd``:

.. code-block:: console

systemctl start nfs

.. end
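
If ``nfsd`` was already running when ``/etc/exports`` was edited, the export
list can be refreshed without a restart, for example:

.. code-block:: console

exportfs -ra

.. end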

On the deploy node, create ``/etc/kolla/config/nfs_shares`` with an entry for
each storage node::
each storage node:

.. code-block:: none

storage01:/kolla_nfs
storage02:/kolla_nfs

.. end

Finally, enable the ``nfs`` backend in ``/etc/kolla/globals.yml``::
Finally, enable the ``nfs`` backend in ``/etc/kolla/globals.yml``:

.. code-block:: yaml

enable_cinder_backend_nfs: "yes"

.. end

Validation
==========
~~~~~~~~~~

Create a volume as follows:

::
.. code-block:: console

$ openstack volume create --size 1 steak_volume
openstack volume create --size 1 steak_volume
<bunch of stuff printed>

.. end

Verify it is available. If it says "error" here, something went wrong during
LVM creation of the volume. ::
LVM creation of the volume.

.. code-block:: console

$ openstack volume list
openstack volume list

+--------------------------------------+--------------+-----------+------+-------------+
| ID                                   | Display Name | Status    | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------+
| 0069c17e-8a60-445a-b7f0-383a8b89f87e | steak_volume | available | 1    |             |
+--------------------------------------+--------------+-----------+------+-------------+

.. end

Attach the volume to a server using:

::
.. code-block:: console

openstack server add volume steak_server 0069c17e-8a60-445a-b7f0-383a8b89f87e

.. end

Check that the console log shows the added disk:

::
.. code-block:: console

openstack console log show steak_server

.. end

A ``/dev/vdb`` should appear in the console log, at least when booting cirros.
If the disk stays in the available state, something went wrong during the
iSCSI mounting of the volume to the guest VM.
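
Inside the guest, the new disk can then be put to use; a quick sketch,
assuming it appeared as ``/dev/vdb`` as above:

.. code-block:: console

mkfs.ext4 /dev/vdb
mount /dev/vdb /mnt

.. end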

Cinder LVM2 back end with iSCSI
===============================
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As of the Newton-1 milestone, Kolla supports LVM2 as a cinder back end. It is
accomplished by introducing two new containers ``tgtd`` and ``iscsid``.
@ -127,12 +162,16 @@ a bridge between nova-compute process and the server hosting LVG.

In order to use Cinder's LVM back end, an LVG named ``cinder-volumes`` should
exist on the server and the following parameter must be specified in
``globals.yml`` ::
``globals.yml``:

.. code-block:: yaml

enable_cinder_backend_lvm: "yes"

.. end
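
After deployment, the two containers mentioned above should be running on the
relevant hosts; one way to check, assuming the default container names:

.. code-block:: console

docker ps --filter name=tgtd --filter name=iscsid

.. end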

For Ubuntu and LVM2/iSCSI
~~~~~~~~~~~~~~~~~~~~~~~~~
-------------------------

The ``iscsid`` process uses configfs, which is normally mounted at
``/sys/kernel/config`` to store discovered targets information, on centos/rhel
@ -141,26 +180,33 @@ not the case on debian/ubuntu. Since ``iscsid`` container runs on every nova
compute node, the following steps must be completed on every Ubuntu server
targeted for the nova compute role.

- Add the configfs module to ``/etc/modules``
- Rebuild the initramfs using the ``update-initramfs -u`` command
- Stop the ``open-iscsi`` system service due to its conflict
with the iscsid container.

Ubuntu 16.04 (systemd):
``systemctl stop open-iscsi; systemctl stop iscsid``

- Make sure configfs gets mounted during the server boot up process. There are
multiple ways to accomplish it; one example is to add the mount command to
``/etc/rc.local``:

::
.. code-block:: console

mount -t configfs configfs /sys/kernel/config

.. end
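
A condensed sketch of the steps above, as they might be run on an Ubuntu 16.04
compute node (module, initramfs, and service names as given in the list):

.. code-block:: console

echo configfs >> /etc/modules
update-initramfs -u
systemctl stop open-iscsi
systemctl stop iscsid

.. end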

Cinder back end with external iSCSI storage
===========================================
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In order to use an external storage system (like one from EMC or NetApp)
the following parameter must be specified in ``globals.yml`` ::
the following parameter must be specified in ``globals.yml``:

.. code-block:: yaml

enable_cinder_backend_iscsi: "yes"

.. end

Also ``enable_cinder_backend_lvm`` should be set to "no" in this case.
Also ``enable_cinder_backend_lvm`` should be set to ``no`` in this case.
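
Taken together, the relevant part of ``globals.yml`` for an external iSCSI
back end would look like:

.. code-block:: yaml

enable_cinder_backend_iscsi: "yes"
enable_cinder_backend_lvm: "no"

.. end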

@ -5,7 +5,7 @@ Designate in Kolla
==================

Overview
========
~~~~~~~~

Designate provides DNSaaS services for OpenStack:

- REST API for domain/record management
- Multi-tenant
- Integrated with Keystone for authentication
- Framework in place to integrate with Nova and Neutron
notifications (for auto-generated records)
- Support for PowerDNS and Bind9 out of the box

Configuration on Kolla deployment
---------------------------------
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Enable the Designate service in ``/etc/kolla/globals.yml``:

.. code-block:: console
.. code-block:: yaml

enable_designate: "yes"

.. end

Configure Designate options in ``/etc/kolla/globals.yml``

.. important::

Designate MDNS node requires the ``dns_interface`` to be reachable from
the public network.

.. code-block:: console
.. code-block:: yaml

dns_interface: "eth1"
designate_backend: "bind9"
designate_ns_record: "sample.openstack.org"

.. end

Neutron and Nova Integration
----------------------------
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Create the default Designate zone for Neutron:

.. code-block:: console

$ openstack zone create --email admin@sample.openstack.org sample.openstack.org.
openstack zone create --email admin@sample.openstack.org sample.openstack.org.

.. end
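
The zone ID required below can be looked up from the newly created zone, for
example (assuming ``python-designateclient`` is installed for the ``zone``
commands):

.. code-block:: console

openstack zone list

.. end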

Create the designate-sink custom configuration folder:

.. code-block:: console

$ mkdir -p /etc/kolla/config/designate/
mkdir -p /etc/kolla/config/designate/

.. end

Append the Designate zone ID in ``/etc/kolla/config/designate/designate-sink.conf``:

.. code-block:: ini

[handler:nova_fixed]
zone_id = <ZONE_ID>
[handler:neutron_floatingip]
zone_id = <ZONE_ID>

.. end

Reconfigure Designate:

.. code-block:: console

$ kolla-ansible reconfigure -i <INVENTORY_FILE> --tags designate
kolla-ansible reconfigure -i <INVENTORY_FILE> --tags designate

.. end

Verify operation
----------------
~~~~~~~~~~~~~~~~

List available networks:

.. code-block:: console

$ openstack network list
openstack network list

.. end

Associate a domain with a network:

.. code-block:: console

$ neutron net-update <NETWORK_ID> --dns_domain sample.openstack.org.
neutron net-update <NETWORK_ID> --dns_domain sample.openstack.org.

.. end

Start an instance:

.. code-block:: console

$ openstack server create \
openstack server create \
--image cirros \
--flavor m1.tiny \
--key-name mykey \
--nic net-id=${NETWORK_ID} \
my-vm

.. end

Check DNS records in Designate:

.. code-block:: console

$ openstack recordset list sample.openstack.org.
openstack recordset list sample.openstack.org.

+--------------------------------------+---------------------------------------+------+---------------------------------------------+--------+--------+
| id                                   | name                                  | type | records                                     | status | action |
+--------------------------------------+---------------------------------------+------+---------------------------------------------+--------+--------+
| 5aec6f5b-2121-4a2e-90d7-9e4509f79506 | sample.openstack.org.                 | SOA  | sample.openstack.org.                       | ACTIVE | NONE   |
|                                      |                                       |      | admin.sample.openstack.org. 1485266928 3514 |        |        |
|                                      |                                       |      | 600 86400 3600                              |        |        |
| 578dc94a-df74-4086-a352-a3b2db9233ae | sample.openstack.org.                 | NS   | sample.openstack.org.                       | ACTIVE | NONE   |
| de9ff01e-e9ef-4a0f-88ed-6ec5ecabd315 | 192-168-190-232.sample.openstack.org. | A    | 192.168.190.232                             | ACTIVE | NONE   |
| f67645ee-829c-4154-a988-75341050a8d6 | my-vm.None.sample.openstack.org.      | A    | 192.168.190.232                             | ACTIVE | NONE   |
| e5623d73-4f9f-4b54-9045-b148e0c3342d | my-vm.sample.openstack.org.           | A    | 192.168.190.232                             | ACTIVE | NONE   |
+--------------------------------------+---------------------------------------+------+---------------------------------------------+--------+--------+

.. end

Query the instance's DNS information against the Designate ``dns_interface``
IP address:

.. code-block:: console

$ dig +short -p 5354 @<DNS_INTERFACE_IP> my-vm.sample.openstack.org. A
dig +short -p 5354 @<DNS_INTERFACE_IP> my-vm.sample.openstack.org. A
192.168.190.232

.. end

For more information about how Designate works, see
`Designate, a DNSaaS component for OpenStack