import installation guide pages from openstack-manuals

Change-Id: Id1ee4d52174b4a5e66c7d306e28b9c12a06d00e2
This commit is contained in:
Alexandra Settle 2017-06-26 11:56:25 +01:00
parent 62834d9094
commit 88be92ee71
15 changed files with 2200 additions and 0 deletions

@ -34,6 +34,14 @@ be found on the `OpenStack wiki`_. Cloud administrators, refer to `docs.openstac
.. _`docs.openstack.org`: http://docs.openstack.org
Installing Cinder
=================
.. toctree::
:maxdepth: 2
install/index
Developer Docs
==============

@ -0,0 +1,60 @@
:orphan:
Install and configure the backup service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Optionally, install and configure the backup service. For simplicity,
this configuration uses the Block Storage node and the Object Storage
(swift) driver, and therefore depends on the
`Object Storage service <https://docs.openstack.org/project-install-guide/object-storage/ocata/>`_.
.. note::
You must :ref:`install and configure a storage node <cinder-storage>` prior
to installing and configuring the backup service.
Install and configure components
--------------------------------
.. note::
Perform these steps on the Block Storage node.
#. Install the packages:
.. code-block:: console
# zypper install openstack-cinder-backup
#. Edit the ``/etc/cinder/cinder.conf`` file
and complete the following actions:
#. In the ``[DEFAULT]`` section, configure backup options:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
# ...
backup_driver = cinder.backup.drivers.swift
backup_swift_url = SWIFT_URL
Replace ``SWIFT_URL`` with the URL of the Object Storage service. The
URL can be found by showing the object-store API endpoints:
.. code-block:: console
$ openstack catalog show object-store
Finalize installation
---------------------
Start the Block Storage backup service and configure it to
start when the system boots:
.. code-block:: console
# systemctl enable openstack-cinder-backup.service
# systemctl start openstack-cinder-backup.service
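Optionally, confirm that the service is running; once it has registered itself, a
``cinder-backup`` entry should also appear in the volume service listing on the
controller node:
.. code-block:: console
# systemctl status openstack-cinder-backup.service
$ openstack volume service list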

@ -0,0 +1,61 @@
:orphan:
Install and configure the backup service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Optionally, install and configure the backup service. For simplicity,
this configuration uses the Block Storage node and the Object Storage
(swift) driver, and therefore depends on the
`Object Storage service <https://docs.openstack.org/project-install-guide/object-storage/ocata/>`_.
.. note::
You must :ref:`install and configure a storage node <cinder-storage>` prior
to installing and configuring the backup service.
Install and configure components
--------------------------------
.. note::
Perform these steps on the Block Storage node.
#. Install the packages:
.. code-block:: console
# yum install openstack-cinder
#. Edit the ``/etc/cinder/cinder.conf`` file
and complete the following actions:
#. In the ``[DEFAULT]`` section, configure backup options:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
# ...
backup_driver = cinder.backup.drivers.swift
backup_swift_url = SWIFT_URL
Replace ``SWIFT_URL`` with the URL of the Object Storage service. The
URL can be found by showing the object-store API endpoints:
.. code-block:: console
$ openstack catalog show object-store
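If the ``crudini`` utility is available (it is not installed by these steps), the
same options can be written non-interactively instead of editing the file by hand;
``SWIFT_URL`` remains a placeholder for the URL found above:
.. code-block:: console
# crudini --set /etc/cinder/cinder.conf DEFAULT backup_driver cinder.backup.drivers.swift
# crudini --set /etc/cinder/cinder.conf DEFAULT backup_swift_url SWIFT_URL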
Finalize installation
---------------------
Start the Block Storage backup service and configure it to
start when the system boots:
.. code-block:: console
# systemctl enable openstack-cinder-backup.service
# systemctl start openstack-cinder-backup.service

@ -0,0 +1,56 @@
:orphan:
Install and configure the backup service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Optionally, install and configure the backup service. For simplicity,
this configuration uses the Block Storage node and the Object Storage
(swift) driver, and therefore depends on the
`Object Storage service <https://docs.openstack.org/project-install-guide/object-storage/ocata/>`_.
.. note::
You must :ref:`install and configure a storage node <cinder-storage>` prior
to installing and configuring the backup service.
Install and configure components
--------------------------------
.. note::
Perform these steps on the Block Storage node.
#. Install the packages:
.. code-block:: console
# apt install cinder-backup
2. Edit the ``/etc/cinder/cinder.conf`` file
and complete the following actions:
* In the ``[DEFAULT]`` section, configure backup options:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
# ...
backup_driver = cinder.backup.drivers.swift
backup_swift_url = SWIFT_URL
Replace ``SWIFT_URL`` with the URL of the Object Storage service. The
URL can be found by showing the object-store API endpoints:
.. code-block:: console
$ openstack catalog show object-store
Finalize installation
---------------------
Restart the Block Storage backup service:
.. code-block:: console
# service cinder-backup restart
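Optionally, once a volume exists, exercise the backup path end to end. This sketch
assumes a volume named ``volume1`` has already been created:
.. code-block:: console
$ openstack volume backup create --name backup1 volume1
$ openstack volume backup list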

@ -0,0 +1,344 @@
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Block
Storage service, code-named cinder, on the controller node. This
service requires at least one additional storage node that provides
volumes to instances.
Prerequisites
-------------
Before you install and configure the Block Storage service, you
must create a database, service credentials, and API endpoints.
#. To create the database, complete these steps:
#. Use the database access client to connect to the database
server as the ``root`` user:
.. code-block:: console
$ mysql -u root -p
#. Create the ``cinder`` database:
.. code-block:: console
MariaDB [(none)]> CREATE DATABASE cinder;
#. Grant proper access to the ``cinder`` database:
.. code-block:: console
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
Replace ``CINDER_DBPASS`` with a suitable password.
#. Exit the database access client.
#. Source the ``admin`` credentials to gain access to admin-only
CLI commands:
.. code-block:: console
$ . admin-openrc
#. To create the service credentials, complete these steps:
#. Create a ``cinder`` user:
.. code-block:: console
$ openstack user create --domain default --password-prompt cinder
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 9d7e33de3e1a498390353819bc7d245d |
| name | cinder |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
#. Add the ``admin`` role to the ``cinder`` user:
.. code-block:: console
$ openstack role add --project service --user cinder admin
.. note::
This command provides no output.
#. Create the ``cinderv2`` and ``cinderv3`` service entities:
.. code-block:: console
$ openstack service create --name cinderv2 \
--description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | eb9fd245bdbc414695952e93f29fe3ac |
| name | cinderv2 |
| type | volumev2 |
+-------------+----------------------------------+
.. code-block:: console
$ openstack service create --name cinderv3 \
--description "OpenStack Block Storage" volumev3
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | ab3bbbef780845a1a283490d281e7fda |
| name | cinderv3 |
| type | volumev3 |
+-------------+----------------------------------+
.. note::
The Block Storage services require two service entities.
#. Create the Block Storage service API endpoints:
.. code-block:: console
$ openstack endpoint create --region RegionOne \
volumev2 public http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 513e73819e14460fb904163f41ef3759 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
$ openstack endpoint create --region RegionOne \
volumev2 internal http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 6436a8a23d014cfdb69c586eff146a32 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
$ openstack endpoint create --region RegionOne \
volumev2 admin http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | e652cf84dd334f359ae9b045a2c91d96 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
.. code-block:: console
$ openstack endpoint create --region RegionOne \
volumev3 public http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 03fa2c90153546c295bf30ca86b1344b |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ab3bbbef780845a1a283490d281e7fda |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
$ openstack endpoint create --region RegionOne \
volumev3 internal http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 94f684395d1b41068c70e4ecb11364b2 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ab3bbbef780845a1a283490d281e7fda |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
$ openstack endpoint create --region RegionOne \
volumev3 admin http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 4511c28a0f9840c78bacb25f10f62c98 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ab3bbbef780845a1a283490d281e7fda |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
.. note::
The Block Storage services require endpoints for each service
entity.
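Optionally, list the endpoints that were just created to confirm they registered
as expected, for example filtered on the ``volumev3`` service type:
.. code-block:: console
$ openstack endpoint list --service volumev3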
Install and configure components
--------------------------------
#. Install the packages:
.. code-block:: console
# zypper install openstack-cinder-api openstack-cinder-scheduler
#. Edit the ``/etc/cinder/cinder.conf`` file and complete the
following actions:
#. In the ``[database]`` section, configure database access:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[database]
# ...
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
Replace ``CINDER_DBPASS`` with the password you chose for the
Block Storage database.
#. In the ``[DEFAULT]`` section, configure ``RabbitMQ``
message queue access:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace ``RABBIT_PASS`` with the password you chose for the
``openstack`` account in ``RabbitMQ``.
#. In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
configure Identity service access:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
Replace ``CINDER_PASS`` with the password you chose for
the ``cinder`` user in the Identity service.
.. note::
Comment out or remove any other options in the
``[keystone_authtoken]`` section.
#. In the ``[DEFAULT]`` section, configure the ``my_ip`` option to
use the management interface IP address of the controller node:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
# ...
my_ip = 10.0.0.11
#. In the ``[oslo_concurrency]`` section, configure the lock path:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp
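The RDO and Ubuntu variants of this guide populate the Block Storage database at
this point; unless the distribution packages handle it for you, the same step
presumably applies here as well:
.. code-block:: console
# su -s /bin/sh -c "cinder-manage db sync" cinder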
Configure Compute to use Block Storage
--------------------------------------
#. Edit the ``/etc/nova/nova.conf`` file and add the following
to it:
.. path /etc/nova/nova.conf
.. code-block:: ini
[cinder]
os_region_name = RegionOne
Finalize installation
---------------------
#. Restart the Compute API service:
.. code-block:: console
# systemctl restart openstack-nova-api.service
#. Start the Block Storage services and configure them to start when
the system boots:
.. code-block:: console
# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

@ -0,0 +1,354 @@
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Block
Storage service, code-named cinder, on the controller node. This
service requires at least one additional storage node that provides
volumes to instances.
Prerequisites
-------------
Before you install and configure the Block Storage service, you
must create a database, service credentials, and API endpoints.
#. To create the database, complete these steps:
#. Use the database access client to connect to the database
server as the ``root`` user:
.. code-block:: console
$ mysql -u root -p
#. Create the ``cinder`` database:
.. code-block:: console
MariaDB [(none)]> CREATE DATABASE cinder;
#. Grant proper access to the ``cinder`` database:
.. code-block:: console
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
Replace ``CINDER_DBPASS`` with a suitable password.
#. Exit the database access client.
#. Source the ``admin`` credentials to gain access to admin-only
CLI commands:
.. code-block:: console
$ . admin-openrc
#. To create the service credentials, complete these steps:
#. Create a ``cinder`` user:
.. code-block:: console
$ openstack user create --domain default --password-prompt cinder
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 9d7e33de3e1a498390353819bc7d245d |
| name | cinder |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
#. Add the ``admin`` role to the ``cinder`` user:
.. code-block:: console
$ openstack role add --project service --user cinder admin
.. note::
This command provides no output.
#. Create the ``cinderv2`` and ``cinderv3`` service entities:
.. code-block:: console
$ openstack service create --name cinderv2 \
--description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | eb9fd245bdbc414695952e93f29fe3ac |
| name | cinderv2 |
| type | volumev2 |
+-------------+----------------------------------+
.. code-block:: console
$ openstack service create --name cinderv3 \
--description "OpenStack Block Storage" volumev3
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | ab3bbbef780845a1a283490d281e7fda |
| name | cinderv3 |
| type | volumev3 |
+-------------+----------------------------------+
.. note::
The Block Storage services require two service entities.
#. Create the Block Storage service API endpoints:
.. code-block:: console
$ openstack endpoint create --region RegionOne \
volumev2 public http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 513e73819e14460fb904163f41ef3759 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
$ openstack endpoint create --region RegionOne \
volumev2 internal http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 6436a8a23d014cfdb69c586eff146a32 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
$ openstack endpoint create --region RegionOne \
volumev2 admin http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | e652cf84dd334f359ae9b045a2c91d96 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
.. code-block:: console
$ openstack endpoint create --region RegionOne \
volumev3 public http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 03fa2c90153546c295bf30ca86b1344b |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ab3bbbef780845a1a283490d281e7fda |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
$ openstack endpoint create --region RegionOne \
volumev3 internal http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 94f684395d1b41068c70e4ecb11364b2 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ab3bbbef780845a1a283490d281e7fda |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
$ openstack endpoint create --region RegionOne \
volumev3 admin http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 4511c28a0f9840c78bacb25f10f62c98 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ab3bbbef780845a1a283490d281e7fda |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
.. note::
The Block Storage services require endpoints for each service
entity.
Install and configure components
--------------------------------
#. Install the packages:
.. code-block:: console
# yum install openstack-cinder
#. Edit the ``/etc/cinder/cinder.conf`` file and complete the
following actions:
#. In the ``[database]`` section, configure database access:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[database]
# ...
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
Replace ``CINDER_DBPASS`` with the password you chose for the
Block Storage database.
#. In the ``[DEFAULT]`` section, configure ``RabbitMQ``
message queue access:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace ``RABBIT_PASS`` with the password you chose for the
``openstack`` account in ``RabbitMQ``.
#. In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
configure Identity service access:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
Replace ``CINDER_PASS`` with the password you chose for
the ``cinder`` user in the Identity service.
.. note::
Comment out or remove any other options in the
``[keystone_authtoken]`` section.
#. In the ``[DEFAULT]`` section, configure the ``my_ip`` option to
use the management interface IP address of the controller node:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
# ...
my_ip = 10.0.0.11
#. In the ``[oslo_concurrency]`` section, configure the lock path:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp
#. Populate the Block Storage database:
.. code-block:: console
# su -s /bin/sh -c "cinder-manage db sync" cinder
.. note::
Ignore any deprecation messages in this output.
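Optionally, confirm that the schema was created by asking ``cinder-manage`` for
the current database version; the exact value depends on the release:
.. code-block:: console
# su -s /bin/sh -c "cinder-manage db version" cinder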
Configure Compute to use Block Storage
--------------------------------------
#. Edit the ``/etc/nova/nova.conf`` file and add the following
to it:
.. path /etc/nova/nova.conf
.. code-block:: ini
[cinder]
os_region_name = RegionOne
Finalize installation
---------------------
#. Restart the Compute API service:
.. code-block:: console
# systemctl restart openstack-nova-api.service
#. Start the Block Storage services and configure them to start when
the system boots:
.. code-block:: console
# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
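Optionally, verify the ``cinder`` credentials configured in ``[keystone_authtoken]``
by requesting a token directly; this sketch reuses ``CINDER_PASS`` from the
earlier step:
.. code-block:: console
$ openstack --os-auth-url http://controller:35357/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name service --os-username cinder \
--os-password CINDER_PASS token issue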

@ -0,0 +1,353 @@
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Block
Storage service, code-named cinder, on the controller node. This
service requires at least one additional storage node that provides
volumes to instances.
Prerequisites
-------------
Before you install and configure the Block Storage service, you
must create a database, service credentials, and API endpoints.
#. To create the database, complete these steps:
#. Use the database access client to connect to the database
server as the ``root`` user:
.. code-block:: console
# mysql
#. Create the ``cinder`` database:
.. code-block:: console
MariaDB [(none)]> CREATE DATABASE cinder;
#. Grant proper access to the ``cinder`` database:
.. code-block:: console
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
Replace ``CINDER_DBPASS`` with a suitable password.
#. Exit the database access client.
#. Source the ``admin`` credentials to gain access to admin-only
CLI commands:
.. code-block:: console
$ . admin-openrc
#. To create the service credentials, complete these steps:
#. Create a ``cinder`` user:
.. code-block:: console
$ openstack user create --domain default --password-prompt cinder
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 9d7e33de3e1a498390353819bc7d245d |
| name | cinder |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
#. Add the ``admin`` role to the ``cinder`` user:
.. code-block:: console
$ openstack role add --project service --user cinder admin
.. note::
This command provides no output.
#. Create the ``cinderv2`` and ``cinderv3`` service entities:
.. code-block:: console
$ openstack service create --name cinderv2 \
--description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | eb9fd245bdbc414695952e93f29fe3ac |
| name | cinderv2 |
| type | volumev2 |
+-------------+----------------------------------+
.. code-block:: console
$ openstack service create --name cinderv3 \
--description "OpenStack Block Storage" volumev3
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | ab3bbbef780845a1a283490d281e7fda |
| name | cinderv3 |
| type | volumev3 |
+-------------+----------------------------------+
.. note::
The Block Storage services require two service entities.
#. Create the Block Storage service API endpoints:
.. code-block:: console
$ openstack endpoint create --region RegionOne \
volumev2 public http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 513e73819e14460fb904163f41ef3759 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
$ openstack endpoint create --region RegionOne \
volumev2 internal http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 6436a8a23d014cfdb69c586eff146a32 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
$ openstack endpoint create --region RegionOne \
volumev2 admin http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | e652cf84dd334f359ae9b045a2c91d96 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
.. code-block:: console
$ openstack endpoint create --region RegionOne \
volumev3 public http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 03fa2c90153546c295bf30ca86b1344b |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ab3bbbef780845a1a283490d281e7fda |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
$ openstack endpoint create --region RegionOne \
volumev3 internal http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 94f684395d1b41068c70e4ecb11364b2 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ab3bbbef780845a1a283490d281e7fda |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
$ openstack endpoint create --region RegionOne \
volumev3 admin http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 4511c28a0f9840c78bacb25f10f62c98 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ab3bbbef780845a1a283490d281e7fda |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
.. note::
The Block Storage services require endpoints for each service
entity.
Install and configure components
--------------------------------
#. Install the packages:
.. code-block:: console
# apt install cinder-api cinder-scheduler
#. Edit the ``/etc/cinder/cinder.conf`` file and complete the
following actions:
#. In the ``[database]`` section, configure database access:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[database]
# ...
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
Replace ``CINDER_DBPASS`` with the password you chose for the
Block Storage database.
#. In the ``[DEFAULT]`` section, configure ``RabbitMQ``
message queue access:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace ``RABBIT_PASS`` with the password you chose for the
``openstack`` account in ``RabbitMQ``.
#. In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
configure Identity service access:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
Replace ``CINDER_PASS`` with the password you chose for
the ``cinder`` user in the Identity service.
.. note::
Comment out or remove any other options in the
``[keystone_authtoken]`` section.
#. In the ``[DEFAULT]`` section, configure the ``my_ip`` option to
use the management interface IP address of the controller node:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
# ...
my_ip = 10.0.0.11
#. In the ``[oslo_concurrency]`` section, configure the lock path:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp
#. Populate the Block Storage database:
.. code-block:: console
# su -s /bin/sh -c "cinder-manage db sync" cinder
.. note::
Ignore any deprecation messages in this output.
Configure Compute to use Block Storage
--------------------------------------
#. Edit the ``/etc/nova/nova.conf`` file and add the following
to it:
.. path /etc/nova/nova.conf
.. code-block:: ini
[cinder]
os_region_name = RegionOne
Finalize installation
---------------------
#. Restart the Compute API service:
.. code-block:: console
# service nova-api restart
#. Restart the Block Storage services:
.. code-block:: console
# service cinder-scheduler restart
# service apache2 restart
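Optionally, check that the Block Storage API is answering on port 8776; an
unauthenticated request to the root URL returns the list of available API versions:
.. code-block:: console
$ curl http://controller:8776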

@ -0,0 +1,273 @@
Install and configure a storage node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Prerequisites
-------------
Before you install and configure the Block Storage service on the
storage node, you must prepare the storage device.
.. note::
Perform these steps on the storage node.
#. Install the supporting utility packages.
#. Install the LVM packages:
.. code-block:: console
# zypper install lvm2
#. (Optional) If you intend to use non-raw image types such as QCOW2
and VMDK, install the QEMU package:
.. code-block:: console
# zypper install qemu
.. note::
Some distributions include LVM by default.
#. Create the LVM physical volume ``/dev/sdb``:
.. code-block:: console
# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created
#. Create the LVM volume group ``cinder-volumes``:
.. code-block:: console
# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created
The Block Storage service creates logical volumes in this volume group.
#. Only instances can access Block Storage volumes. However, the
underlying operating system manages the devices associated with
the volumes. By default, the LVM volume scanning tool scans the
``/dev`` directory for block storage devices that
contain volumes. If projects use LVM on their volumes, the scanning
tool detects these volumes and attempts to cache them, which can cause
a variety of problems with both the underlying operating system
and project volumes. You must reconfigure LVM to scan only the devices
that contain the ``cinder-volumes`` volume group. Edit the
``/etc/lvm/lvm.conf`` file and complete the following actions:
* In the ``devices`` section, add a filter that accepts the
``/dev/sdb`` device and rejects all other devices:
.. path /etc/lvm/lvm.conf
.. code-block:: none
devices {
...
filter = [ "a/sdb/", "r/.*/"]
.. end
Each item in the filter array begins with ``a`` for **accept** or
``r`` for **reject** and includes a regular expression for the
device name. The array must end with ``r/.*/`` to reject any
remaining devices. You can use the :command:`vgs -vvvv` command
to test filters.
.. warning::
If your storage nodes use LVM on the operating system disk, you
must also add the associated device to the filter. For example,
if the ``/dev/sda`` device contains the operating system:
.. ignore_path /etc/lvm/lvm.conf
.. code-block:: ini
filter = [ "a/sda/", "a/sdb/", "r/.*/"]
.. end
Similarly, if your compute nodes use LVM on the operating
system disk, you must also modify the filter in the
``/etc/lvm/lvm.conf`` file on those nodes to include only
the operating system disk. For example, if the ``/dev/sda``
device contains the operating system:
.. ignore_path /etc/lvm/lvm.conf
.. code-block:: ini
filter = [ "a/sda/", "r/.*/"]
.. end
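After editing the filter, it is worth confirming that LVM still sees the devices
and the volume group you expect, for example:
.. code-block:: console
# pvs
# vgs cinder-volumes
.. end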
Install and configure components
--------------------------------
#. Install the packages:
.. code-block:: console
# zypper install openstack-cinder-volume tgt
#. Edit the ``/etc/cinder/cinder.conf`` file
and complete the following actions:
* In the ``[database]`` section, configure database access:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[database]
# ...
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
.. end
Replace ``CINDER_DBPASS`` with the password you chose for
the Block Storage database.
* In the ``[DEFAULT]`` section, configure ``RabbitMQ``
message queue access:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
.. end
Replace ``RABBIT_PASS`` with the password you chose for
the ``openstack`` account in ``RabbitMQ``.
* In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
configure Identity service access:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
.. end
Replace ``CINDER_PASS`` with the password you chose for the
``cinder`` user in the Identity service.
.. note::
Comment out or remove any other options in the
``[keystone_authtoken]`` section.
* In the ``[DEFAULT]`` section, configure the ``my_ip`` option:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
# ...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
.. end
Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address
of the management network interface on your storage node,
typically 10.0.0.41 for the first node in the
:ref:`example architecture <overview-example-architectures>`.
* In the ``[lvm]`` section, configure the LVM back end with the
LVM driver, ``cinder-volumes`` volume group, iSCSI protocol,
and appropriate iSCSI service:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[lvm]
# ...
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm
.. end
* In the ``[DEFAULT]`` section, enable the LVM back end:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
# ...
enabled_backends = lvm
.. end
.. note::
Back-end names are arbitrary. As an example, this guide
uses the name of the driver as the name of the back end.
* In the ``[DEFAULT]`` section, configure the location of the
Image service API:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
# ...
glance_api_servers = http://controller:9292
.. end
* In the ``[oslo_concurrency]`` section, configure the lock path:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp
.. end
3. Create the ``/etc/tgt/conf.d/cinder.conf`` file
with the following data:
.. code-block:: shell
include /var/lib/cinder/volumes/*
.. end
Finalize installation
---------------------
#. Start the Block Storage volume service, including its dependencies,
and configure them to start when the system boots:
.. code-block:: console
# systemctl enable openstack-cinder-volume.service tgtd.service
# systemctl start openstack-cinder-volume.service tgtd.service

@ -0,0 +1,288 @@
Install and configure a storage node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Prerequisites
-------------
Before you install and configure the Block Storage service on the
storage node, you must prepare the storage device.
.. note::
Perform these steps on the storage node.
#. Install the supporting utility packages:
* Install the LVM packages:
.. code-block:: console
# yum install lvm2
.. end
* Start the LVM metadata service and configure it to start when the
system boots:
.. code-block:: console
# systemctl enable lvm2-lvmetad.service
# systemctl start lvm2-lvmetad.service
.. end
.. note::
Some distributions include LVM by default.
#. Create the LVM physical volume ``/dev/sdb``:
.. code-block:: console
# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created
.. end
#. Create the LVM volume group ``cinder-volumes``:
.. code-block:: console
# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created
.. end
The Block Storage service creates logical volumes in this volume group.
#. Only instances can access Block Storage volumes. However, the
underlying operating system manages the devices associated with
the volumes. By default, the LVM volume scanning tool scans the
``/dev`` directory for block storage devices that
contain volumes. If projects use LVM on their volumes, the scanning
tool detects these volumes and attempts to cache them, which can cause
a variety of problems with both the underlying operating system
and project volumes. You must reconfigure LVM to scan only the devices
that contain the ``cinder-volumes`` volume group. Edit the
``/etc/lvm/lvm.conf`` file and complete the following actions:
* In the ``devices`` section, add a filter that accepts the
``/dev/sdb`` device and rejects all other devices:
.. path /etc/lvm/lvm.conf
.. code-block:: none
devices {
...
filter = [ "a/sdb/", "r/.*/"]
.. end
Each item in the filter array begins with ``a`` for **accept** or
``r`` for **reject** and includes a regular expression for the
device name. The array must end with ``r/.*/`` to reject any
remaining devices. You can use the :command:`vgs -vvvv` command
to test filters.
.. warning::
If your storage nodes use LVM on the operating system disk, you
must also add the associated device to the filter. For example,
if the ``/dev/sda`` device contains the operating system:
.. ignore_path /etc/lvm/lvm.conf
.. code-block:: ini
filter = [ "a/sda/", "a/sdb/", "r/.*/"]
.. end
Similarly, if your compute nodes use LVM on the operating
system disk, you must also modify the filter in the
``/etc/lvm/lvm.conf`` file on those nodes to include only
the operating system disk. For example, if the ``/dev/sda``
device contains the operating system:
.. ignore_path /etc/lvm/lvm.conf
.. code-block:: ini
filter = [ "a/sda/", "r/.*/"]
.. end
Install and configure components
--------------------------------
#. Install the packages:
.. code-block:: console
# yum install openstack-cinder targetcli python-keystone
.. end
2. Edit the ``/etc/cinder/cinder.conf`` file
and complete the following actions:
* In the ``[database]`` section, configure database access:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[database]
# ...
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
.. end
Replace ``CINDER_DBPASS`` with the password you chose for
the Block Storage database.
* In the ``[DEFAULT]`` section, configure ``RabbitMQ``
message queue access:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
.. end
Replace ``RABBIT_PASS`` with the password you chose for
the ``openstack`` account in ``RabbitMQ``.
* In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
configure Identity service access:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
.. end
Replace ``CINDER_PASS`` with the password you chose for the
``cinder`` user in the Identity service.
.. note::
Comment out or remove any other options in the
``[keystone_authtoken]`` section.
* In the ``[DEFAULT]`` section, configure the ``my_ip`` option:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
# ...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
.. end
Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address
of the management network interface on your storage node,
typically 10.0.0.41 for the first node in the
:ref:`example architecture <overview-example-architectures>`.
* In the ``[lvm]`` section, configure the LVM back end with the
LVM driver, ``cinder-volumes`` volume group, iSCSI protocol,
and appropriate iSCSI service. If the ``[lvm]`` section does not exist,
create it:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
.. end
* In the ``[DEFAULT]`` section, enable the LVM back end:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
# ...
enabled_backends = lvm
.. end
.. note::
Back-end names are arbitrary. As an example, this guide
uses the name of the driver as the name of the back end.
* In the ``[DEFAULT]`` section, configure the location of the
Image service API:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
# ...
glance_api_servers = http://controller:9292
.. end
* In the ``[oslo_concurrency]`` section, configure the lock path:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp
.. end
Finalize installation
---------------------
* Start the Block Storage volume service, including its dependencies,
and configure them to start when the system boots:
.. code-block:: console
# systemctl enable openstack-cinder-volume.service target.service
# systemctl start openstack-cinder-volume.service target.service
.. end
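Optionally, confirm that the required units on this node are active:
.. code-block:: console
# systemctl is-active openstack-cinder-volume.service target.service lvm2-lvmetad.service
.. end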

@ -0,0 +1,275 @@
Install and configure a storage node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Prerequisites
-------------
Before you install and configure the Block Storage service on the
storage node, you must prepare the storage device.
.. note::
Perform these steps on the storage node.
#. Install the supporting utility packages:
.. code-block:: console
# apt install lvm2
.. end
.. note::
Some distributions include LVM by default.
#. Create the LVM physical volume ``/dev/sdb``:
.. code-block:: console
# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created
.. end
#. Create the LVM volume group ``cinder-volumes``:
.. code-block:: console
# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created
.. end
The Block Storage service creates logical volumes in this volume group.
#. Only instances can access Block Storage volumes. However, the
underlying operating system manages the devices associated with
the volumes. By default, the LVM volume scanning tool scans the
``/dev`` directory for block storage devices that
contain volumes. If projects use LVM on their volumes, the scanning
tool detects these volumes and attempts to cache them, which can cause
a variety of problems with both the underlying operating system
and project volumes. You must reconfigure LVM to scan only the devices
that contain the ``cinder-volumes`` volume group. Edit the
``/etc/lvm/lvm.conf`` file and complete the following actions:
* In the ``devices`` section, add a filter that accepts the
``/dev/sdb`` device and rejects all other devices:
.. path /etc/lvm/lvm.conf
.. code-block:: none
devices {
...
filter = [ "a/sdb/", "r/.*/"]
.. end
Each item in the filter array begins with ``a`` for **accept** or
``r`` for **reject** and includes a regular expression for the
device name. The array must end with ``r/.*/`` to reject any
remaining devices. You can use the :command:`vgs -vvvv` command
to test filters.
.. warning::
If your storage nodes use LVM on the operating system disk, you
must also add the associated device to the filter. For example,
if the ``/dev/sda`` device contains the operating system:
.. ignore_path /etc/lvm/lvm.conf
.. code-block:: ini
filter = [ "a/sda/", "a/sdb/", "r/.*/"]
.. end
Similarly, if your compute nodes use LVM on the operating
system disk, you must also modify the filter in the
``/etc/lvm/lvm.conf`` file on those nodes to include only
the operating system disk. For example, if the ``/dev/sda``
device contains the operating system:
.. ignore_path /etc/lvm/lvm.conf
.. code-block:: ini
filter = [ "a/sda/", "r/.*/"]
.. end
Install and configure components
--------------------------------
#. Install the packages:
.. code-block:: console
# apt install cinder-volume
.. end
2. Edit the ``/etc/cinder/cinder.conf`` file
and complete the following actions:
* In the ``[database]`` section, configure database access:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[database]
# ...
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
.. end
Replace ``CINDER_DBPASS`` with the password you chose for
the Block Storage database.
* In the ``[DEFAULT]`` section, configure ``RabbitMQ``
message queue access:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
.. end
Replace ``RABBIT_PASS`` with the password you chose for
the ``openstack`` account in ``RabbitMQ``.
* In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
configure Identity service access:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
.. end
Replace ``CINDER_PASS`` with the password you chose for the
``cinder`` user in the Identity service.
.. note::
Comment out or remove any other options in the
``[keystone_authtoken]`` section.
* In the ``[DEFAULT]`` section, configure the ``my_ip`` option:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
# ...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
.. end
Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address
of the management network interface on your storage node,
typically 10.0.0.41 for the first node in the
:ref:`example architecture <overview-example-architectures>`.
* In the ``[lvm]`` section, configure the LVM back end with the
LVM driver, ``cinder-volumes`` volume group, iSCSI protocol,
and appropriate iSCSI service:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[lvm]
# ...
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm
.. end
* In the ``[DEFAULT]`` section, enable the LVM back end:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
# ...
enabled_backends = lvm
.. end
.. note::
Back-end names are arbitrary. As an example, this guide
uses the name of the driver as the name of the back end.
* In the ``[DEFAULT]`` section, configure the location of the
Image service API:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[DEFAULT]
# ...
glance_api_servers = http://controller:9292
.. end
* In the ``[oslo_concurrency]`` section, configure the lock path:
.. path /etc/cinder/cinder.conf
.. code-block:: ini
[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp
.. end
Finalize installation
---------------------
#. Restart the Block Storage volume service, including its dependencies:
.. code-block:: console
# service tgt restart
# service cinder-volume restart
.. end
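Once volumes have been created from the controller node, each one appears as a
logical volume in the ``cinder-volumes`` group on this node; optionally, check with:
.. code-block:: console
# lvs cinder-volumes
.. end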

@ -0,0 +1,35 @@
.. _cinder-verify:
Verify Cinder operation
~~~~~~~~~~~~~~~~~~~~~~~
Verify operation of the Block Storage service.
.. note::
Perform these commands on the controller node.
#. Source the ``admin`` credentials to gain access to
admin-only CLI commands:
.. code-block:: console
$ . admin-openrc
.. end
#. List service components to verify successful launch of each process:
.. code-block:: console
$ openstack volume service list
+------------------+------------+------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated_at |
+------------------+------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller | nova | enabled | up | 2016-09-30T02:27:41.000000 |
| cinder-volume | block@lvm | nova | enabled | up | 2016-09-30T02:27:46.000000 |
+------------------+------------+------+---------+-------+----------------------------+
.. end
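Optionally, create and list a small test volume as an end-to-end check; the name
``test-volume`` is only an example:
.. code-block:: console
$ openstack volume create --size 1 test-volume
$ openstack volume list
.. end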

@ -0,0 +1,23 @@
===================================================================
Cinder Installation Tutorial for openSUSE and SUSE Linux Enterprise
===================================================================
This section describes how to install and configure storage nodes
for the Block Storage service. For simplicity, this configuration
references one storage node with an empty local block storage device.
The instructions use ``/dev/sdb``, but you can substitute a different
value for your particular node.
The service provisions logical volumes on this device using the
:term:`LVM <Logical Volume Manager (LVM)>` driver and provides them
to instances via :term:`iSCSI <iSCSI Qualified Name (IQN)>` transport.
You can follow these instructions with minor modifications to horizontally
scale your environment with additional storage nodes.
.. toctree::
:maxdepth: 2
cinder-storage-install-obs.rst
cinder-controller-install-obs.rst
cinder-backup-install-obs.rst
cinder-verify.rst

@ -0,0 +1,23 @@
======================================================================
Cinder Installation Tutorial for Red Hat Enterprise Linux and CentOS
======================================================================
This section describes how to install and configure storage nodes
for the Block Storage service. For simplicity, this configuration
references one storage node with an empty local block storage device.
The instructions use ``/dev/sdb``, but you can substitute a different
value for your particular node.
The service provisions logical volumes on this device using the
:term:`LVM <Logical Volume Manager (LVM)>` driver and provides them
to instances via :term:`iSCSI <iSCSI Qualified Name (IQN)>` transport.
You can follow these instructions with minor modifications to horizontally
scale your environment with additional storage nodes.
.. toctree::
:maxdepth: 2
cinder-storage-install-rdo.rst
cinder-controller-install-rdo.rst
cinder-backup-install-rdo.rst
cinder-verify.rst

@ -0,0 +1,23 @@
=======================================
Cinder Installation Tutorial for Ubuntu
=======================================
This section describes how to install and configure storage nodes
for the Block Storage service. For simplicity, this configuration
references one storage node with an empty local block storage device.
The instructions use ``/dev/sdb``, but you can substitute a different
value for your particular node.
The service provisions logical volumes on this device using the
:term:`LVM <Logical Volume Manager (LVM)>` driver and provides them
to instances via :term:`iSCSI <iSCSI Qualified Name (IQN)>` transport.
You can follow these instructions with minor modifications to horizontally
scale your environment with additional storage nodes.
.. toctree::
:maxdepth: 2
cinder-storage-install-ubuntu.rst
cinder-controller-install-ubuntu.rst
cinder-backup-install-ubuntu.rst
cinder-verify.rst

@ -0,0 +1,24 @@
.. _cinder:
============================
Cinder Installation Tutorial
============================
The Block Storage service (cinder) provides block storage devices
to guest instances. The method in which the storage is provisioned and
consumed is determined by the Block Storage driver, or drivers
in the case of a multi-backend configuration. A variety of drivers are
available: NAS/SAN, NFS, iSCSI, Ceph, and more.
The Block Storage API and scheduler services typically run on the controller
nodes. Depending upon the drivers used, the volume service can run
on controller nodes, compute nodes, or standalone storage nodes.
For more information, see the
`Configuration Reference <https://docs.openstack.org/ocata/config-reference/block-storage/volume-drivers.html>`_.
.. toctree::
index-obs
index-rdo
index-ubuntu