[admin-guide] Fix rst mark-ups for block-storage files

1.) Also deleted the commands that explain how to restart
the services on different Linux distros.
2.) Also replaced one TODO link.

Change-Id: I2557ff8bd93b21129b26a03c8c2b0b714949b9cf
venkatamahesh 2015-12-15 02:10:14 +05:30
parent 84fce47ab4
commit e08ae456c0
19 changed files with 858 additions and 705 deletions


@ -19,14 +19,16 @@ To do so, use the Block Storage API service option ``osapi_volume_workers``.
This option allows you to specify the number of API service workers
(or OS processes) to launch for the Block Storage API service.
To configure this option, open the :file:`/etc/cinder/cinder.conf`
To configure this option, open the ``/etc/cinder/cinder.conf``
configuration file and set the ``osapi_volume_workers`` configuration
key to the number of CPU cores/threads on a machine.
On distributions that include ``openstack-config``, you can configure
this by running the following command instead::
this by running the following command instead:
# openstack-config --set /etc/cinder/cinder.conf \
DEFAULT osapi_volume_workers CORES
.. code-block:: console
# openstack-config --set /etc/cinder/cinder.conf \
DEFAULT osapi_volume_workers CORES
Replace ``CORES`` with the number of CPU cores/threads on a machine.
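If ``openstack-config`` is not available, the same setting can be made by
editing the ``[DEFAULT]`` section of ``/etc/cinder/cinder.conf`` directly;
the value of ``8`` below is only an illustration:

.. code-block:: ini

[DEFAULT]
osapi_volume_workers = 8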


@ -11,9 +11,9 @@ group operations can be performed using the Block Storage command line.
.. note::
Only Block Storage V2 API supports consistency groups. You can
specify ``--os-volume-api-version 2`` when using Block Storage
command line for consistency group operations.
Only Block Storage V2 API supports consistency groups. You can
specify :option:`--os-volume-api-version 2` when using Block Storage
command line for consistency group operations.
Before using consistency groups, make sure the Block Storage driver that
you are running has consistency group support by reading the Block
@ -23,33 +23,37 @@ driver does not support consistency groups yet because the consistency
technology is not available at the storage level.
Before using consistency groups, you must change policies for the
consistency group APIs in the :file:`/etc/cinder/policy.json` file.
consistency group APIs in the ``/etc/cinder/policy.json`` file.
By default, the consistency group APIs are disabled.
Enable them before running consistency group operations.
Here are existing policy entries for consistency groups::
Here are existing policy entries for consistency groups:
"consistencygroup:create": "group:nobody",
"consistencygroup:delete": "group:nobody",
"consistencygroup:update": "group:nobody",
"consistencygroup:get": "group:nobody",
"consistencygroup:get_all": "group:nobody",
"consistencygroup:create_cgsnapshot" : "group:nobody",
"consistencygroup:delete_cgsnapshot": "group:nobody",
"consistencygroup:get_cgsnapshot": "group:nobody",
"consistencygroup:get_all_cgsnapshots": "group:nobody",
.. code-block:: json
Remove ``group:nobody`` to enable these APIs::
"consistencygroup:create": "group:nobody",
"consistencygroup:delete": "group:nobody",
"consistencygroup:update": "group:nobody",
"consistencygroup:get": "group:nobody",
"consistencygroup:get_all": "group:nobody",
"consistencygroup:create_cgsnapshot" : "group:nobody",
"consistencygroup:delete_cgsnapshot": "group:nobody",
"consistencygroup:get_cgsnapshot": "group:nobody",
"consistencygroup:get_all_cgsnapshots": "group:nobody",
"consistencygroup:create": "",
"consistencygroup:delete": "",
"consistencygroup:update": "",
"consistencygroup:get": "",
"consistencygroup:get_all": "",
"consistencygroup:create_cgsnapshot" : "",
"consistencygroup:delete_cgsnapshot": "",
"consistencygroup:get_cgsnapshot": "",
"consistencygroup:get_all_cgsnapshots": "",
Remove ``group:nobody`` to enable these APIs:
.. code-block:: json
"consistencygroup:create": "",
"consistencygroup:delete": "",
"consistencygroup:update": "",
"consistencygroup:get": "",
"consistencygroup:get_all": "",
"consistencygroup:create_cgsnapshot" : "",
"consistencygroup:delete_cgsnapshot": "",
"consistencygroup:get_cgsnapshot": "",
"consistencygroup:get_all_cgsnapshots": "",
Restart the Block Storage API service after changing policies.
@ -59,15 +63,15 @@ The following consistency group operations are supported:
.. note::
A consistency group can support more than one volume type. The
scheduler is responsible for finding a back end that can support
all given volume types.
A consistency group can support more than one volume type. The
scheduler is responsible for finding a back end that can support
all given volume types.
A consistency group can only contain volumes hosted by the same
back end.
A consistency group can only contain volumes hosted by the same
back end.
A consistency group is empty upon its creation. Volumes need to
be created and added to it later.
A consistency group is empty upon its creation. Volumes need to
be created and added to it later.
- Show a consistency group.
@ -104,8 +108,8 @@ group:
.. note::
A consistency group has to be deleted as a whole with all the
volumes.
A consistency group has to be deleted as a whole with all the
volumes.
The following operations are not allowed if a volume snapshot is in a
consistency group snapshot:
@ -114,160 +118,178 @@ consistency group snapshot:
.. note::
A consistency group snapshot has to be deleted as a whole with
all the volume snapshots.
A consistency group snapshot has to be deleted as a whole with
all the volume snapshots.
The details of consistency group operations are shown in the following.
**Create a consistency group**::
**Create a consistency group**:
cinder consisgroup-create
[--name name]
[--description description]
[--availability-zone availability-zone]
volume-types
.. code-block:: console
cinder consisgroup-create
[--name name]
[--description description]
[--availability-zone availability-zone]
volume-types
.. note::
The parameter ``volume-types`` is required. It can be a list of
names or UUIDs of volume types separated by commas without spaces in
between. For example, ``volumetype1,volumetype2,volumetype3.``.
The parameter ``volume-types`` is required. It can be a list of
names or UUIDs of volume types separated by commas without spaces in
between. For example, ``volumetype1,volumetype2,volumetype3``.
::
.. code-block:: console
$ cinder consisgroup-create --name bronzeCG2 volume_type_1
$ cinder consisgroup-create --name bronzeCG2 volume_type_1
+-------------------+--------------------------------------+
| Property | Value |
+-------------------+--------------------------------------+
| availability_zone | nova |
| created_at | 2014-12-29T12:59:08.000000 |
| description | None |
| id | 1de80c27-3b2f-47a6-91a7-e867cbe36462 |
| name | bronzeCG2 |
| status | creating |
+-------------------+--------------------------------------+
+-------------------+--------------------------------------+
| Property | Value |
+-------------------+--------------------------------------+
| availability_zone | nova |
| created_at | 2014-12-29T12:59:08.000000 |
| description | None |
| id | 1de80c27-3b2f-47a6-91a7-e867cbe36462 |
| name | bronzeCG2 |
| status | creating |
+-------------------+--------------------------------------+
**Show a consistency group**::
**Show a consistency group**:
$ cinder consisgroup-show 1de80c27-3b2f-47a6-91a7-e867cbe36462
.. code-block:: console
+-------------------+--------------------------------------+
| Property | Value |
+-------------------+--------------------------------------+
| availability_zone | nova |
| created_at | 2014-12-29T12:59:08.000000 |
| description | None |
| id | 2a6b2bda-1f43-42ce-9de8-249fa5cbae9a |
| name | bronzeCG2 |
| status | available |
+-------------------+--------------------------------------+
$ cinder consisgroup-show 1de80c27-3b2f-47a6-91a7-e867cbe36462
**List consistency groups**::
+-------------------+--------------------------------------+
| Property | Value |
+-------------------+--------------------------------------+
| availability_zone | nova |
| created_at | 2014-12-29T12:59:08.000000 |
| description | None |
| id | 2a6b2bda-1f43-42ce-9de8-249fa5cbae9a |
| name | bronzeCG2 |
| status | available |
+-------------------+--------------------------------------+
$ cinder consisgroup-list
**List consistency groups**:
+--------------------------------------+-----------+-----------+
| ID | Status | Name |
+--------------------------------------+-----------+-----------+
| 1de80c27-3b2f-47a6-91a7-e867cbe36462 | available | bronzeCG2 |
| 3a2b3c42-b612-479a-91eb-1ed45b7f2ad5 | error | bronzeCG |
+--------------------------------------+-----------+-----------+
.. code-block:: console
$ cinder consisgroup-list
+--------------------------------------+-----------+-----------+
| ID | Status | Name |
+--------------------------------------+-----------+-----------+
| 1de80c27-3b2f-47a6-91a7-e867cbe36462 | available | bronzeCG2 |
| 3a2b3c42-b612-479a-91eb-1ed45b7f2ad5 | error | bronzeCG |
+--------------------------------------+-----------+-----------+
**Create a volume and add it to a consistency group**:
.. note::
When creating a volume and adding it to a consistency group, a
volume type and a consistency group id must be provided. This is
because a consistency group can support more than one volume type.
When creating a volume and adding it to a consistency group, a
volume type and a consistency group id must be provided. This is
because a consistency group can support more than one volume type.
::
.. code-block:: console
$ cinder create --volume-type volume_type_1 --name cgBronzeVol\
--consisgroup-id 1de80c27-3b2f-47a6-91a7-e867cbe36462 1
$ cinder create --volume-type volume_type_1 --name cgBronzeVol\
--consisgroup-id 1de80c27-3b2f-47a6-91a7-e867cbe36462 1
+---------------------------------------+--------------------------------------+
| Property | Value |
+---------------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | 1de80c27-3b2f-47a6-91a7-e867cbe36462 |
| created_at | 2014-12-29T13:16:47.000000 |
| description | None |
| encrypted | False |
| id | 5e6d1386-4592-489f-a56b-9394a81145fe |
| metadata | {} |
| name | cgBronzeVol |
| os-vol-host-attr:host | server-1@backend-1#pool-1 |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 1349b21da2a046d8aa5379f0ed447bed |
| os-volume-replication:driver_data | None |
| os-volume-replication:extended_status | None |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| user_id | 93bdea12d3e04c4b86f9a9f172359859 |
| volume_type | volume_type_1 |
+---------------------------------------+--------------------------------------+
+---------------------------------------+--------------------------------------+
| Property | Value |
+---------------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | 1de80c27-3b2f-47a6-91a7-e867cbe36462 |
| created_at | 2014-12-29T13:16:47.000000 |
| description | None |
| encrypted | False |
| id | 5e6d1386-4592-489f-a56b-9394a81145fe |
| metadata | {} |
| name | cgBronzeVol |
| os-vol-host-attr:host | server-1@backend-1#pool-1 |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 1349b21da2a046d8aa5379f0ed447bed |
| os-volume-replication:driver_data | None |
| os-volume-replication:extended_status | None |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| user_id | 93bdea12d3e04c4b86f9a9f172359859 |
| volume_type | volume_type_1 |
+---------------------------------------+--------------------------------------+
**Create a snapshot for a consistency group**::
**Create a snapshot for a consistency group**:
$ cinder cgsnapshot-create 1de80c27-3b2f-47a6-91a7-e867cbe36462
.. code-block:: console
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| consistencygroup_id | 1de80c27-3b2f-47a6-91a7-e867cbe36462 |
| created_at | 2014-12-29T13:19:44.000000 |
| description | None |
| id | d4aff465-f50c-40b3-b088-83feb9b349e9 |
| name | None |
| status | creating |
+---------------------+-------------------------------------+
$ cinder cgsnapshot-create 1de80c27-3b2f-47a6-91a7-e867cbe36462
**Show a snapshot of a consistency group**::
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| consistencygroup_id | 1de80c27-3b2f-47a6-91a7-e867cbe36462 |
| created_at | 2014-12-29T13:19:44.000000 |
| description | None |
| id | d4aff465-f50c-40b3-b088-83feb9b349e9 |
| name | None |
| status | creating |
+---------------------+-------------------------------------+
$ cinder cgsnapshot-show d4aff465-f50c-40b3-b088-83feb9b349e9
**Show a snapshot of a consistency group**:
**List consistency group snapshots**::
.. code-block:: console
$ cinder cgsnapshot-list
$ cinder cgsnapshot-show d4aff465-f50c-40b3-b088-83feb9b349e9
+--------------------------------------+--------+----------+
| ID | Status | Name |
+--------------------------------------+--------+----------+
| 6d9dfb7d-079a-471e-b75a-6e9185ba0c38 | available | None |
| aa129f4d-d37c-4b97-9e2d-7efffda29de0 | available | None |
| bb5b5d82-f380-4a32-b469-3ba2e299712c | available | None |
| d4aff465-f50c-40b3-b088-83feb9b349e9 | available | None |
+--------------------------------------+--------+----------+
**List consistency group snapshots**:
**Delete a snapshot of a consistency group**::
.. code-block:: console
$ cinder cgsnapshot-delete d4aff465-f50c-40b3-b088-83feb9b349e9
$ cinder cgsnapshot-list
+--------------------------------------+--------+----------+
| ID | Status | Name |
+--------------------------------------+--------+----------+
| 6d9dfb7d-079a-471e-b75a-6e9185ba0c38 | available | None |
| aa129f4d-d37c-4b97-9e2d-7efffda29de0 | available | None |
| bb5b5d82-f380-4a32-b469-3ba2e299712c | available | None |
| d4aff465-f50c-40b3-b088-83feb9b349e9 | available | None |
+--------------------------------------+--------+----------+
**Delete a snapshot of a consistency group**:
.. code-block:: console
$ cinder cgsnapshot-delete d4aff465-f50c-40b3-b088-83feb9b349e9
**Delete a consistency group**:
.. note::
The force flag is needed when there are volumes in the consistency
group::
The force flag is needed when there are volumes in the consistency
group:
$ cinder consisgroup-delete --force 1de80c27-3b2f-47a6-91a7-e867cbe36462
.. code-block:: console
**Modify a consistency group**::
$ cinder consisgroup-delete --force 1de80c27-3b2f-47a6-91a7-e867cbe36462
cinder consisgroup-update
[--name NAME]
[--description DESCRIPTION]
[--add-volumes UUID1,UUID2,......]
[--remove-volumes UUID3,UUID4,......]
CG
**Modify a consistency group**:
.. code-block:: console
cinder consisgroup-update
[--name NAME]
[--description DESCRIPTION]
[--add-volumes UUID1,UUID2,......]
[--remove-volumes UUID3,UUID4,......]
CG
The parameter ``CG`` is required. It can be a name or UUID of a consistency
group. UUID1,UUID2,...... are UUIDs of one or more volumes to be added
@ -275,36 +297,45 @@ to the consistency group, separated by commas. Default is None.
UUID3,UUID4,...... are UUIDs of one or more volumes to be removed from
the consistency group, separated by commas. Default is None.
::
.. code-block:: console
$ cinder consisgroup-update --name 'new name' --description 'new descripti\
on' --add-volumes 0b3923f5-95a4-4596-a536-914c2c84e2db,1c02528b-3781-4e3\
2-929c-618d81f52cf3 --remove-volumes 8c0f6ae4-efb1-458f-a8fc-9da2afcc5fb\
1,a245423f-bb99-4f94-8c8c-02806f9246d8 1de80c27-3b2f-47a6-91a7-e867cbe36462
$ cinder consisgroup-update --name 'new name' --description 'new descripti\
on' --add-volumes 0b3923f5-95a4-4596-a536-914c2c84e2db,1c02528b-3781-4e3\
2-929c-618d81f52cf3 --remove-volumes 8c0f6ae4-efb1-458f-a8fc-9da2afcc5fb\
1,a245423f-bb99-4f94-8c8c-02806f9246d8 1de80c27-3b2f-47a6-91a7-e867cbe36462
**Create a consistency group from the snapshot of another consistency
group**::
group**:
$ cinder consisgroup-create-from-src
[--cgsnapshot CGSNAPSHOT]
[--name NAME]
[--description DESCRIPTION]
.. code-block:: console
$ cinder consisgroup-create-from-src
[--cgsnapshot CGSNAPSHOT]
[--name NAME]
[--description DESCRIPTION]
The parameter ``CGSNAPSHOT`` is a name or UUID of a snapshot of a
consistency group::
consistency group:
$ cinder consisgroup-create-from-src --cgsnapshot 6d9dfb7d-079a-471e-b75a-\
6e9185ba0c38 --name 'new cg' --description 'new cg from cgsnapshot'
.. code-block:: console
**Create a consistency group from a source consistency group**::
$ cinder consisgroup-create-from-src --cgsnapshot 6d9dfb7d-079a-471e-b75a-\
6e9185ba0c38 --name 'new cg' --description 'new cg from cgsnapshot'
$ cinder consisgroup-create-from-src
[--source-cg SOURCECG]
[--name NAME]
[--description DESCRIPTION]
**Create a consistency group from a source consistency group**:
.. code-block:: console
$ cinder consisgroup-create-from-src
[--source-cg SOURCECG]
[--name NAME]
[--description DESCRIPTION]
The parameter ``SOURCECG`` is a name or UUID of a source
consistency group::
consistency group:
.. code-block:: console
$ cinder consisgroup-create-from-src --source-cg 6d9dfb7d-079a-471e-b75a-\
6e9185ba0c38 --name 'new cg' --description 'new cloned cg'
$ cinder consisgroup-create-from-src --source-cg 6d9dfb7d-079a-471e-b75a-\
6e9185ba0c38 --name 'new cg' --description 'new cloned cg'


@ -1,3 +1,5 @@
.. _filter_weigh_scheduler:
==========================================================
Configure and use driver filter and weighing for scheduler
==========================================================
@ -30,11 +32,11 @@ Enable driver filter and weighing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To enable the driver filter, set the ``scheduler_default_filters`` option in
the :file:`cinder.conf` file to ``DriverFilter`` or add it to the list if
the ``cinder.conf`` file to ``DriverFilter`` or add it to the list if
other filters are already present.
To enable the goodness filter as a weigher, set the
``scheduler_default_weighers`` option in the :file:`cinder.conf` file to
``scheduler_default_weighers`` option in the ``cinder.conf`` file to
``GoodnessWeigher`` or add it to the list if other weighers are already
present.
@ -45,22 +47,24 @@ choose an ideal back end.
.. important::
The support for the ``DriverFilter`` and ``GoodnessWeigher`` is
optional for back ends. If you are using a back end that does not
support the filter and weigher functionality you may not get the
full benefit.
The support for the ``DriverFilter`` and ``GoodnessWeigher`` is
optional for back ends. If you are using a back end that does not
support the filter and weigher functionality you may not get the
full benefit.
Example :file:`cinder.conf` configuration file::
Example ``cinder.conf`` configuration file:
scheduler_default_filters = DriverFilter
scheduler_default_weighers = GoodnessWeigher
.. code-block:: ini
scheduler_default_filters = DriverFilter
scheduler_default_weighers = GoodnessWeigher
.. note::
It is useful to use the other filters and weighers available in
OpenStack in combination with these custom ones. For example, the
``CapacityFilter`` and ``CapacityWeigher`` can be combined with
these.
It is useful to use the other filters and weighers available in
OpenStack in combination with these custom ones. For example, the
``CapacityFilter`` and ``CapacityWeigher`` can be combined with
these.
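For example, a combined ``cinder.conf`` scheduler configuration might look
like the following; the exact set of stock filters and weighers to keep is
a deployment choice:

.. code-block:: ini

[DEFAULT]
scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,DriverFilter
scheduler_default_weighers = CapacityWeigher,GoodnessWeigher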
Defining your own filter and goodness functions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -82,10 +86,10 @@ highest).
.. important::
Default values for the filter and goodness functions will be used
for each back end if you do not define them yourself. If complete
control is desired then a filter and goodness function should be
defined for each of the back ends in the :file:`cinder.conf` file.
Default values for the filter and goodness functions will be used
for each back end if you do not define them yourself. If complete
control is desired then a filter and goodness function should be
defined for each of the back ends in the ``cinder.conf`` file.
Supported operations in filter and goodness functions
@ -112,8 +116,8 @@ and goodness functions created by you:
.. caution::
Syntax errors in filter or goodness strings defined by you will
cause errors to be thrown at volume request time.
Syntax errors in filter or goodness strings defined by you will
cause errors to be thrown at volume request time.
Available properties when creating custom functions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -223,16 +227,20 @@ The property most used from here will most likely be the ``size`` sub-property.
Extra specs for the requested volume type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
View the available properties for volume types by running::
View the available properties for volume types by running:
$ cinder extra-specs-list
.. code-block:: console
$ cinder extra-specs-list
Current QoS specs for the requested volume type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
View the available properties for volume types by running::
View the available properties for volume types by running:
$ cinder qos-list
.. code-block:: console
$ cinder qos-list
In order to access these properties in a custom string use the following
format:
@ -245,22 +253,24 @@ Driver filter and weigher usage examples
Below are examples for using the filter and weigher separately,
together, and using driver-specific properties.
Example :file:`cinder.conf` file configuration for customizing the filter
function::
Example ``cinder.conf`` file configuration for customizing the filter
function:
[default]
scheduler_default_filters = DriverFilter
enabled_backends = lvm-1, lvm-2
.. code-block:: ini
[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = sample_LVM
filter_function = "volume.size < 10"
[default]
scheduler_default_filters = DriverFilter
enabled_backends = lvm-1, lvm-2
[lvm-2]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = sample_LVM
filter_function = "volume.size >= 10"
[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = sample_LVM
filter_function = "volume.size < 10"
[lvm-2]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = sample_LVM
filter_function = "volume.size >= 10"
The above example will filter volumes to different back ends depending
on the size of the requested volume. Default OpenStack Block Storage
@ -268,22 +278,24 @@ scheduler weighing is done. Volumes with a size less than 10 GB are sent
to lvm-1 and volumes with a size greater than or equal to 10 GB are sent
to lvm-2.
Example :file:`cinder.conf` file configuration for customizing the goodness
function::
Example ``cinder.conf`` file configuration for customizing the goodness
function:
[default]
scheduler_default_weighers = GoodnessWeigher
enabled_backends = lvm-1, lvm-2
.. code-block:: ini
[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = sample_LVM
goodness_function = "(volume.size < 5) ? 100 : 50"
[default]
scheduler_default_weighers = GoodnessWeigher
enabled_backends = lvm-1, lvm-2
[lvm-2]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = sample_LVM
goodness_function = "(volume.size >= 5) ? 100 : 25"
[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = sample_LVM
goodness_function = "(volume.size < 5) ? 100 : 50"
[lvm-2]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = sample_LVM
goodness_function = "(volume.size >= 5) ? 100 : 25"
The above example will determine the goodness rating of a back end based
off of the requested volume's size. Default OpenStack Block Storage
@ -293,57 +305,61 @@ volume is of size 10 GB then lvm-1 is rated as 50 and lvm-2 is rated as
100. In this case lvm-2 wins. If a requested volume is of size 3 GB then
lvm-1 is rated 100 and lvm-2 is rated 25. In this case lvm-1 would win.
Example :file:`cinder.conf` file configuration for customizing both the
filter and goodness functions::
Example ``cinder.conf`` file configuration for customizing both the
filter and goodness functions:
[default]
scheduler_default_filters = DriverFilter
scheduler_default_weighers = GoodnessWeigher
enabled_backends = lvm-1, lvm-2
.. code-block:: ini
[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = sample_LVM
filter_function = "stats.total_capacity_gb < 500"
goodness_function = "(volume.size < 25) ? 100 : 50"
[default]
scheduler_default_filters = DriverFilter
scheduler_default_weighers = GoodnessWeigher
enabled_backends = lvm-1, lvm-2
[lvm-2]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = sample_LVM
filter_function = "stats.total_capacity_gb >= 500"
goodness_function = "(volume.size >= 25) ? 100 : 75"
[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = sample_LVM
filter_function = "stats.total_capacity_gb < 500"
goodness_function = "(volume.size < 25) ? 100 : 50"
[lvm-2]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = sample_LVM
filter_function = "stats.total_capacity_gb >= 500"
goodness_function = "(volume.size >= 25) ? 100 : 75"
The above example combines the techniques from the first two examples.
The best back end is now decided based off of the total capacity of the
back end and the requested volume's size.
Example :file:`cinder.conf` file configuration for accessing driver specific
properties::
Example ``cinder.conf`` file configuration for accessing driver specific
properties:
[default]
scheduler_default_filters = DriverFilter
scheduler_default_weighers = GoodnessWeigher
enabled_backends = lvm-1,lvm-2,lvm-3
.. code-block:: ini
[lvm-1]
volume_group = stack-volumes-lvmdriver-1
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvmdriver-1
filter_function = "volume.size < 5"
goodness_function = "(capabilities.total_volumes < 3) ? 100 : 50"
[default]
scheduler_default_filters = DriverFilter
scheduler_default_weighers = GoodnessWeigher
enabled_backends = lvm-1,lvm-2,lvm-3
[lvm-2]
volume_group = stack-volumes-lvmdriver-2
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvmdriver-2
filter_function = "volumes.size < 5"
goodness_function = "(capabilities.total_volumes < 8) ? 100 : 50"
[lvm-1]
volume_group = stack-volumes-lvmdriver-1
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvmdriver-1
filter_function = "volume.size < 5"
goodness_function = "(capabilities.total_volumes < 3) ? 100 : 50"
[lvm-3]
volume_group = stack-volumes-lvmdriver-3
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvmdriver-3
goodness_function = "55"
[lvm-2]
volume_group = stack-volumes-lvmdriver-2
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvmdriver-2
filter_function = "volumes.size < 5"
goodness_function = "(capabilities.total_volumes < 8) ? 100 : 50"
[lvm-3]
volume_group = stack-volumes-lvmdriver-3
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvmdriver-3
goodness_function = "55"
The above is an example of how back-end specific properties can be used
in the filter and goodness functions. In this example the LVM driver's


@ -4,7 +4,7 @@ Use LIO iSCSI support
The default mode for the ``iscsi_helper`` tool is ``tgtadm``.
To use LIO iSCSI, install the ``python-rtslib`` package, and set
``iscsi_helper=lioadm`` in the :file:`cinder.conf` file.
``iscsi_helper=lioadm`` in the ``cinder.conf`` file.
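For reference, this corresponds to a single line in the ``[DEFAULT]``
section of ``cinder.conf``:

.. code-block:: ini

[DEFAULT]
iscsi_helper = lioadm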
Once configured, you can use the :command:`cinder-rtstool` command to
manage the volumes. This command enables you to create, delete, and


@ -3,7 +3,7 @@ Manage volumes
==============
The default OpenStack Block Storage service implementation is an
iSCSI solution that uses Logical Volume Manager (LVM) for Linux.
iSCSI solution that uses :term:`Logical Volume Manager (LVM)` for Linux.
.. note::
@ -23,7 +23,7 @@ to a server instance.
**To create and attach a volume to an instance**
#. Configure the OpenStack Compute and the OpenStack Block Storage
services through the :file:`cinder.conf` file.
services through the ``cinder.conf`` file.
#. Use the :command:`cinder create` command to create a volume. This
command creates an LV into the volume group (VG) ``cinder-volumes``.
#. Use the nova :command:`volume-attach` command to attach the volume
@ -31,10 +31,10 @@ to a server instance.
exposed to the compute node.
* The compute node, which runs the instance, now has an active
iSCSI session and new local storage (usually a :file:`/dev/sdX`
iSCSI session and new local storage (usually a ``/dev/sdX``
disk).
* Libvirt uses that local storage as storage for the instance. The
instance gets a new disk (usually a :file:`/dev/vdX` disk).
instance gets a new disk (usually a ``/dev/vdX`` disk).
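As an illustration of the create and attach steps described above, a minimal
command sequence could look like this; the volume name, size, and instance
name are placeholders:

.. code-block:: console

$ cinder create --name myVolume 1
$ nova volume-attach myInstance VOLUME_ID auto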
For this particular walk through, one cloud controller runs
``nova-api``, ``nova-scheduler``, ``nova-objectstore``,


@ -52,16 +52,18 @@ You can apply this process to volumes of any size.
# lvdisplay
* Create the snapshot; you can do this while the volume is attached
to an instance::
to an instance:
# lvcreate --size 10G --snapshot --name volume-00000001-snapshot \
/dev/cinder-volumes/volume-00000001
.. code-block:: console
# lvcreate --size 10G --snapshot --name volume-00000001-snapshot \
/dev/cinder-volumes/volume-00000001
Use the :option:`--snapshot` configuration option to tell LVM that you want a
snapshot of an already existing volume. The command includes the size
of the space reserved for the snapshot volume, the name of the snapshot,
and the path of an already existing volume. Generally, this path
is :file:`/dev/cinder-volumes/VOLUME_NAME`.
is ``/dev/cinder-volumes/VOLUME_NAME``.
The size does not have to be the same as the volume of the snapshot.
The :option:`--size` parameter defines the space that LVM reserves
@ -69,44 +71,46 @@ You can apply this process to volumes of any size.
as that of the original volume, even if the whole space is not
currently used by the snapshot.
* Run the :command:`lvdisplay` command again to verify the snapshot::
* Run the :command:`lvdisplay` command again to verify the snapshot:
--- Logical volume ---
LV Name /dev/cinder-volumes/volume-00000001
VG Name cinder-volumes
LV UUID gI8hta-p21U-IW2q-hRN1-nTzN-UC2G-dKbdKr
LV Write Access read/write
LV snapshot status source of
/dev/cinder-volumes/volume-00000026-snap [active]
LV Status available
# open 1
LV Size 15,00 GiB
Current LE 3840
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 251:13
.. code-block:: console
--- Logical volume ---
LV Name /dev/cinder-volumes/volume-00000001-snap
VG Name cinder-volumes
LV UUID HlW3Ep-g5I8-KGQb-IRvi-IRYU-lIKe-wE9zYr
LV Write Access read/write
LV snapshot status active destination for /dev/cinder-volumes/volume-00000026
LV Status available
# open 0
LV Size 15,00 GiB
Current LE 3840
COW-table size 10,00 GiB
COW-table LE 2560
Allocated to snapshot 0,00%
Snapshot chunk size 4,00 KiB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 251:14
--- Logical volume ---
LV Name /dev/cinder-volumes/volume-00000001
VG Name cinder-volumes
LV UUID gI8hta-p21U-IW2q-hRN1-nTzN-UC2G-dKbdKr
LV Write Access read/write
LV snapshot status source of
/dev/cinder-volumes/volume-00000026-snap [active]
LV Status available
# open 1
LV Size 15,00 GiB
Current LE 3840
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 251:13
--- Logical volume ---
LV Name /dev/cinder-volumes/volume-00000001-snap
VG Name cinder-volumes
LV UUID HlW3Ep-g5I8-KGQb-IRvi-IRYU-lIKe-wE9zYr
LV Write Access read/write
LV snapshot status active destination for /dev/cinder-volumes/volume-00000026
LV Status available
# open 0
LV Size 15,00 GiB
Current LE 3840
COW-table size 10,00 GiB
COW-table LE 2560
Allocated to snapshot 0,00%
Snapshot chunk size 4,00 KiB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 251:14
#. Partition table discovery
@ -131,9 +135,11 @@ You can apply this process to volumes of any size.
If the tools successfully find and map the partition table,
no errors are returned.
* To check the partition table map, run this command::
* To check the partition table map, run this command:
$ ls /dev/mapper/nova*
.. code-block:: console
$ ls /dev/mapper/nova*
You can see the ``cinder--volumes-volume--00000001--snapshot1``
partition.
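* Before creating the archive, mount the mapped partition, typically
  under ``/mnt``; the device name and mount point here are illustrative:

.. code-block:: console

# mount /dev/mapper/cinder--volumes-volume--00000001--snapshot1 /mnt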
@ -160,12 +166,14 @@ You can apply this process to volumes of any size.
#. Use the :command:`tar` command to create archives
Create a backup of the volume::
Create a backup of the volume:
$ tar --exclude="lost+found" --exclude="some/data/to/exclude" -czf \
volume-00000001.tar.gz -C /mnt/ /backup/destination
.. code-block:: console
This command creates a :file:`tar.gz` file that contains the data,
$ tar --exclude="lost+found" --exclude="some/data/to/exclude" -czf \
volume-00000001.tar.gz -C /mnt/ /backup/destination
This command creates a ``tar.gz`` file that contains the data,
*and data only*. This ensures that you do not waste space by backing
up empty sectors.
@ -178,9 +186,11 @@ You can apply this process to volumes of any size.
different, the file is corrupted.
Run this command to run a checksum for your file and save the result
to a file::
to a file:
$ sha1sum volume-00000001.tar.gz > volume-00000001.checksum
.. code-block:: console
$ sha1sum volume-00000001.tar.gz > volume-00000001.checksum
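The archive can later be verified against the stored checksum with the
``-c`` option of ``sha1sum``:

.. code-block:: console

$ sha1sum -c volume-00000001.checksum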
.. note::
@ -196,17 +206,23 @@ You can apply this process to volumes of any size.
Now that you have an efficient and consistent backup, use this command
to clean up the file system:
* Unmount the volume::
* Unmount the volume.
$ umount /mnt
.. code-block:: console
* Delete the partition table::
$ umount /mnt
$ kpartx -dv /dev/cinder-volumes/volume-00000001-snapshot
* Delete the partition table.
* Remove the snapshot::
.. code-block:: console
$ lvremove -f /dev/cinder-volumes/volume-00000001-snapshot
$ kpartx -dv /dev/cinder-volumes/volume-00000001-snapshot
* Remove the snapshot.
.. code-block:: console
$ lvremove -f /dev/cinder-volumes/volume-00000001-snapshot
Repeat these steps for all your volumes.
@ -221,21 +237,23 @@ You can apply this process to volumes of any size.
Launch this script from the server that runs the Block Storage service.
This example shows a mail report::
This example shows a mail report:
Backup Start Time - 07/10 at 01:00:01
Current retention - 7 days
.. code-block:: console
The backup volume is mounted. Proceed...
Removing old backups... : /BACKUPS/EBS-VOL/volume-00000019/volume-00000019_28_09_2011.tar.gz
/BACKUPS/EBS-VOL/volume-00000019 - 0 h 1 m and 21 seconds. Size - 3,5G
Backup Start Time - 07/10 at 01:00:01
Current retention - 7 days
The backup volume is mounted. Proceed...
Removing old backups... : /BACKUPS/EBS-VOL/volume-0000001a/volume-0000001a_28_09_2011.tar.gz
/BACKUPS/EBS-VOL/volume-0000001a - 0 h 4 m and 15 seconds. Size - 6,9G
---------------------------------------
Total backups size - 267G - Used space : 35%
Total execution time - 1 h 75 m and 35 seconds
The backup volume is mounted. Proceed...
Removing old backups... : /BACKUPS/EBS-VOL/volume-00000019/volume-00000019_28_09_2011.tar.gz
/BACKUPS/EBS-VOL/volume-00000019 - 0 h 1 m and 21 seconds. Size - 3,5G
The backup volume is mounted. Proceed...
Removing old backups... : /BACKUPS/EBS-VOL/volume-0000001a/volume-0000001a_28_09_2011.tar.gz
/BACKUPS/EBS-VOL/volume-0000001a - 0 h 4 m and 15 seconds. Size - 6,9G
---------------------------------------
Total backups size - 267G - Used space : 35%
Total execution time - 1 h 75 m and 35 seconds
The script also enables you to SSH to your instances and run a
:command:`mysqldump` command into them. To make this work, enable


@ -5,21 +5,22 @@
Get capabilities
================
When an administrator configures *volume type* and *extra specs* of storage
When an administrator configures ``volume type`` and ``extra specs`` of storage
on the back end, the administrator has to read the right documentation that
corresponds to the version of the storage back end. Deep knowledge of
storage is also required.
OpenStack Block Storage enables administrators to configure *volume type*
and *extra specs* without specific knowledge of the storage back end.
OpenStack Block Storage enables administrators to configure ``volume type``
and ``extra specs`` without specific knowledge of the storage back end.
.. note::
* *Volume Type:* A group of volume policies.
* *Extra Specs:* The definition of a volume type. This is a group of
policies. For example, provision type, QOS that will be used to
define a volume at creation time.
* *Capabilities:* What the current deployed back end in Cinder is able
to do. These correspond to extra specs.
* ``Volume Type``: A group of volume policies.
* ``Extra Specs``: The definition of a volume type. This is a group of
policies. For example, provision type, QOS that will be used to
define a volume at creation time.
* ``Capabilities``: What the current deployed back end in Cinder is able
to do. These correspond to extra specs.
Usage of cinder client
~~~~~~~~~~~~~~~~~~~~~~
@ -28,115 +29,122 @@ When an administrator wants to define new volume types for their
OpenStack cloud, the administrator would fetch a list of ``capabilities``
for a particular back end using the cinder client.
First, get a list of the services::
First, get a list of the services:
$ cinder service-list
+------------------+-------------------+------+---------+-------+------+
| Binary | Host | Zone | Status | State | ... |
+------------------+-------------------+------+---------+-------+------+
| cinder-scheduler | controller | nova | enabled | up | ... |
| cinder-volume | block1@ABC-driver | nova | enabled | up | ... |
+------------------+-------------------+------+---------+-------+------+
.. code-block:: console
$ cinder service-list
+------------------+-------------------+------+---------+-------+------+
| Binary | Host | Zone | Status | State | ... |
+------------------+-------------------+------+---------+-------+------+
| cinder-scheduler | controller | nova | enabled | up | ... |
| cinder-volume | block1@ABC-driver | nova | enabled | up | ... |
+------------------+-------------------+------+---------+-------+------+
With one of the listed hosts, pass that to ``get-capabilities``, then
the administrator can obtain volume stats and also back end ``capabilities``
as listed below.
::
.. code-block:: console
$ cinder get-capabilities block1@ABC-driver
+---------------------+----------------------------------------------+
| Volume stats | Value |
+---------------------+----------------------------------------------+
| description | None |
| display_name | Capabilities of Cinder Vendor ABC driver |
| driver_version | 2.0.0 |
| namespace | OS::Storage::Capabilities::block1@ABC-driver |
| pool_name | None |
| storage_protocol | iSCSI |
| vendor_name | Vendor ABC |
| visibility | pool |
| volume_backend_name | ABC-driver |
+---------------------+----------------------------------------------+
+----------------------+-----------------------------------------------------+
| Backend properties | Value |
+----------------------+-----------------------------------------------------+
| compression | {u'type':u'boolean', u'title':u'Compression', ...} |
| ABC:compression_type | {u'enum':u'['lossy', 'lossless', 'special']', ...} |
| qos | {u'type':u'boolean', u'title':u'QoS', ...} |
| replication | {u'type':u'boolean', u'title':u'Replication', ...} |
| thin_provisioning | {u'type':u'boolean', u'title':u'Thin Provisioning'} |
| ABC:minIOPS | {u'type':u'integer', u'title':u'Minimum IOPS QoS',} |
| ABC:maxIOPS | {u'type':u'integer', u'title':u'Maximum IOPS QoS',} |
| ABC:burstIOPS | {u'type':u'integer', u'title':u'Burst IOPS QoS',..} |
+----------------------+-----------------------------------------------------+
$ cinder get-capabilities block1@ABC-driver
+---------------------+----------------------------------------------+
| Volume stats | Value |
+---------------------+----------------------------------------------+
| description | None |
| display_name | Capabilities of Cinder Vendor ABC driver |
| driver_version | 2.0.0 |
| namespace | OS::Storage::Capabilities::block1@ABC-driver |
| pool_name | None |
| storage_protocol | iSCSI |
| vendor_name | Vendor ABC |
| visibility | pool |
| volume_backend_name | ABC-driver |
+---------------------+----------------------------------------------+
+----------------------+-----------------------------------------------------+
| Backend properties | Value |
+----------------------+-----------------------------------------------------+
| compression | {u'type':u'boolean', u'title':u'Compression', ...} |
| ABC:compression_type | {u'enum':u'['lossy', 'lossless', 'special']', ...} |
| qos | {u'type':u'boolean', u'title':u'QoS', ...} |
| replication | {u'type':u'boolean', u'title':u'Replication', ...} |
| thin_provisioning | {u'type':u'boolean', u'title':u'Thin Provisioning'} |
| ABC:minIOPS | {u'type':u'integer', u'title':u'Minimum IOPS QoS',} |
| ABC:maxIOPS | {u'type':u'integer', u'title':u'Maximum IOPS QoS',} |
| ABC:burstIOPS | {u'type':u'integer', u'title':u'Burst IOPS QoS',..} |
+----------------------+-----------------------------------------------------+
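The reported back-end properties can then be used as ``extra specs`` when
defining a new volume type; the type name and values below are only an
illustration:

.. code-block:: console

$ cinder type-create ABC-performance
$ cinder type-key ABC-performance set ABC:minIOPS=1000 ABC:maxIOPS=5000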
Usage of REST API
~~~~~~~~~~~~~~~~~
New endpoint to ``get capabilities`` list for specific storage back end
is also available. For more details, refer to the Block Storage API reference.
API request::
API request:
GET /v2/{tenant_id}/capabilities/{hostname}
.. code-block:: console
Example of return value::
GET /v2/{tenant_id}/capabilities/{hostname}
{
"namespace": "OS::Storage::Capabilities::block1@ABC-driver",
"volume_backend_name": "ABC-driver",
"pool_name": "pool",
"driver_version": "2.0.0",
"storage_protocol": "iSCSI",
"display_name": "Capabilities of Cinder Vendor ABC driver",
"description": "None",
"visibility": "public",
"properties": {
"thin_provisioning": {
"title": "Thin Provisioning",
"description": "Sets thin provisioning.",
"type": "boolean"
},
"compression": {
"title": "Compression",
"description": "Enables compression.",
"type": "boolean"
},
"ABC:compression_type": {
"title": "Compression type",
"description": "Specifies compression type.",
"type": "string",
"enum": [
"lossy", "lossless", "special"
]
},
"replication": {
"title": "Replication",
"description": "Enables replication.",
"type": "boolean"
},
"qos": {
"title": "QoS",
"description": "Enables QoS.",
"type": "boolean"
},
"ABC:minIOPS": {
"title": "Minimum IOPS QoS",
"description": "Sets minimum IOPS if QoS is enabled.",
"type": "integer"
},
"ABC:maxIOPS": {
"title": "Maximum IOPS QoS",
"description": "Sets maximum IOPS if QoS is enabled.",
"type": "integer"
},
"ABC:burstIOPS": {
"title": "Burst IOPS QoS",
"description": "Sets burst IOPS if QoS is enabled.",
"type": "integer"
},
Example of return value:
.. code-block:: json
{
"namespace": "OS::Storage::Capabilities::block1@ABC-driver",
"volume_backend_name": "ABC-driver",
"pool_name": "pool",
"driver_version": "2.0.0",
"storage_protocol": "iSCSI",
"display_name": "Capabilities of Cinder Vendor ABC driver",
"description": "None",
"visibility": "public",
"properties": {
"thin_provisioning": {
"title": "Thin Provisioning",
"description": "Sets thin provisioning.",
"type": "boolean"
},
"compression": {
"title": "Compression",
"description": "Enables compression.",
"type": "boolean"
},
"ABC:compression_type": {
"title": "Compression type",
"description": "Specifies compression type.",
"type": "string",
"enum": [
"lossy", "lossless", "special"
]
},
"replication": {
"title": "Replication",
"description": "Enables replication.",
"type": "boolean"
},
"qos": {
"title": "QoS",
"description": "Enables QoS.",
"type": "boolean"
},
"ABC:minIOPS": {
"title": "Minimum IOPS QoS",
"description": "Sets minimum IOPS if QoS is enabled.",
"type": "integer"
},
"ABC:maxIOPS": {
"title": "Maximum IOPS QoS",
"description": "Sets maximum IOPS if QoS is enabled.",
"type": "integer"
},
"ABC:burstIOPS": {
"title": "Burst IOPS QoS",
"description": "Sets burst IOPS if QoS is enabled.",
"type": "integer"
},
}
}
}
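The same call can be made with any HTTP client, for example ``curl``; the
endpoint host, port, and token variable below are assumptions about a
typical deployment:

.. code-block:: console

$ curl -s -H "X-Auth-Token: $OS_TOKEN" \
  http://controller:8776/v2/$PROJECT_ID/capabilities/block1@ABC-driver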
Usage of volume type access extension
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -147,72 +155,86 @@ these volumes. An administrator/operator can then define private volume types
using cinder client.
Volume type access extension adds the ability to manage volume type access.
Volume types are public by default. Private volume types can be created by
setting the 'is_public' Boolean field to 'False' at creation time. Access to a
setting the ``is_public`` Boolean field to ``False`` at creation time. Access to a
private volume type can be controlled by adding or removing a project from it.
Private volume types without projects are only visible by users with the
admin role/context.
Create a public volume type by setting 'is_public' field to 'True'::
Create a public volume type by setting ``is_public`` field to ``True``:
$ cinder type-create --description test1 --is-public True vol_Type1
+--------------------------------------+-----------+-------------+-----------+
| ID | Name | Description | Is_Public |
+--------------------------------------+-----------+-------------+-----------+
| 0a948c84-bad5-4fba-88a2-c062006e4f6b | vol_Type1 | test1 | True |
+--------------------------------------+-----------+-------------+-----------+
.. code-block:: console
Create a private volume type by setting 'is_public' field to 'False'::
$ cinder type-create --description test1 --is-public True vol_Type1
+--------------------------------------+-----------+-------------+-----------+
| ID | Name | Description | Is_Public |
+--------------------------------------+-----------+-------------+-----------+
| 0a948c84-bad5-4fba-88a2-c062006e4f6b | vol_Type1 | test1 | True |
+--------------------------------------+-----------+-------------+-----------+
$ cinder type-create --description test2 --is-public False vol_Type2
+--------------------------------------+-----------+-------------+-----------+
| ID | Name | Description | Is_Public |
+--------------------------------------+-----------+-------------+-----------+
| fd508846-213f-4a07-aaf2-40518fb9a23f | vol_Type2 | test2 | False |
+--------------------------------------+-----------+-------------+-----------+
Create a private volume type by setting ``is_public`` field to ``False``:
Get a list of the volume types::
.. code-block:: console
$ cinder type-list
+--------------------------------------+-------------+-------------+-----------+
| ID | Name | Description | Is_Public |
+--------------------------------------+-------------+-------------+-----------+
| 0a948c84-bad5-4fba-88a2-c062006e4f6b | vol_Type1 | test1 | True |
| 87e5be6f-9491-4ea5-9906-9ac56494bb91 | lvmdriver-1 | - | True |
| fd508846-213f-4a07-aaf2-40518fb9a23f | vol_Type2 | test2 | False |
+--------------------------------------+-------------+-------------+-----------+
$ cinder type-create --description test2 --is-public False vol_Type2
+--------------------------------------+-----------+-------------+-----------+
| ID | Name | Description | Is_Public |
+--------------------------------------+-----------+-------------+-----------+
| fd508846-213f-4a07-aaf2-40518fb9a23f | vol_Type2 | test2 | False |
+--------------------------------------+-----------+-------------+-----------+
Get a list of the projects::
Get a list of the volume types:
$ openstack project list
+----------------------------------+--------------------+
| ID | Name |
+----------------------------------+--------------------+
| 4105ead90a854100ab6b121266707f2b | alt_demo |
| 4a22a545cedd4fcfa9836eb75e558277 | admin |
| 71f9cdb1a3ab4b8e8d07d347a2e146bb | service |
| c4860af62ffe465e99ed1bc08ef6082e | demo |
| e4b648ba5108415cb9e75bff65fa8068 | invisible_to_admin |
+----------------------------------+--------------------+
.. code-block:: console
Add volume type access for the given demo project, using its project-id::
$ cinder type-list
+--------------------------------------+-------------+-------------+-----------+
| ID | Name | Description | Is_Public |
+--------------------------------------+-------------+-------------+-----------+
| 0a948c84-bad5-4fba-88a2-c062006e4f6b | vol_Type1 | test1 | True |
| 87e5be6f-9491-4ea5-9906-9ac56494bb91 | lvmdriver-1 | - | True |
| fd508846-213f-4a07-aaf2-40518fb9a23f | vol_Type2 | test2 | False |
+--------------------------------------+-------------+-------------+-----------+
$ cinder type-access-add --volume-type vol_Type2 --project-id c4860af62ffe465e99ed1bc08ef6082e
Get a list of the projects:
List the access information about the given volume type::
.. code-block:: console
$ cinder type-access-list --volume-type vol_Type2
+--------------------------------------+----------------------------------+
| Volume_type_ID | Project_ID |
+--------------------------------------+----------------------------------+
| fd508846-213f-4a07-aaf2-40518fb9a23f | c4860af62ffe465e99ed1bc08ef6082e |
+--------------------------------------+----------------------------------+
$ openstack project list
+----------------------------------+--------------------+
| ID | Name |
+----------------------------------+--------------------+
| 4105ead90a854100ab6b121266707f2b | alt_demo |
| 4a22a545cedd4fcfa9836eb75e558277 | admin |
| 71f9cdb1a3ab4b8e8d07d347a2e146bb | service |
| c4860af62ffe465e99ed1bc08ef6082e | demo |
| e4b648ba5108415cb9e75bff65fa8068 | invisible_to_admin |
+----------------------------------+--------------------+
Remove volume type access for the given project::
Add volume type access for the given demo project, using its project-id:
$ cinder type-access-remove --volume-type vol_Type2 --project-id
c4860af62ffe465e99ed1bc08ef6082e
$ cinder type-access-list --volume-type vol_Type2
+----------------+------------+
| Volume_type_ID | Project_ID |
+----------------+------------+
+----------------+------------+
.. code-block:: console
$ cinder type-access-add --volume-type vol_Type2 --project-id c4860af62ffe465e99ed1bc08ef6082e
List the access information about the given volume type:
.. code-block:: console
$ cinder type-access-list --volume-type vol_Type2
+--------------------------------------+----------------------------------+
| Volume_type_ID | Project_ID |
+--------------------------------------+----------------------------------+
| fd508846-213f-4a07-aaf2-40518fb9a23f | c4860af62ffe465e99ed1bc08ef6082e |
+--------------------------------------+----------------------------------+
Remove volume type access for the given project:
.. code-block:: console
$ cinder type-access-remove --volume-type vol_Type2 --project-id c4860af62ffe465e99ed1bc08ef6082e
$ cinder type-access-list --volume-type vol_Type2
+----------------+------------+
| Volume_type_ID | Project_ID |
+----------------+------------+
+----------------+------------+


@ -44,10 +44,13 @@ OpenStack Block Storage to use GlusterFS shares:
#. Log in as ``root`` to the GlusterFS server.
#. Set each Gluster volume to use the same UID and GID as the ``cinder`` user::
#. Set each Gluster volume to use the same UID and GID as the ``cinder`` user:
.. code-block:: console
# gluster volume set VOL_NAME storage.owner-uid CINDER_UID
# gluster volume set VOL_NAME storage.owner-gid CINDER_GID
# gluster volume set VOL_NAME storage.owner-uid CINDER_UID
# gluster volume set VOL_NAME storage.owner-gid CINDER_GID
Where:
@ -63,20 +66,25 @@ OpenStack Block Storage to use GlusterFS shares:
most distributions.
#. Configure each Gluster volume to accept ``libgfapi`` connections.
To do this, set each Gluster volume to allow insecure ports::
To do this, set each Gluster volume to allow insecure ports:
# gluster volume set VOL_NAME server.allow-insecure on
.. code-block:: console
# gluster volume set VOL_NAME server.allow-insecure on
#. Enable client connections from unprivileged ports. To do this,
add the following line to :file:`/etc/glusterfs/glusterd.vol`::
add the following line to ``/etc/glusterfs/glusterd.vol``:
option rpc-auth-allow-insecure on
.. code-block:: ini
#. Restart the ``glusterd`` service::
option rpc-auth-allow-insecure on
# service glusterd restart
#. Restart the ``glusterd`` service:
.. code-block:: console
# service glusterd restart
|
**Configure Block Storage to use a GlusterFS back end**
@ -84,14 +92,17 @@ After you configure the GlusterFS service, complete these steps:
#. Log in as ``root`` to the system hosting the Block Storage service.
#. Create a text file named :file:`glusterfs` in :file:`/etc/cinder/`.
#. Create a text file named ``glusterfs`` in ``/etc/cinder/`` directory.
#. Add an entry to :file:`/etc/cinder/glusterfs` for each GlusterFS
#. Add an entry to ``/etc/cinder/glusterfs`` for each GlusterFS
share that OpenStack Block Storage should use for back end storage.
Each entry should be a separate line, and should use the following
format::
format:
.. code-block:: ini
HOST:/VOL_NAME
HOST:/VOL_NAME
Where:
@ -103,32 +114,40 @@ After you configure the GlusterFS service, complete these steps:
|
Optionally, if your environment requires additional mount options for
a share, you can add them to the share's entry::
a share, you can add them to the share's entry:
HOST:/VOL_NAME -o OPTIONS
.. code-block:: ini
HOST:/VOL_NAME -o OPTIONS
Replace OPTIONS with a comma-separated list of mount options.
#. Set :file:`/etc/cinder/glusterfs` to be owned by the root user
and the ``cinder`` group::
#. Set ``/etc/cinder/glusterfs`` to be owned by the root user
and the ``cinder`` group:
# chown root:cinder /etc/cinder/glusterfs
.. code-block:: console
#. Set :file:`/etc/cinder/glusterfs` to be readable by members of
the ``cinder`` group::
# chown root:cinder /etc/cinder/glusterfs
# chmod 0640 /etc/cinder/glusterfs
#. Set ``/etc/cinder/glusterfs`` to be readable by members of
the ``cinder`` group:
#. Configure OpenStack Block Storage to use the :file:`/etc/cinder/glusterfs`
file created earlier. To do so, open the :file:`/etc/cinder/cinder.conf`
.. code-block:: console
# chmod 0640 /etc/cinder/glusterfs
#. Configure OpenStack Block Storage to use the ``/etc/cinder/glusterfs``
file created earlier. To do so, open the ``/etc/cinder/cinder.conf``
configuration file and set the ``glusterfs_shares_config`` configuration
key to :file:`/etc/cinder/glusterfs`.
key to ``/etc/cinder/glusterfs``.
On distributions that include openstack-config, you can configure this
by running the following command instead::
by running the following command instead:
# openstack-config --set /etc/cinder/cinder.conf \
DEFAULT glusterfs_shares_config /etc/cinder/glusterfs
.. code-block:: console
# openstack-config --set /etc/cinder/cinder.conf \
DEFAULT glusterfs_shares_config /etc/cinder/glusterfs
The following distributions include ``openstack-config``:
@ -146,26 +165,20 @@ After you configure the GlusterFS service, complete these steps:
#. Configure OpenStack Block Storage to use the correct volume driver,
namely ``cinder.volume.drivers.glusterfs.GlusterfsDriver``. To do so,
open the :file:`/etc/cinder/cinder.conf` configuration file and set
open the ``/etc/cinder/cinder.conf`` configuration file and set
the ``volume_driver`` configuration key to
``cinder.volume.drivers.glusterfs.GlusterfsDriver``.
On distributions that include ``openstack-config``, you can configure
this by running the following command instead::
this by running the following command instead:
# openstack-config --set /etc/cinder/cinder.conf \
DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
.. code-block:: console
# openstack-config --set /etc/cinder/cinder.conf \
DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
#. You can now restart the service to apply the configuration.
To restart the ``cinder`` volume service on CentOS, Fedora, openSUSE, Red
Hat Enterprise Linux, or SUSE Linux Enterprise, run::
# service openstack-cinder-volume restart
To restart the ``cinder`` volume service on Ubuntu or Debian, run::
# service cinder-volume restart
OpenStack Block Storage is now configured to use a GlusterFS back end.
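As a recap, a deployment following the steps above could end up with files
similar to the following; host names and volume names are illustrative.
The shares file ``/etc/cinder/glusterfs``:

.. code-block:: ini

192.168.0.10:/glustervol1
192.168.0.11:/glustervol2

And the related keys in ``/etc/cinder/cinder.conf``:

.. code-block:: ini

[DEFAULT]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs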
@ -174,9 +187,11 @@ OpenStack Block Storage is now configured to use a GlusterFS back end.
If a client host has SELinux enabled, the ``virt_use_fusefs`` boolean
should also be enabled if the host requires access to GlusterFS volumes
on an instance. To enable this Boolean, run the following command as
the ``root`` user::
the ``root`` user:
# setsebool -P virt_use_fusefs on
.. code-block:: console
# setsebool -P virt_use_fusefs on
This command also makes the Boolean persistent across reboots. Run
this command on all client hosts that require access to GlusterFS


@ -5,7 +5,7 @@ Gracefully remove a GlusterFS volume from usage
===============================================
Configuring the ``cinder`` volume service to use GlusterFS involves creating a
shares file (for example, :file:`/etc/cinder/glusterfs`). This shares file
shares file (for example, ``/etc/cinder/glusterfs``). This shares file
lists each GlusterFS volume (with its corresponding storage server) that
the ``cinder`` volume service can use for back end storage.
@ -13,15 +13,6 @@ To remove a GlusterFS volume from usage as a back end, delete the volume's
corresponding entry from the shares file. After doing so, restart the Block
Storage services.
To restart the Block Storage services on CentOS, Fedora, openSUSE,
Red Hat Enterprise Linux, or SUSE Linux Enterprise, run::
# for i in api scheduler volume; do service openstack-cinder-$i restart; done
To restart the Block Storage services on Ubuntu or Debian, run::
# for i in api scheduler volume; do service cinder-${i} restart; done
Restarting the Block Storage services will prevent the ``cinder`` volume
service from exporting the deleted GlusterFS volume. This will prevent any
instances from mounting the volume from that point onwards.


@ -28,15 +28,19 @@ protects normal users from having to see the cached image-volumes, but does
not make them globally hidden.
To enable the Block Storage services to have access to an Internal Tenant, set
the following options in the :file:`cinder.conf` file::
the following options in the ``cinder.conf`` file:
cinder_internal_tenant_project_id = PROJECT_ID
cinder_internal_tenant_user_id = USER_ID
.. code-block:: ini
Example :file:`cinder.conf` configuration file::
cinder_internal_tenant_project_id = PROJECT_ID
cinder_internal_tenant_user_id = USER_ID
cinder_internal_tenant_project_id = b7455b8974bb4064ad247c8f375eae6c
cinder_internal_tenant_user_id = f46924c112a14c80ab0a24a613d95eef
Example ``cinder.conf`` configuration file:
.. code-block:: ini
cinder_internal_tenant_project_id = b7455b8974bb4064ad247c8f375eae6c
cinder_internal_tenant_user_id = f46924c112a14c80ab0a24a613d95eef
.. note::
@ -48,26 +52,32 @@ Configure the Image-Volume cache
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To enable the Image-Volume cache, set the following configuration option in
:file:`cinder.conf`::
``cinder.conf``:
image_volume_cache_enabled = True
.. code-block:: ini
image_volume_cache_enabled = True
This can be scoped per back end definition or in the default options.
There are optional configuration settings that can limit the size of the cache.
These can also be scoped per back end or in the default options in
:file:`cinder.conf`::
``cinder.conf``:
image_volume_cache_max_size_gb = SIZE_GB
image_volume_cache_max_count = MAX_COUNT
.. code-block:: ini
image_volume_cache_max_size_gb = SIZE_GB
image_volume_cache_max_count = MAX_COUNT
By default they will be set to 0, which means unlimited.
For example, a configuration which would limit the max size to 200 GB and 50
cache entries will be configured as::
cache entries will be configured as:
image_volume_cache_max_size_gb = 200
image_volume_cache_max_count = 50
.. code-block:: ini
image_volume_cache_max_size_gb = 200
image_volume_cache_max_count = 50
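These cache options can also be placed inside an individual back-end
section instead of the default options. A minimal sketch, assuming a
back end named ``lvmdriver-1``:

.. code-block:: ini

   [lvmdriver-1]
   image_volume_cache_enabled = True
   image_volume_cache_max_size_gb = 200
   image_volume_cache_max_count = 50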
Notifications
~~~~~~~~~~~~~


@ -23,7 +23,7 @@ Enable multiple-storage back ends
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To enable multiple-storage back ends, you must set the
`enabled_backends` flag in the :file:`cinder.conf` file.
``enabled_backends`` flag in the ``cinder.conf`` file.
This flag defines the names (separated by a comma) of the configuration
groups for the different back ends: one name is associated to one
configuration group for a back end (such as, ``[lvmdriver-1]``).
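As a sketch, a ``cinder.conf`` file with two LVM back ends might contain
entries like the following (the group names, volume groups, and back-end
names are illustrative):

.. code-block:: ini

   [DEFAULT]
   enabled_backends = lvmdriver-1,lvmdriver-2

   [lvmdriver-1]
   volume_group = cinder-volumes-1
   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
   volume_backend_name = LVM_iSCSI

   [lvmdriver-2]
   volume_group = cinder-volumes-2
   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
   volume_backend_name = LVM_iSCSI_b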
@ -34,14 +34,16 @@ configuration group for a back end (such as, ``[lvmdriver-1]``).
.. note::
After setting the `enabled_backends` flag on an existing cinder
After setting the ``enabled_backends`` flag on an existing cinder
service, and restarting the Block Storage services, the original ``host``
service is replaced with a new host service. The new service appears
with a name like ``host@backend``. Use::
with a name like ``host@backend``. Use:
$ cinder-manage volume update_host --currenthost CURRENTHOST --newhost CURRENTHOST@BACKEND
.. code-block:: console
to convert current block devices to the new hostname.
$ cinder-manage volume update_host --currenthost CURRENTHOST --newhost CURRENTHOST@BACKEND
to convert current block devices to the new host name.
The options for a configuration group must be defined in the group
(or default options are used). All the standard Block Storage
@ -110,43 +112,50 @@ multiple-storage back ends. The filter scheduler:
The scheduler uses filters and weights to pick the best back end to
handle the request. The scheduler uses volume types to explicitly create
volumes on specific back ends.
volumes on specific back ends. For more information about filtering and
weighing, see :ref:`filter_weigh_scheduler`.
.. TODO: when filter/weighing scheduler documentation will be up, a ref
should be added here
Volume type
~~~~~~~~~~~
Before using it, a volume type has to be declared to Block Storage.
This can be done by the following command::
This can be done by the following command:
$ cinder --os-username admin --os-tenant-name admin type-create lvm
.. code-block:: console
$ cinder --os-username admin --os-tenant-name admin type-create lvm
Then, an extra-specification has to be created to link the volume
type to a back end name. Run this command::
type to a back end name. Run this command:
$ cinder --os-username admin --os-tenant-name admin type-key lvm set \
volume_backend_name=LVM_iSCSI
.. code-block:: console
$ cinder --os-username admin --os-tenant-name admin type-key lvm set \
volume_backend_name=LVM_iSCSI
This example creates an ``lvm`` volume type with
``volume_backend_name=LVM_iSCSI`` as extra-specifications.
Create another volume type::
Create another volume type:
$ cinder --os-username admin --os-tenant-name admin type-create lvm_gold
.. code-block:: console
$ cinder --os-username admin --os-tenant-name admin type-key lvm_gold set \
volume_backend_name=LVM_iSCSI_b
$ cinder --os-username admin --os-tenant-name admin type-create lvm_gold
$ cinder --os-username admin --os-tenant-name admin type-key lvm_gold set \
volume_backend_name=LVM_iSCSI_b
This second volume type is named ``lvm_gold`` and has ``LVM_iSCSI_b`` as
its back-end name.
.. note::
To list the extra-specifications, use this command::
To list the extra-specifications, use this command:
$ cinder --os-username admin --os-tenant-name admin extra-specs-list
.. code-block:: console
$ cinder --os-username admin --os-tenant-name admin extra-specs-list
.. note::
@ -162,15 +171,15 @@ When you create a volume, you must specify the volume type.
The extra-specifications of the volume type are used to determine which
back end has to be used.
::
.. code-block:: console
$ cinder create --volume_type lvm --display_name test_multi_backend 1
$ cinder create --volume_type lvm --display_name test_multi_backend 1
Considering the :file:`cinder.conf` described previously, the scheduler
Considering the ``cinder.conf`` described previously, the scheduler
creates this volume on ``lvmdriver-1`` or ``lvmdriver-2``.
::
.. code-block:: console
$ cinder create --volume_type lvm_gold --display_name test_multi_backend 1
$ cinder create --volume_type lvm_gold --display_name test_multi_backend 1
This second volume is created on ``lvmdriver-3``.
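To confirm where each volume was placed, an administrative user can check
the ``os-vol-host-attr:host`` attribute in the volume details, for example:

.. code-block:: console

   $ cinder --os-username admin --os-tenant-name admin show test_multi_backend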


@ -29,14 +29,17 @@ that hosts the ``cinder`` volume service.
#. Log in as ``root`` to the system hosting the ``cinder`` volume
service.
#. Create a text file named :file:`nfsshares` in :file:`/etc/cinder/`.
#. Create a text file named ``nfsshares`` in the ``/etc/cinder/`` directory.
#. Add an entry to :file:`/etc/cinder/nfsshares` for each NFS share
#. Add an entry to ``/etc/cinder/nfsshares`` for each NFS share
that the ``cinder`` volume service should use for back end storage.
Each entry should be a separate line, and should use the following
format:
``HOST:SHARE``
.. code-block:: ini
HOST:SHARE
Where:
@ -46,27 +49,33 @@ that hosts the ``cinder`` volume service.
|
#. Set :file:`/etc/cinder/nfsshares` to be owned by the ``root`` user and
the ``cinder`` group::
#. Set ``/etc/cinder/nfsshares`` to be owned by the ``root`` user and
the ``cinder`` group:
# chown root:cinder /etc/cinder/nfsshares
.. code-block:: console
#. Set :file:`/etc/cinder/nfsshares` to be readable by members of the
cinder group::
# chown root:cinder /etc/cinder/nfsshares
# chmod 0640 /etc/cinder/nfsshares
#. Set ``/etc/cinder/nfsshares`` to be readable by members of the
cinder group:
#. Configure the cinder volume service to use the
:file:`/etc/cinder/nfsshares` file created earlier. To do so, open
the :file:`/etc/cinder/cinder.conf` configuration file and set
.. code-block:: console
# chmod 0640 /etc/cinder/nfsshares
#. Configure the ``cinder`` volume service to use the
``/etc/cinder/nfsshares`` file created earlier. To do so, open
the ``/etc/cinder/cinder.conf`` configuration file and set
the ``nfs_shares_config`` configuration key
to :file:`/etc/cinder/nfsshares`.
to ``/etc/cinder/nfsshares``.
On distributions that include ``openstack-config``, you can configure
this by running the following command instead::
this by running the following command instead:
# openstack-config --set /etc/cinder/cinder.conf \
DEFAULT nfs_shares_config /etc/cinder/nfsshares
.. code-block:: console
# openstack-config --set /etc/cinder/cinder.conf \
DEFAULT nfs_shares_config /etc/cinder/nfsshares
The following distributions include ``openstack-config``:
@ -80,48 +89,41 @@ that hosts the ``cinder`` volume service.
* SUSE Linux Enterprise
|
#. Optionally, provide any additional NFS mount options required in
your environment in the ``nfs_mount_options`` configuration key
of :file:`/etc/cinder/cinder.conf`. If your NFS shares do not
of ``/etc/cinder/cinder.conf``. If your NFS shares do not
require any additional mount options (or if you are unsure),
skip this step.
On distributions that include ``openstack-config``, you can
configure this by running the following command instead::
configure this by running the following command instead:
# openstack-config --set /etc/cinder/cinder.conf \
DEFAULT nfs_mount_options OPTIONS
.. code-block:: console
# openstack-config --set /etc/cinder/cinder.conf \
DEFAULT nfs_mount_options OPTIONS
Replace ``OPTIONS`` with the mount options to be used when accessing
NFS shares. See the manual page for NFS for more information on
available mount options (:command:`man nfs`).
#. Configure the ``cinder`` volume service to use the correct volume
driver, namely cinder.volume.drivers.nfs.NfsDriver. To do so,
open the :file:`/etc/cinder/cinder.conf` configuration file and
driver, namely ``cinder.volume.drivers.nfs.NfsDriver``. To do so,
open the ``/etc/cinder/cinder.conf`` configuration file and
set the ``volume_driver`` configuration key
to ``cinder.volume.drivers.nfs.NfsDriver``.
On distributions that include ``openstack-config``, you can configure
this by running the following command instead::
this by running the following command instead:
# openstack-config --set /etc/cinder/cinder.conf \
DEFAULT volume_driver cinder.volume.drivers.nfs.NfsDriver
.. code-block:: console
# openstack-config --set /etc/cinder/cinder.conf \
DEFAULT volume_driver cinder.volume.drivers.nfs.NfsDriver
#. You can now restart the service to apply the configuration.
To restart the ``cinder`` volume service on CentOS, Fedora,
openSUSE, Red Hat Enterprise Linux, or SUSE Linux Enterprise,
run::
# service openstack-cinder-volume restart
To restart the ``cinder`` volume service on Ubuntu or Debian, run::
# service cinder-volume restart
.. note::
The ``nfs_sparsed_volumes`` configuration key determines whether
@ -134,22 +136,26 @@ that hosts the ``cinder`` volume service.
to increased delays in volume creation.
However, should you choose to set ``nfs_sparsed_volumes`` to
false, you can do so directly in :file:`/etc/cinder/cinder.conf`.
``false``, you can do so directly in ``/etc/cinder/cinder.conf``.
On distributions that include ``openstack-config``, you can
configure this by running the following command instead::
configure this by running the following command instead:
# openstack-config --set /etc/cinder/cinder.conf \
DEFAULT nfs_sparsed_volumes false
.. code-block:: console
# openstack-config --set /etc/cinder/cinder.conf \
DEFAULT nfs_sparsed_volumes false
.. warning::
If a client host has SELinux enabled, the ``virt_use_nfs``
boolean should also be enabled if the host requires access to
NFS volumes on an instance. To enable this boolean, run the
following command as the ``root`` user::
following command as the ``root`` user:
# setsebool -P virt_use_nfs on
.. code-block:: console
# setsebool -P virt_use_nfs on
This command also makes the boolean persistent across reboots.
Run this command on all client hosts that require access to NFS


@ -14,7 +14,7 @@ Configure oversubscription settings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To support oversubscription in thin provisioning, a flag
``max_over_subscription_ratio`` is introduced into :file:`cinder.conf`.
``max_over_subscription_ratio`` is introduced into ``cinder.conf``.
This is a float representation of the oversubscription ratio when thin
provisioning is involved. The default ratio is 20.0, meaning provisioned
capacity can be 20 times the total physical capacity. A ratio of 10.5
@ -25,23 +25,23 @@ instead.
.. note::
``max_over_subscription_ratio`` can be configured for each back end when
multiple-storage back ends are enabled. It is provided as a reference
implementation and is used by the LVM driver. However, it is not a
requirement for a driver to use this option from :file:`cinder.conf`.
``max_over_subscription_ratio`` can be configured for each back end when
multiple-storage back ends are enabled. It is provided as a reference
implementation and is used by the LVM driver. However, it is not a
requirement for a driver to use this option from ``cinder.conf``.
``max_over_subscription_ratio`` is for configuring a back end. For a
driver that supports multiple pools per back end, it can report this
ratio for each pool. The LVM driver does not support multiple pools.
``max_over_subscription_ratio`` is for configuring a back end. For a
driver that supports multiple pools per back end, it can report this
ratio for each pool. The LVM driver does not support multiple pools.
The existing ``reserved_percentage`` flag is used to prevent over provisioning.
This flag represents the percentage of the back-end capacity that is reserved.
.. note::
There is a change on how ``reserved_percentage`` is used. It was measured
against the free capacity in the past. Now it is measured against the total
capacity.
There is a change in how ``reserved_percentage`` is used. It was measured
against the free capacity in the past. Now it is measured against the total
capacity.
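As a sketch, both options might be set per back end as follows (the
section name and values are illustrative):

.. code-block:: ini

   [lvmdriver-1]
   lvm_type = thin
   max_over_subscription_ratio = 10.0
   reserved_percentage = 15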
Capabilities
~~~~~~~~~~~~
@ -58,14 +58,14 @@ Drivers can report the following capabilities for a back end or a pool:
Where ``PROVISIONED_CAPACITY`` is the apparent allocated space indicating
how much capacity has been provisioned and ``MAX_RATIO`` is the maximum
oversubscription ratio. For the LVM driver, it is
``max_over_subscription_ratio`` in :file:`cinder.conf`.
``max_over_subscription_ratio`` in ``cinder.conf``.
Two capabilities are added here to allow a back end or pool to claim support
for thin provisioning, or thick provisioning, or both.
The LVM driver reports ``thin_provisioning_support=True`` and
``thick_provisioning_support=False`` if the ``lvm_type`` flag in
:file:`cinder.conf` is ``thin``. Otherwise it reports
``cinder.conf`` is ``thin``. Otherwise it reports
``thin_provisioning_support=False`` and ``thick_provisioning_support=True``.
Volume type extra specs
@ -81,8 +81,8 @@ have the following extra specs defined:
.. note::
``capabilities`` scope key before ``thin_provisioning_support`` and
``thick_provisioning_support`` is not required. So the following works too:
The ``capabilities`` scope key before ``thin_provisioning_support`` and
``thick_provisioning_support`` is not required. So the following works too:
.. code-block:: ini
@ -105,7 +105,7 @@ data loss during disaster recovery.
To enable replication when creating volume types, configure the cinder
volume with ``capabilities:replication="<is> True"``.
Each volume created with the replication capability set to `True`
Each volume created with the replication capability set to ``True``
generates a copy of the volume on a storage back end.
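For example, a volume type carrying this capability could be created from
the command line as follows (the type name is illustrative):

.. code-block:: console

   $ cinder type-create replicated-lvm
   $ cinder type-key replicated-lvm set capabilities:replication="<is> True"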
One use case for replication involves an OpenStack cloud environment
@ -118,7 +118,7 @@ Both data centers include storage back ends.
Depending on the storage requirements, there can be one or two cinder
hosts. The cloud administrator accesses the
:file:`/etc/cinder/cinder.conf` configuration file and sets
``/etc/cinder/cinder.conf`` configuration file and sets
``capabilities:replication="<is> True"``.
If one data center experiences a service failure, cloud administrators


@ -15,14 +15,14 @@ Configure volume copy bandwidth limit
To configure the volume copy bandwidth limit, set the
``volume_copy_bps_limit`` option in the configuration groups for each
back end in the :file:`cinder.conf` file. This option takes the integer of
back end in the ``cinder.conf`` file. This option takes the integer of
maximum bandwidth allowed for volume data copy in byte per second. If
this option is set to ``0``, the rate-limit is disabled.
While multiple volume data copy operations are running in the same back
end, the specified bandwidth is divided to each copy.
Example :file:`cinder.conf` configuration file to limit volume copy bandwidth
Example ``cinder.conf`` configuration file to limit volume copy bandwidth
of ``lvmdriver-1`` up to 100 MiB/s:
.. code-block:: ini


@ -19,31 +19,41 @@ Configure the Volume-backed image
Volume-backed image feature requires locations information from the cinder
store of the Image service. To enable the Image service to use the cinder
store, add ``cinder`` to the ``stores`` option in the ``glance_store`` section
of the :file:`glance-api.conf` file::
of the ``glance-api.conf`` file:
stores = file, http, swift, cinder
.. code-block:: ini
stores = file, http, swift, cinder
To expose locations information, set the following options in the ``DEFAULT``
section of the :file:`glance-api.conf` file::
section of the ``glance-api.conf`` file:
show_multiple_locations = True
.. code-block:: ini
show_multiple_locations = True
To enable the Block Storage services to create a new volume by cloning Image-
Volume, set the following options in the ``DEFAULT`` section of the
:file:`cinder.conf` file. For example::
``cinder.conf`` file. For example:
glance_api_version = 2
allowed_direct_url_schemes = cinder
.. code-block:: ini
glance_api_version = 2
allowed_direct_url_schemes = cinder
To enable the :command:`cinder upload-to-image` command to create an image
that refers an Image-Volume, set the following options in each back-end
section of the :file:`cinder.conf` file::
that refers to an ``Image-Volume``, set the following options in each back-end
section of the ``cinder.conf`` file:
image_upload_use_cinder_backend = True
.. code-block:: ini
image_upload_use_cinder_backend = True
By default, the :command:`upload-to-image` command creates the Image-Volume in
the current tenant. To store the Image-Volume into the internal tenant, set the
following options in each back-end section of the :file:`cinder.conf` file::
following options in each back-end section of the ``cinder.conf`` file:
.. code-block:: ini
image_upload_use_internal_tenant = True
@ -52,7 +62,9 @@ Creating a Volume-backed image
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To register an existing volume as a new Volume-backed image, use the following
commands::
commands:
.. code-block:: console
$ glance image-create --disk-format raw --container-format bare --name <name>
@ -62,11 +74,14 @@ If the ``image_upload_use_cinder_backend`` option is enabled, the following
command creates a new Image-Volume by cloning the specified volume and then
registers its location to a new image. The disk format and the container format
must be raw and bare (default). Otherwise, the image is uploaded to the default
store of the Image service.::
store of the Image service.
$ cinder upload-to-image <volume> <image-name>
.. code-block:: console
$ cinder upload-to-image <volume> <image-name>
.. note::
Currently, the cinder store of the Image service does not support uploading
and downloading of image data. Because of this limitation, Volume-backed
images can only be used to create a new volume.


@ -4,16 +4,18 @@
Back up and restore volumes
===========================
The **cinder** command-line interface provides the tools for creating a
The ``cinder`` command-line interface provides the tools for creating a
volume backup. You can restore a volume from a backup as long as the
backup's associated database information (or backup metadata) is intact
in the Block Storage database.
Run this command to create a backup of a volume::
Run this command to create a backup of a volume:
$ cinder backup-create [--incremental] [--force] VOLUME
.. code-block:: console
Where *VOLUME* is the name or ID of the volume, ``incremental`` is
$ cinder backup-create [--incremental] [--force] VOLUME
Where ``VOLUME`` is the name or ID of the volume, ``incremental`` is
a flag that indicates whether an incremental backup should be performed,
and ``force`` is a flag that allows or disallows backup of a volume
when the volume is attached to an instance.
@ -30,11 +32,12 @@ flag is False by default.
.. note::
The ``incremental`` and ``force`` flags are only available for block
storage API v2. You have to specify [--os-volume-api-version 2] in the
**cinder** command-line interface to use this parameter.
The ``incremental`` and ``force`` flags are only available for block
storage API v2. You have to specify ``[--os-volume-api-version 2]`` in the
``cinder`` command-line interface to use this parameter.
.. note::
The ``force`` flag is new in OpenStack Liberty.
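For example, an incremental backup of an attached volume might be
requested as follows (the volume name is illustrative):

.. code-block:: console

   $ cinder --os-volume-api-version 2 backup-create --incremental --force my-volume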
The incremental backup is based on a parent backup which is an existing
@ -44,16 +47,16 @@ or an incremental backup depending on the timestamp.
.. note::
The first backup of a volume has to be a full backup. Attempting to do
an incremental backup without any existing backups will fail.
There is an ``is_incremental`` flag that indicates whether a backup is
incremental when showing details on the backup.
Another flag, ``has_dependent_backups``, returned when showing backup
details, will indicate whether the backup has dependent backups.
If it is true, attempting to delete this backup will fail.
The first backup of a volume has to be a full backup. Attempting to do
an incremental backup without any existing backups will fail.
There is an ``is_incremental`` flag that indicates whether a backup is
incremental when showing details on the backup.
Another flag, ``has_dependent_backups``, returned when showing backup
details, will indicate whether the backup has dependent backups.
If it is ``true``, attempting to delete this backup will fail.
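The ``is_incremental`` and ``has_dependent_backups`` flags appear in the
backup details, which can be displayed with the :command:`cinder backup-show`
command, for example:

.. code-block:: console

   $ cinder --os-volume-api-version 2 backup-show BACKUP_ID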
A new configuration option ``backup_swift_block_size`` is introduced into
:file:`cinder.conf` for the default Swift backup driver. This is the size in
``cinder.conf`` for the default Swift backup driver. This is the size in
bytes that changes are tracked for incremental backups. The existing
``backup_swift_object_size`` option, the size in bytes of Swift backup
objects, has to be a multiple of ``backup_swift_block_size``. The default
@ -66,9 +69,11 @@ back end. This option enables or disables the timer. It is enabled by default
to send the periodic progress notifications to the Telemetry service.
This command also returns a backup ID. Use this backup ID when restoring
the volume::
the volume:
$ cinder backup-restore BACKUP_ID
.. code-block:: console
$ cinder backup-restore BACKUP_ID
When restoring from a full backup, it is a full restore.
@ -79,42 +84,42 @@ laying on top of it in order.
You can view a backup list with the :command:`cinder backup-list`
command. Optional arguments to clarify the status of your backups
include: running ``--name``, ``--status``, and ``--volume-id`` to filter
through backups by the specified name, status, or volume-id. Search
with ``--all-tenants`` for details of the tenants associated
with the listed backups.
include :option:`--name`, :option:`--status`, and
:option:`--volume-id` to filter through backups by the specified name,
status, or volume-id. Search with :option:`--all-tenants` for details of the
tenants associated with the listed backups.
Because volume backups are dependent on the Block Storage database, you must
also back up your Block Storage database regularly to ensure data recovery.
.. note::
Alternatively, you can export and save the metadata of selected volume
backups. Doing so precludes the need to back up the entire Block Storage
database. This is useful if you need only a small subset of volumes to
survive a catastrophic database failure.
Alternatively, you can export and save the metadata of selected volume
backups. Doing so precludes the need to back up the entire Block Storage
database. This is useful if you need only a small subset of volumes to
survive a catastrophic database failure.
If you specify a UUID encryption key when setting up the volume
specifications, the backup metadata ensures that the key will remain valid
when you back up and restore the volume.
If you specify a UUID encryption key when setting up the volume
specifications, the backup metadata ensures that the key will remain valid
when you back up and restore the volume.
For more information about how to export and import volume backup metadata,
see the section called :ref:`volume_backups_export_import`.
For more information about how to export and import volume backup metadata,
see the section called :ref:`volume_backups_export_import`.
By default, the swift object store is used for the backup repository.
If instead you want to use an NFS export as the backup repository, add the
following configuration options to the ``[DEFAULT]`` section of the
:file:`cinder.conf` file and restart the Block Storage services:
``cinder.conf`` file and restart the Block Storage services:
.. code-block:: ini
backup_driver = cinder.backup.drivers.nfs
backup_share = HOST:EXPORT_PATH
For the ``backup_share`` option, replace *HOST* with the DNS resolvable
For the ``backup_share`` option, replace ``HOST`` with the DNS resolvable
host name or the IP address of the storage server for the NFS share, and
*EXPORT_PATH* with the path to that share. If your environment requires
``EXPORT_PATH`` with the path to that share. If your environment requires
that non-default mount options be specified for the share, set these as
follows:
@ -122,7 +127,7 @@ follows:
backup_mount_options = MOUNT_OPTIONS
*MOUNT_OPTIONS* is a comma-separated string of NFS mount options as detailed
``MOUNT_OPTIONS`` is a comma-separated string of NFS mount options as detailed
in the NFS man page.
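Putting these options together, the relevant ``[DEFAULT]`` entries for an
NFS backup repository might look like this sketch (the server address,
export path, and mount options are illustrative):

.. code-block:: ini

   [DEFAULT]
   backup_driver = cinder.backup.drivers.nfs
   backup_share = 10.0.0.5:/srv/cinder-backups
   backup_mount_options = vers=3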
There are several other options whose default values may be overridden as
@ -153,6 +158,8 @@ states due to problems like the database or rabbitmq being down. In situations
like these, resetting the state of the backup can restore it to a functional
status.
Run this command to restore the state of a backup::
Run this command to restore the state of a backup:
.. code-block:: console
$ cinder backup-reset-state [--state STATE] BACKUP_ID-1 BACKUP_ID-2 ...


@ -16,11 +16,13 @@ the database used by the Block Storage service.
You can, however, export the metadata of a volume backup. To do so, run
this command as an OpenStack ``admin`` user (presumably, after creating
a volume backup)::
a volume backup):
$ cinder backup-export BACKUP_ID
.. code-block:: console
Where *BACKUP_ID* is the volume backup's ID. This command should return the
$ cinder backup-export BACKUP_ID
Where ``BACKUP_ID`` is the volume backup's ID. This command should return the
backup's corresponding database information as encoded string metadata.
Exporting and storing this encoded string metadata allows you to completely
@ -44,11 +46,13 @@ import the backup metadata to the Block Storage database and then restore
the backup.
To import backup metadata, run the following command as an OpenStack
``admin``::
``admin``:
$ cinder backup-import METADATA
.. code-block:: console
Where *METADATA* is the backup metadata exported earlier.
$ cinder backup-import METADATA
Where ``METADATA`` is the backup metadata exported earlier.
Once you have imported the backup metadata into a Block Storage database,
restore the volume (see the section called :ref:`volume_backups`).
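Taken together, a metadata-based recovery might follow this sketch, where
``BACKUP_ID`` and ``METADATA`` are the values described above:

.. code-block:: console

   $ cinder backup-export BACKUP_ID
   $ cinder backup-import METADATA
   $ cinder backup-restore BACKUP_ID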


@ -38,11 +38,11 @@ volume from one to the other. This scenario uses the third migration flow.
First, list the available back-ends:
.. code::
.. code-block:: console
# cinder get-pools
.. code::
.. code-block:: console
+----------+----------------------------------------------------+
| Property | Value |
@ -61,7 +61,7 @@ First, list the available back-ends:
You can also list the available back-ends as follows:
.. code::
.. code-block:: console
# cinder-manage host list
server1@lvmstorage-1 zone1
@ -73,11 +73,11 @@ But it needs to add pool name in the end. For example,
Next, as the admin user, you can see the current status of the volume
(replace the example ID with your own):
.. code::
.. code-block:: console
$ cinder show 6088f80a-f116-4331-ad48-9afb0dfb196c
.. code::
.. code-block:: console
+--------------------------------+--------------------------------------+
| Property | Value |
@ -125,14 +125,14 @@ Note these attributes:
On nodes that run CentOS, Fedora, openSUSE, Red Hat Enterprise Linux,
or SUSE Linux Enterprise, run:
.. code::
.. code-block:: console
# service openstack-cinder-volume stop
# chkconfig openstack-cinder-volume off
On nodes that run Ubuntu or Debian, run:
.. code::
.. code-block:: console
# service cinder-volume stop
# chkconfig cinder-volume off
@ -142,7 +142,7 @@ Note these attributes:
Migrate this volume to the second LVM back-end:
.. code::
.. code-block:: console
$ cinder migrate 6088f80a-f116-4331-ad48-9afb0dfb196c \
server2@lvmstorage-2#lvmstorage-2
@ -153,7 +153,7 @@ migration. While migrating, the ``migstat`` attribute shows states such as
host attribute shows the original ``host``. On success, in this example, the
output looks like:
.. code::
.. code-block:: console
+--------------------------------+--------------------------------------+
| Property | Value |


@ -15,13 +15,12 @@ Enable volume number weigher
To enable a volume number weigher, set the
``scheduler_default_weighers`` option to ``VolumeNumberWeigher`` in the
:file:`cinder.conf` file to define ``VolumeNumberWeigher``
``cinder.conf`` file to define ``VolumeNumberWeigher``
as the selected weigher.
Configure multiple-storage back ends
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To configure ``VolumeNumberWeigher``, use ``LVMVolumeDriver``
as the volume driver.
@ -46,13 +45,17 @@ This example configuration defines two back ends:
Volume type
~~~~~~~~~~~
Define a volume type in Block Storage::
Define a volume type in Block Storage:
$ cinder type-create lvm
.. code-block:: console
Create an extra specification that links the volume type to a back-end name::
$ cinder type-create lvm
$ cinder type-key lvm set volume_backend_name=LVM
Create an extra specification that links the volume type to a back-end name:
.. code-block:: console
$ cinder type-key lvm set volume_backend_name=LVM
This example creates an ``lvm`` volume type with
``volume_backend_name=LVM`` as extra specifications.
@ -61,14 +64,18 @@ Usage
~~~~~
To create six 1-GB volumes, run the
:command:`cinder create --volume-type lvm 1` command six times::
:command:`cinder create --volume-type lvm 1` command six times:
.. code-block:: console
$ cinder create --volume-type lvm 1
This command creates three volumes in ``stack-volumes`` and
three volumes in ``stack-volumes-1``.
List the available volumes::
List the available volumes:
.. code-block:: console
# lvs
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert