Merge "Migrate configuration-reference to Cinder repo"
commit 050a1eda06

doc/source/config-reference/block-storage.rst (new file, 27 lines)
@@ -0,0 +1,27 @@

===================================
Block Storage Service Configuration
===================================

.. toctree::
   :maxdepth: 1

   block-storage/block-storage-overview.rst
   block-storage/volume-drivers.rst
   block-storage/backup-drivers.rst
   block-storage/schedulers.rst
   block-storage/logs.rst
   block-storage/fc-zoning.rst
   block-storage/nested-quota.rst
   block-storage/volume-encryption.rst
   block-storage/config-options.rst
   block-storage/samples/index.rst
   tables/conf-changes/cinder.rst

.. note::

   The common configurations for shared services and libraries,
   such as database connections and RPC messaging,
   are described at :doc:`common-configurations`.

The Block Storage service works with many different storage
drivers that you can configure by using these instructions.
doc/source/config-reference/block-storage/backup-drivers.rst (new file, 24 lines)
@@ -0,0 +1,24 @@

==============
Backup drivers
==============

.. sort the drivers by open source software
.. and the drivers for proprietary components

.. toctree::

   backup/ceph-backup-driver.rst
   backup/glusterfs-backup-driver.rst
   backup/nfs-backup-driver.rst
   backup/posix-backup-driver.rst
   backup/swift-backup-driver.rst
   backup/gcs-backup-driver.rst
   backup/tsm-backup-driver.rst

This section describes how to configure the cinder-backup service and
its drivers.

The backup drivers are included with the `Block Storage repository
<https://git.openstack.org/cgit/openstack/cinder/>`_. To set a backup
driver, use the ``backup_driver`` flag. By default, there is no backup
driver enabled.
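For example, a minimal ``cinder.conf`` snippet that selects the Swift
backup driver (any of the driver classes listed on the following pages
can be substituted) would look like this:

.. code-block:: ini

   [DEFAULT]
   backup_driver = cinder.backup.drivers.swift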
@@ -0,0 +1,56 @@

==================
Ceph backup driver
==================

The Ceph backup driver backs up volumes of any type to a Ceph back-end
store. The driver can also detect whether the volume to be backed up is
a Ceph RBD volume, and if so, it tries to perform incremental and
differential backups.

For source Ceph RBD volumes, you can perform backups within the same
Ceph pool (not recommended). You can also perform backups between
different Ceph pools and between different Ceph clusters.

At the time of writing, differential backup support in Ceph/librbd was
quite new. This driver attempts a differential backup in the first
instance. If the differential backup fails, the driver falls back to a
full backup (copy).

If incremental backups are used, multiple backups of the same volume are
stored as snapshots so that minimal space is consumed in the backup
store. It takes far less time to restore a volume than to take a full
copy.

.. note::

   Block Storage enables you to:

   - Restore to a new volume, which is the default and recommended
     action.

   - Restore to the original volume from which the backup was taken.
     The restore action takes a full copy because this is the safest
     action.

To enable the Ceph backup driver, include the following option in the
``cinder.conf`` file:

.. code-block:: ini

   backup_driver = cinder.backup.drivers.ceph

The following configuration options are available for the Ceph backup
driver.

.. include:: ../../tables/cinder-backups_ceph.rst

This example shows the default options for the Ceph backup driver.

.. code-block:: ini

   backup_ceph_conf = /etc/ceph/ceph.conf
   backup_ceph_user = cinder-backup
   backup_ceph_chunk_size = 134217728
   backup_ceph_pool = backups
   backup_ceph_stripe_unit = 0
   backup_ceph_stripe_count = 0
@@ -0,0 +1,18 @@

=======================================
Google Cloud Storage backup driver
=======================================

The Google Cloud Storage (GCS) backup driver backs up volumes of any type to
Google Cloud Storage.

To enable the GCS backup driver, include the following option in the
``cinder.conf`` file:

.. code-block:: ini

   backup_driver = cinder.backup.drivers.google

The following configuration options are available for the GCS backup
driver.

.. include:: ../../tables/cinder-backups_gcs.rst
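As a rough illustration only (the authoritative option names and defaults
are those in the table included above), a GCS backup configuration usually
also names the target bucket and the service-account credentials; the
values below are placeholders:

.. code-block:: ini

   [DEFAULT]
   backup_driver = cinder.backup.drivers.google
   backup_gcs_bucket = my-cinder-backups
   backup_gcs_project_id = my-gcp-project
   backup_gcs_credential_file = /etc/cinder/gcs-credentials.json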
@@ -0,0 +1,17 @@

=======================
GlusterFS backup driver
=======================

The GlusterFS backup driver backs up volumes of any type to GlusterFS.

To enable the GlusterFS backup driver, include the following option in the
``cinder.conf`` file:

.. code-block:: ini

   backup_driver = cinder.backup.drivers.glusterfs

The following configuration options are available for the GlusterFS backup
driver.

.. include:: ../../tables/cinder-backups_glusterfs.rst
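As a sketch only (check the table above for the exact option names supported
by your release), the driver also needs to know which Gluster volume to mount
as the backup target; the share value here is a placeholder:

.. code-block:: ini

   [DEFAULT]
   backup_driver = cinder.backup.drivers.glusterfs
   glusterfs_backup_share = gluster.example.com:/backup_vol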
@@ -0,0 +1,18 @@

=================
NFS backup driver
=================

The backup driver for the NFS back end backs up volumes of any type to
an NFS exported backup repository.

To enable the NFS backup driver, include the following option in the
``[DEFAULT]`` section of the ``cinder.conf`` file:

.. code-block:: ini

   backup_driver = cinder.backup.drivers.nfs

The following configuration options are available for the NFS back-end
backup driver.

.. include:: ../../tables/cinder-backups_nfs.rst
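A minimal sketch of such a configuration, assuming the ``backup_share``
option listed in the table above and using placeholder values:

.. code-block:: ini

   [DEFAULT]
   backup_driver = cinder.backup.drivers.nfs
   backup_share = nfs.example.com:/srv/cinder-backups
   backup_mount_options = vers=4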
@@ -0,0 +1,18 @@

================================
POSIX file systems backup driver
================================

The POSIX file systems backup driver backs up volumes of any type to
POSIX file systems.

To enable the POSIX file systems backup driver, include the following
option in the ``cinder.conf`` file:

.. code-block:: ini

   backup_driver = cinder.backup.drivers.posix

The following configuration options are available for the POSIX
file systems backup driver.

.. include:: ../../tables/cinder-backups_posix.rst
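As an illustrative sketch (the exact option names are those in the table
above), the driver writes backups under a directory on the local file
system; the path below is a placeholder:

.. code-block:: ini

   [DEFAULT]
   backup_driver = cinder.backup.drivers.posix
   backup_posix_path = /var/lib/cinder/backup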
@@ -0,0 +1,52 @@

===================
Swift backup driver
===================

The backup driver for the swift back end performs a volume backup to an
object storage system.

To enable the swift backup driver, include the following option in the
``cinder.conf`` file:

.. code-block:: ini

   backup_driver = cinder.backup.drivers.swift

The following configuration options are available for the Swift back-end
backup driver.

.. include:: ../../tables/cinder-backups_swift.rst

To enable the swift backup driver for the 1.0, 2.0, or 3.0 authentication
version, specify ``1``, ``2``, or ``3`` respectively. For example:

.. code-block:: ini

   backup_swift_auth_version = 2

In addition, the 2.0 authentication system requires the definition of the
``backup_swift_tenant`` setting:

.. code-block:: ini

   backup_swift_tenant = <None>

This example shows the default options for the Swift back-end backup
driver.

.. code-block:: ini

   backup_swift_url = http://localhost:8080/v1/AUTH_
   backup_swift_auth_url = http://localhost:5000/v3
   backup_swift_auth = per_user
   backup_swift_auth_version = 1
   backup_swift_user = <None>
   backup_swift_user_domain = <None>
   backup_swift_key = <None>
   backup_swift_container = volumebackups
   backup_swift_object_size = 52428800
   backup_swift_project = <None>
   backup_swift_project_domain = <None>
   backup_swift_retry_attempts = 3
   backup_swift_retry_backoff = 2
   backup_compression_algorithm = zlib
@@ -0,0 +1,31 @@

========================================
IBM Tivoli Storage Manager backup driver
========================================

The IBM Tivoli Storage Manager (TSM) backup driver enables performing
volume backups to a TSM server.

The TSM client should be installed and configured on the machine running
the cinder-backup service. See the IBM Tivoli Storage Manager
Backup-Archive Client Installation and User's Guide for details on
installing the TSM client.

To enable the IBM TSM backup driver, include the following option in
``cinder.conf``:

.. code-block:: ini

   backup_driver = cinder.backup.drivers.tsm

The following configuration options are available for the TSM backup
driver.

.. include:: ../../tables/cinder-backups_tsm.rst

This example shows the default options for the TSM backup driver.

.. code-block:: ini

   backup_tsm_volume_prefix = backup
   backup_tsm_password = password
   backup_tsm_compression = True
@@ -0,0 +1,89 @@

=========================================
Introduction to the Block Storage service
=========================================

The Block Storage service provides persistent block storage
resources that Compute instances can consume. This includes
secondary attached storage similar to the Amazon Elastic Block Storage
(EBS) offering. In addition, you can write images to a Block Storage
device for Compute to use as a bootable persistent instance.

The Block Storage service differs slightly from the Amazon EBS offering.
The Block Storage service does not provide a shared storage solution
like NFS. With the Block Storage service, you can attach a device to
only one instance.

The Block Storage service provides:

- ``cinder-api`` - a WSGI app that authenticates and routes requests
  throughout the Block Storage service. It supports the OpenStack APIs
  only, although there is a translation that can be done through
  Compute's EC2 interface, which calls in to the Block Storage client.

- ``cinder-scheduler`` - schedules and routes requests to the appropriate
  volume service. Depending upon your configuration, this may be simple
  round-robin scheduling to the running volume services, or it can be
  more sophisticated through the use of the Filter Scheduler. The
  Filter Scheduler is the default and enables filters on things like
  Capacity, Availability Zone, Volume Types, and Capabilities as well
  as custom filters.

- ``cinder-volume`` - manages Block Storage devices, specifically the
  back-end devices themselves.

- ``cinder-backup`` - provides a means to back up a Block Storage volume to
  OpenStack Object Storage (swift).

The Block Storage service contains the following components:

- **Back-end Storage Devices** - the Block Storage service requires some
  form of back-end storage that the service is built on. The default
  implementation is to use LVM on a local volume group named
  "cinder-volumes." In addition to the base driver implementation, the
  Block Storage service also provides the means to add support for
  other storage devices to be utilized, such as external RAID arrays or
  other storage appliances. These back-end storage devices may have
  custom block sizes when using KVM or QEMU as the hypervisor.

- **Users and Tenants (Projects)** - the Block Storage service can be
  used by many different cloud computing consumers or customers
  (tenants on a shared system), using role-based access assignments.
  Roles control the actions that a user is allowed to perform. In the
  default configuration, most actions do not require a particular role,
  but this can be configured by the system administrator in the
  appropriate ``policy.json`` file that maintains the rules. A user's
  access to particular volumes is limited by tenant, but the user name
  and password are assigned per user. Key pairs granting access to a
  volume are enabled per user, but quotas to control resource
  consumption across available hardware resources are per tenant.

  For tenants, quota controls are available to limit:

  - The number of volumes that can be created.

  - The number of snapshots that can be created.

  - The total number of GBs allowed per tenant (shared between
    snapshots and volumes).

  You can revise the default quota values with the Block Storage CLI,
  so the limits placed by quotas are editable by admin users (see the
  example at the end of this section).

- **Volumes, Snapshots, and Backups** - the basic resources offered by
  the Block Storage service are volumes, snapshots (which are derived
  from volumes), and volume backups:

  - **Volumes** - allocated block storage resources that can be
    attached to instances as secondary storage or they can be used as
    the root store to boot instances. Volumes are persistent R/W block
    storage devices most commonly attached to the compute node through
    iSCSI.

  - **Snapshots** - a read-only, point-in-time copy of a volume. The
    snapshot can be created from a volume that is currently in use
    (through the use of ``--force True``) or in an available state.
    The snapshot can then be used to create a new volume through
    create from snapshot.

  - **Backups** - an archived copy of a volume currently stored in
    Object Storage (swift).
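As a hedged example of adjusting those quotas (the project identifier and
values are placeholders), an administrator might raise a project's volume,
snapshot, and capacity limits like this:

.. code-block:: console

   $ openstack quota set --volumes 20 --snapshots 20 --gigabytes 1000 <project-id>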
doc/source/config-reference/block-storage/config-options.rst (new file, 35 lines)
@@ -0,0 +1,35 @@

==================
Additional options
==================

These options can also be set in the ``cinder.conf`` file.

.. include:: ../tables/cinder-api.rst
.. include:: ../tables/cinder-auth.rst
.. include:: ../tables/cinder-backups.rst
.. include:: ../tables/cinder-block-device.rst
.. include:: ../tables/cinder-common.rst
.. include:: ../tables/cinder-compute.rst
.. include:: ../tables/cinder-coordination.rst
.. include:: ../tables/cinder-debug.rst
.. include:: ../tables/cinder-drbd.rst
.. include:: ../tables/cinder-emc.rst
.. include:: ../tables/cinder-eternus.rst
.. include:: ../tables/cinder-flashsystem.rst
.. include:: ../tables/cinder-hgst.rst
.. include:: ../tables/cinder-hpelefthand.rst
.. include:: ../tables/cinder-hpexp.rst
.. include:: ../tables/cinder-huawei.rst
.. include:: ../tables/cinder-hyperv.rst
.. include:: ../tables/cinder-images.rst
.. include:: ../tables/cinder-nas.rst
.. include:: ../tables/cinder-profiler.rst
.. include:: ../tables/cinder-pure.rst
.. include:: ../tables/cinder-quota.rst
.. include:: ../tables/cinder-redis.rst
.. include:: ../tables/cinder-san.rst
.. include:: ../tables/cinder-scheduler.rst
.. include:: ../tables/cinder-scst.rst
.. include:: ../tables/cinder-storage.rst
.. include:: ../tables/cinder-tegile.rst
.. include:: ../tables/cinder-zones.rst
@@ -0,0 +1,244 @@

===============
Blockbridge EPS
===============

Introduction
~~~~~~~~~~~~

Blockbridge is software that transforms commodity infrastructure into
secure multi-tenant storage that operates as a programmable service. It
provides automatic encryption, secure deletion, quality of service (QoS),
replication, and programmable security capabilities on your choice of
hardware. Blockbridge uses micro-segmentation to provide isolation that allows
you to concurrently operate OpenStack, Docker, and bare-metal workflows on
shared resources. When used with OpenStack, isolated management domains are
dynamically created on a per-project basis. All volumes and clones, within and
between projects, are automatically cryptographically isolated and implement
secure deletion.

Architecture reference
~~~~~~~~~~~~~~~~~~~~~~

**Blockbridge architecture**

.. figure:: ../../figures/bb-cinder-fig1.png
   :width: 100%

Control paths
-------------

The Blockbridge driver is packaged with the core distribution of
OpenStack. Operationally, it executes in the context of the Block
Storage service. The driver communicates with an OpenStack-specific API
provided by the Blockbridge EPS platform. Blockbridge optionally
communicates with the Identity, Compute, and Block Storage
services.

Block Storage API
-----------------

Blockbridge is API-driven software-defined storage. The system
implements a native HTTP API that is tailored to the specific needs of
OpenStack. Each Block Storage service operation maps to a single
back-end API request that provides ACID semantics. The API is
specifically designed to reduce, if not eliminate, the possibility of
inconsistencies between the Block Storage service and external storage
infrastructure in the event of hardware, software, or data center
failure.

Extended management
-------------------

OpenStack users may utilize Blockbridge interfaces to manage
replication, auditing, statistics, and performance information on a
per-project and per-volume basis. In addition, they can manage low-level
data security functions including verification of data authenticity and
encryption key delegation. Native integration with the Identity service
allows tenants to use a single set of credentials. Integration with the
Block Storage and Compute services provides dynamic metadata mapping
when using Blockbridge management APIs and tools.

Attribute-based provisioning
----------------------------

Blockbridge organizes resources using descriptive identifiers called
*attributes*. Attributes are assigned by administrators of the
infrastructure. They are used to describe the characteristics of storage
in an application-friendly way. Applications construct queries that
describe storage provisioning constraints and the Blockbridge storage
stack assembles the resources as described.

Any given instance of a Blockbridge volume driver specifies a *query*
for resources. For example, a query could specify
``'+ssd +10.0.0.0 +6nines -production iops.reserve=1000
capacity.reserve=30%'``. This query is satisfied by selecting SSD
resources, accessible on the 10.0.0.0 network, with high resiliency, for
non-production workloads, with guaranteed IOPS of 1000 and a storage
reservation for 30% of the volume capacity specified at create time.
Queries and parameters are completely administrator defined: they
reflect the layout, resource, and organizational goals of a specific
deployment.

Supported operations
~~~~~~~~~~~~~~~~~~~~

- Create, delete, clone, attach, and detach volumes
- Create and delete volume snapshots
- Create a volume from a snapshot
- Copy an image to a volume
- Copy a volume to an image
- Extend a volume
- Get volume statistics

Supported protocols
~~~~~~~~~~~~~~~~~~~

Blockbridge provides iSCSI access to storage. A unique iSCSI data fabric
is programmatically assembled when a volume is attached to an instance.
A fabric is disassembled when a volume is detached from an instance.
Each volume is an isolated SCSI device that supports persistent
reservations.

Configuration steps
~~~~~~~~~~~~~~~~~~~

.. _cg_create_an_authentication_token:

Create an authentication token
------------------------------

Whenever possible, avoid using password-based authentication. Even if
you have created a role-restricted administrative user via Blockbridge,
token-based authentication is preferred. You can generate persistent
authentication tokens using the Blockbridge command-line tool as
follows:

.. code-block:: console

   $ bb -H bb-mn authorization create --notes "OpenStack" --restrict none
   Authenticating to https://bb-mn/api

   Enter user or access token: system
   Password for system:
   Authenticated; token expires in 3599 seconds.

   == Authorization: ATH4762894C40626410
   notes                 OpenStack
   serial                ATH4762894C40626410
   account               system (ACT0762594C40626440)
   user                  system (USR1B62094C40626440)
   enabled               yes
   created at            2015-10-24 22:08:48 +0000
   access type           online
   token suffix          xaKUy3gw
   restrict              none

   == Access Token
   access token          1/elvMWilMvcLAajl...3ms3U1u2KzfaMw6W8xaKUy3gw

   *** Remember to record your access token!

Create volume type
------------------

Before configuring and enabling the Blockbridge volume driver, register
an OpenStack volume type and associate it with a
``volume_backend_name``. In this example, a volume type, 'Production',
is associated with the ``volume_backend_name`` 'blockbridge\_prod':

.. code-block:: console

   $ openstack volume type create Production
   $ openstack volume type set --property volume_backend_name=blockbridge_prod Production

Specify volume driver
---------------------

Configure the Blockbridge volume driver in ``/etc/cinder/cinder.conf``.
Your ``volume_backend_name`` must match the value specified in the
:command:`openstack volume type set` command in the previous step.

.. code-block:: ini

   volume_driver = cinder.volume.drivers.blockbridge.BlockbridgeISCSIDriver
   volume_backend_name = blockbridge_prod

Specify API endpoint and authentication
---------------------------------------

Configure the API endpoint and authentication. The following example
uses an authentication token. You must create your own as described in
:ref:`cg_create_an_authentication_token`.

.. code-block:: ini

   blockbridge_api_host = [ip or dns of management cluster]
   blockbridge_auth_token = 1/elvMWilMvcLAajl...3ms3U1u2KzfaMw6W8xaKUy3gw

Specify resource query
----------------------

By default, a single pool is configured (implied) with a default
resource query of ``'+openstack'``. Within Blockbridge, datastore
resources that advertise the 'openstack' attribute will be selected to
fulfill OpenStack provisioning requests. If you prefer a more specific
query, define a custom pool configuration.

.. code-block:: ini

   blockbridge_pools = Production: +production +qos iops.reserve=5000

Pools support storage systems that offer multiple classes of service.
You may wish to configure multiple pools to implement more sophisticated
scheduling capabilities.

Configuration options
~~~~~~~~~~~~~~~~~~~~~

.. include:: ../../tables/cinder-blockbridge.rst

.. _cg_configuration_example:

Configuration example
~~~~~~~~~~~~~~~~~~~~~

``cinder.conf`` example file

.. code-block:: ini

   [DEFAULT]
   enabled_backends = bb_devel bb_prod

   [bb_prod]
   volume_driver = cinder.volume.drivers.blockbridge.BlockbridgeISCSIDriver
   volume_backend_name = blockbridge_prod
   blockbridge_api_host = [ip or dns of management cluster]
   blockbridge_auth_token = 1/elvMWilMvcLAajl...3ms3U1u2KzfaMw6W8xaKUy3gw
   blockbridge_pools = Production: +production +qos iops.reserve=5000

   [bb_devel]
   volume_driver = cinder.volume.drivers.blockbridge.BlockbridgeISCSIDriver
   volume_backend_name = blockbridge_devel
   blockbridge_api_host = [ip or dns of management cluster]
   blockbridge_auth_token = 1/elvMWilMvcLAajl...3ms3U1u2KzfaMw6W8xaKUy3gw
   blockbridge_pools = Development: +development

Multiple volume types
~~~~~~~~~~~~~~~~~~~~~

Volume *types* are exposed to tenants, *pools* are not. To offer
multiple classes of storage to OpenStack tenants, you should define
multiple volume types. Simply repeat the process above for each desired
type. Be sure to specify a unique ``volume_backend_name`` and pool
configuration for each type. The
:ref:`cinder.conf <cg_configuration_example>` example included with
this documentation illustrates configuration of multiple types.

Testing resources
~~~~~~~~~~~~~~~~~

Blockbridge is freely available for testing purposes and deploys in
seconds as a Docker container. This is the same container used to run
continuous integration for OpenStack. For more information, visit
`www.blockbridge.io <http://www.blockbridge.io>`__.
@@ -0,0 +1,109 @@

=============================
Ceph RADOS Block Device (RBD)
=============================

If you use KVM or QEMU as your hypervisor, you can configure the Compute
service to use `Ceph RADOS block devices
(RBD) <http://ceph.com/ceph-storage/block-storage/>`__ for volumes.

Ceph is a massively scalable, open source, distributed storage system.
It is comprised of an object store, a block store, and a POSIX-compliant
distributed file system. The platform can auto-scale to the exabyte
level and beyond. It runs on commodity hardware, is self-healing and
self-managing, and has no single point of failure. Ceph is in the Linux
kernel and is integrated with the OpenStack cloud operating system. Due
to its open-source nature, you can install and use this portable storage
platform in public or private clouds.

.. figure:: ../../figures/ceph-architecture.png

   Ceph architecture

RADOS
~~~~~

Ceph is based on Reliable Autonomic Distributed Object Store (RADOS).
RADOS distributes objects across the storage cluster and replicates
objects for fault tolerance. RADOS contains the following major
components:

*Object Storage Device (OSD) Daemon*
   The storage daemon for the RADOS service, which interacts with the
   OSD (physical or logical storage unit for your data).
   You must run this daemon on each server in your cluster. For each
   OSD, you can have an associated hard disk drive. For performance
   purposes, pool your hard disk drives with RAID arrays, logical volume
   management (LVM), or B-tree file system (Btrfs) pooling. By default,
   the following pools are created: data, metadata, and RBD.

*Meta-Data Server (MDS)*
   Stores metadata. MDSs build a POSIX file
   system on top of objects for Ceph clients. However, if you do not use
   the Ceph file system, you do not need a metadata server.

*Monitor (MON)*
   A lightweight daemon that handles all communications
   with external applications and clients. It also provides a consensus
   for distributed decision making in a Ceph/RADOS cluster. For
   instance, when you mount a Ceph share on a client, you point to the
   address of a MON server. It checks the state and the consistency of
   the data. In an ideal setup, you must run at least three ``ceph-mon``
   daemons on separate servers.

Ceph developers recommend XFS for production deployments, and Btrfs for
testing, development, and any non-critical deployments. Btrfs has the
correct feature set and roadmap to serve Ceph in the long term, but XFS
and ext4 provide the necessary stability for today's deployments.

.. note::

   If using Btrfs, ensure that you use the correct version (see `Ceph
   Dependencies <http://ceph.com/docs/master/start/os-recommendations/>`__).

For more information about usable file systems, see
`ceph.com/ceph-storage/file-system/ <http://ceph.com/ceph-storage/file-system/>`__.

Ways to store, use, and expose data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To store and access your data, you can use the following storage
systems:

*RADOS*
   Use as an object, default storage mechanism.

*RBD*
   Use as a block device. The Linux kernel RBD (RADOS block
   device) driver allows striping a Linux block device over multiple
   distributed object store data objects. It is compatible with the KVM
   RBD image.

*CephFS*
   Use as a file, POSIX-compliant file system.

Ceph exposes RADOS; you can access it through the following interfaces:

*RADOS Gateway*
   OpenStack Object Storage and Amazon S3-compatible
   RESTful interface (see `RADOS_Gateway
   <http://ceph.com/wiki/RADOS_Gateway>`__).

*librados*
   and its related C/C++ bindings.

*RBD and QEMU-RBD*
   Linux kernel and QEMU block devices that stripe
   data across multiple objects.

Driver options
~~~~~~~~~~~~~~

The following table contains the configuration options supported by the
Ceph RADOS Block Device driver.

.. note::

   The ``volume_tmp_dir`` option has been deprecated and replaced by
   ``image_conversion_dir``.

.. include:: ../../tables/cinder-storage_ceph.rst
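For orientation only (the authoritative option list is the table above), a
back-end section for this driver in ``cinder.conf`` commonly looks like the
sketch below; the pool, user, and UUID values are placeholders that must
match your Ceph and libvirt secret configuration:

.. code-block:: ini

   [ceph]
   volume_driver = cinder.volume.drivers.rbd.RBDDriver
   volume_backend_name = ceph
   rbd_pool = volumes
   rbd_ceph_conf = /etc/ceph/ceph.conf
   rbd_user = cinder
   rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337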
@@ -0,0 +1,8 @@

=======================
CloudByte volume driver
=======================

CloudByte Block Storage driver configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. include:: ../../tables/cinder-cloudbyte.rst
@@ -0,0 +1,93 @@

=======================
Coho Data volume driver
=======================

The Coho DataStream Scale-Out Storage allows your Block Storage service to
scale seamlessly. The architecture consists of commodity storage servers
with SDN ToR switches. Leveraging an SDN OpenFlow controller allows you
to scale storage horizontally, while avoiding storage and network bottlenecks
through intelligent load balancing and parallelized workloads. High-performance
PCIe NVMe flash, paired with traditional hard disk drives (HDD) or solid-state
drives (SSD), delivers low-latency performance even with highly mixed workloads
in large-scale environments.

Coho Data's storage features include real-time, instance-level
performance and capacity reporting via API or UI, and
single-IP storage endpoint access.

Supported operations
~~~~~~~~~~~~~~~~~~~~

* Create, delete, attach, detach, retype, clone, and extend volumes.
* Create, list, and delete volume snapshots.
* Create a volume from a snapshot.
* Copy a volume to an image.
* Copy an image to a volume.
* Create a thin provisioned volume.
* Get volume statistics.

Coho Data QoS support
~~~~~~~~~~~~~~~~~~~~~

QoS support for the Coho Data driver includes the ability to set the
following capabilities in the OpenStack Block Storage API
``cinder.api.contrib.qos_specs_manage`` QoS specs extension module:

* **maxIOPS** - The maximum number of IOPS allowed for this volume.

* **maxMBS** - The maximum throughput allowed for this volume.

The QoS keys above must be created and associated with a volume type.
For information about how to set the key-value pairs and associate
them with a volume type, see the `volume qos
<https://docs.openstack.org/developer/python-openstackclient/command-objects/volume-qos.html>`_
section in the OpenStackClient command list (see also the example after
the note below).

.. note::

   If you change a volume type with QoS to a new volume type
   without QoS, the QoS configuration settings will be removed.
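As an illustration only (the key names are those defined above; the QoS spec
name is a placeholder and the ``coho-1`` volume type is created later in this
section), the QoS keys can be attached to a volume type like this:

.. code-block:: console

   $ openstack volume qos create --property maxIOPS=5000 --property maxMBS=500 coho-qos
   $ openstack volume qos associate coho-qos coho-1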
System requirements
~~~~~~~~~~~~~~~~~~~

* NFS client on the Block Storage controller.

Coho Data Block Storage driver configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Create a cinder volume type.

   .. code-block:: console

      $ openstack volume type create coho-1

#. Edit the OpenStack Block Storage service configuration file.
   The following sample ``/etc/cinder/cinder.conf`` configuration lists the
   relevant settings for a typical Block Storage service using a single
   Coho Data storage back end:

   .. code-block:: ini

      [DEFAULT]
      enabled_backends = coho-1
      default_volume_type = coho-1

      [coho-1]
      volume_driver = cinder.volume.drivers.coho.CohoDriver
      volume_backend_name = coho-1
      nfs_shares_config = /etc/cinder/coho_shares
      nas_secure_file_operations = 'false'

#. Add your list of Coho Datastream NFS addresses to the file you specified
   with the ``nfs_shares_config`` option. For example, if the value of this
   option was set to ``/etc/cinder/coho_shares``, then:

   .. code-block:: console

      $ cat /etc/cinder/coho_shares
      <coho-nfs-ip>:/<export-path>

#. Restart the ``cinder-volume`` service to enable the Coho Data driver.

.. include:: ../../tables/cinder-coho.rst
@@ -0,0 +1,318 @@

=====================================
CoprHD FC, iSCSI, and ScaleIO drivers
=====================================

CoprHD is an open source software-defined storage controller and API platform.
It enables policy-based management and cloud automation of storage resources
for block, object, and file storage providers.
For more details, see `CoprHD <http://coprhd.org/>`_.

EMC ViPR Controller is the commercial offering of CoprHD. These same volume
drivers can also be considered as EMC ViPR Controller Block Storage drivers.


System requirements
~~~~~~~~~~~~~~~~~~~

CoprHD version 3.0 is required. Refer to the CoprHD documentation for
installation and configuration instructions.

If you are using these drivers to integrate with EMC ViPR Controller, use
EMC ViPR Controller 3.0.


Supported operations
~~~~~~~~~~~~~~~~~~~~

The following operations are supported:

- Create, delete, attach, detach, retype, clone, and extend volumes.
- Create, list, and delete volume snapshots.
- Create a volume from a snapshot.
- Copy a volume to an image.
- Copy an image to a volume.
- Clone a volume.
- Extend a volume.
- Retype a volume.
- Get volume statistics.
- Create, delete, and update consistency groups.
- Create and delete consistency group snapshots.


Driver options
~~~~~~~~~~~~~~

The following table contains the configuration options specific to the
CoprHD volume driver.

.. include:: ../../tables/cinder-coprhd.rst


Preparation
~~~~~~~~~~~

This involves setting up the CoprHD environment first and then configuring
the CoprHD Block Storage driver.

CoprHD
------

The CoprHD environment must meet specific configuration requirements to
support the OpenStack Block Storage driver.

- CoprHD users must be assigned a Tenant Administrator role or a Project
  Administrator role for the project being used. CoprHD roles are configured
  by CoprHD Security Administrators. Consult the CoprHD documentation for
  details.

- A CoprHD system administrator must execute the following configurations
  using the CoprHD UI, CoprHD API, or CoprHD CLI:

  - Create a CoprHD virtual array.
  - Create a CoprHD virtual storage pool.
  - The virtual array designated for the iSCSI driver must have an IP network
    created with appropriate IP storage ports.
  - Designate a tenant for use.
  - Designate a project for use.

  .. note:: Use each back end to manage one virtual array and one virtual
     storage pool. However, the user can have multiple instances of the
     CoprHD Block Storage driver, sharing the same virtual array and virtual
     storage pool.

- A typical CoprHD virtual storage pool will have the following values
  specified:

  - Storage Type: Block
  - Provisioning Type: Thin
  - Protocol: iSCSI/Fibre Channel (FC)/ScaleIO
  - Multi-Volume Consistency: DISABLED OR ENABLED
  - Maximum Native Snapshots: A value greater than 0 allows the OpenStack user
    to take snapshots


CoprHD drivers - Single back end
--------------------------------

**cinder.conf**

#. Modify ``/etc/cinder/cinder.conf`` by adding the following lines,
   substituting values for your environment:

   .. code-block:: ini

      [coprhd-iscsi]
      volume_driver = cinder.volume.drivers.coprhd.iscsi.EMCCoprHDISCSIDriver
      volume_backend_name = coprhd-iscsi
      coprhd_hostname = <CoprHD-Host-Name>
      coprhd_port = 4443
      coprhd_username = <username>
      coprhd_password = <password>
      coprhd_tenant = <CoprHD-Tenant-Name>
      coprhd_project = <CoprHD-Project-Name>
      coprhd_varray = <CoprHD-Virtual-Array-Name>
      coprhd_emulate_snapshot = True or False, True if the CoprHD vpool has VMAX or VPLEX as the backing storage

#. If you use the ScaleIO back end, add the following lines:

   .. code-block:: ini

      coprhd_scaleio_rest_gateway_host = <IP or FQDN>
      coprhd_scaleio_rest_gateway_port = 443
      coprhd_scaleio_rest_server_username = <username>
      coprhd_scaleio_rest_server_password = <password>
      scaleio_verify_server_certificate = True or False
      scaleio_server_certificate_path = <path-of-certificate-for-validation>

#. Specify the driver using the ``enabled_backends`` parameter::

      enabled_backends = coprhd-iscsi

   .. note:: To utilize the Fibre Channel driver, replace the
      ``volume_driver`` line above with::

         volume_driver = cinder.volume.drivers.coprhd.fc.EMCCoprHDFCDriver

   .. note:: To utilize the ScaleIO driver, replace the ``volume_driver`` line
      above with::

         volume_driver = cinder.volume.drivers.coprhd.scaleio.EMCCoprHDScaleIODriver

   .. note:: Set ``coprhd_emulate_snapshot`` to True if the CoprHD vpool has
      VMAX or VPLEX as the back-end storage. For these types of back-end
      storage, when a user tries to create a snapshot, an actual volume
      gets created in the back end.

#. Modify the ``rpc_response_timeout`` value in ``/etc/cinder/cinder.conf`` to
   at least 5 minutes. If this entry does not already exist within the
   ``cinder.conf`` file, add it in the ``[DEFAULT]`` section:

   .. code-block:: ini

      [DEFAULT]
      # ...
      rpc_response_timeout = 300

#. Now, restart the ``cinder-volume`` service.

**Volume type creation and extra specs**

#. Create OpenStack volume types:

   .. code-block:: console

      $ openstack volume type create <typename>

#. Map the OpenStack volume type to the CoprHD virtual pool:

   .. code-block:: console

      $ openstack volume type set <typename> --property CoprHD:VPOOL=<CoprHD-PoolName>

#. Map the volume type created to the appropriate back-end driver:

   .. code-block:: console

      $ openstack volume type set <typename> --property volume_backend_name=<VOLUME_BACKEND_DRIVER>


CoprHD drivers - Multiple back ends
-----------------------------------

**cinder.conf**

#. Add or modify the following entries if you are planning to use multiple
   back-end drivers:

   .. code-block:: ini

      enabled_backends = coprhddriver-iscsi,coprhddriver-fc,coprhddriver-scaleio

#. Add the following at the end of the file:

   .. code-block:: ini

      [coprhddriver-iscsi]
      volume_driver = cinder.volume.drivers.coprhd.iscsi.EMCCoprHDISCSIDriver
      volume_backend_name = EMCCoprHDISCSIDriver
      coprhd_hostname = <CoprHD Host Name>
      coprhd_port = 4443
      coprhd_username = <username>
      coprhd_password = <password>
      coprhd_tenant = <CoprHD-Tenant-Name>
      coprhd_project = <CoprHD-Project-Name>
      coprhd_varray = <CoprHD-Virtual-Array-Name>


      [coprhddriver-fc]
      volume_driver = cinder.volume.drivers.coprhd.fc.EMCCoprHDFCDriver
      volume_backend_name = EMCCoprHDFCDriver
      coprhd_hostname = <CoprHD Host Name>
      coprhd_port = 4443
      coprhd_username = <username>
      coprhd_password = <password>
      coprhd_tenant = <CoprHD-Tenant-Name>
      coprhd_project = <CoprHD-Project-Name>
      coprhd_varray = <CoprHD-Virtual-Array-Name>


      [coprhddriver-scaleio]
      volume_driver = cinder.volume.drivers.coprhd.scaleio.EMCCoprHDScaleIODriver
      volume_backend_name = EMCCoprHDScaleIODriver
      coprhd_hostname = <CoprHD Host Name>
      coprhd_port = 4443
      coprhd_username = <username>
      coprhd_password = <password>
      coprhd_tenant = <CoprHD-Tenant-Name>
      coprhd_project = <CoprHD-Project-Name>
      coprhd_varray = <CoprHD-Virtual-Array-Name>
      coprhd_scaleio_rest_gateway_host = <ScaleIO Rest Gateway>
      coprhd_scaleio_rest_gateway_port = 443
      coprhd_scaleio_rest_server_username = <rest gateway username>
      coprhd_scaleio_rest_server_password = <rest gateway password>
      scaleio_verify_server_certificate = True or False
      scaleio_server_certificate_path = <certificate path>


#. Restart the ``cinder-volume`` service.


**Volume type creation and extra specs**

Set up the ``volume-types`` and the ``volume-type`` to ``volume-backend``
association:

.. code-block:: console

   $ openstack volume type create "CoprHD High Performance ISCSI"
   $ openstack volume type set "CoprHD High Performance ISCSI" --property CoprHD:VPOOL="High Performance ISCSI"
   $ openstack volume type set "CoprHD High Performance ISCSI" --property volume_backend_name=EMCCoprHDISCSIDriver

   $ openstack volume type create "CoprHD High Performance FC"
   $ openstack volume type set "CoprHD High Performance FC" --property CoprHD:VPOOL="High Performance FC"
   $ openstack volume type set "CoprHD High Performance FC" --property volume_backend_name=EMCCoprHDFCDriver

   $ openstack volume type create "CoprHD performance SIO"
   $ openstack volume type set "CoprHD performance SIO" --property CoprHD:VPOOL="Scaled Perf"
   $ openstack volume type set "CoprHD performance SIO" --property volume_backend_name=EMCCoprHDScaleIODriver


iSCSI driver notes
~~~~~~~~~~~~~~~~~~

* The compute host must be added to the CoprHD along with its iSCSI
  initiator.
* The iSCSI initiator must be associated with an IP network on the CoprHD.


FC driver notes
~~~~~~~~~~~~~~~

* The compute host must be attached to a VSAN or fabric discovered
  by CoprHD.
* There is no need to perform any SAN zoning operations. CoprHD will perform
  the necessary operations automatically as part of the provisioning process.


ScaleIO driver notes
~~~~~~~~~~~~~~~~~~~~

* Install the ScaleIO SDC on the compute host.
* The compute host must be added as an SDC to the ScaleIO MDS
  using the following command::

     /opt/emc/scaleio/sdc/bin/drv_cfg --add_mdm --ip <list of MDM IPs, starting with the primary MDM and separated by commas>

  For example::

     /opt/emc/scaleio/sdc/bin/drv_cfg --add_mdm --ip 10.247.78.45,10.247.78.46,10.247.78.47

  This step has to be repeated whenever the SDC (the compute host in this
  case) is rebooted.


Consistency group configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To enable support for consistency group and consistency group snapshot
operations, use a text editor to edit the file ``/etc/cinder/policy.json`` and
change the values of the following fields as specified. Upon editing the file,
restart the ``c-api`` service::

   "consistencygroup:create" : "",
   "consistencygroup:delete": "",
   "consistencygroup:get": "",
   "consistencygroup:get_all": "",
   "consistencygroup:update": "",
   "consistencygroup:create_cgsnapshot" : "",
   "consistencygroup:delete_cgsnapshot": "",
   "consistencygroup:get_cgsnapshot": "",
   "consistencygroup:get_all_cgsnapshots": "",


Names of resources in back-end storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

All resources such as volumes, consistency groups, snapshots, and consistency
group snapshots use the OpenStack display name for naming in the
back-end storage.
@@ -0,0 +1,170 @@

==============
Datera drivers
==============

Datera iSCSI driver
-------------------

The Datera Elastic Data Fabric (EDF) is scale-out storage software that
turns standard, commodity hardware into a RESTful API-driven, intent-based,
policy-controlled storage fabric for large-scale clouds. The Datera EDF
integrates seamlessly with the Block Storage service and provides storage
through the iSCSI block protocol framework.
Datera supports all of the Block Storage services.

System requirements, prerequisites, and recommendations
--------------------------------------------------------

Prerequisites
~~~~~~~~~~~~~

* Must be running compatible versions of OpenStack and Datera EDF.
  Please visit `here <https://github.com/Datera/cinder>`_ to determine the
  correct version.

* All nodes must have access to Datera EDF through the iSCSI block protocol.

* All nodes accessing the Datera EDF must have the following packages
  installed:

  * Linux I/O (LIO)
  * open-iscsi
  * open-iscsi-utils
  * wget

.. include:: ../../tables/cinder-datera.rst


Configuring the Datera volume driver
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Modify the ``/etc/cinder/cinder.conf`` file for the Block Storage service.

* Enable the Datera volume driver:

  .. code-block:: ini

     [DEFAULT]
     # ...
     enabled_backends = datera
     # ...

* Optional. Designate Datera as the default back end:

  .. code-block:: ini

     default_volume_type = datera

* Create a new section for the Datera back-end definition. The ``san_ip`` can
  be either the Datera Management Network VIP or one of the Datera iSCSI
  Access Network VIPs depending on the network segregation requirements:

  .. code-block:: ini

     volume_driver = cinder.volume.drivers.datera.DateraDriver
     san_ip = <IP_ADDR>            # The OOB Management IP of the cluster
     san_login = admin             # Your cluster admin login
     san_password = password       # Your cluster admin password
     san_is_local = true
     datera_num_replicas = 3       # Number of replicas to use for volume

Enable the Datera volume driver
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

* Verify the OpenStack control node can reach the Datera ``san_ip``:

  .. code-block:: bash

     $ ping -c 4 <san_IP>

* Start the Block Storage service on all nodes running the ``cinder-volume``
  services:

  .. code-block:: bash

     $ service cinder-volume restart

QoS support for the Datera drivers includes the ability to set the
following capabilities in QoS specs:

* **read_iops_max** -- must be a positive integer

* **write_iops_max** -- must be a positive integer

* **total_iops_max** -- must be a positive integer

* **read_bandwidth_max** -- in KB per second, must be a positive integer

* **write_bandwidth_max** -- in KB per second, must be a positive integer

* **total_bandwidth_max** -- in KB per second, must be a positive integer

.. code-block:: bash

   # Create a QoS spec
   $ openstack volume qos create --property total_iops_max=1000 --property total_bandwidth_max=2000 DateraBronze

   # Associate the QoS spec with a volume type
   $ openstack volume qos associate DateraBronze VOLUME_TYPE

   # Add additional QoS values or update existing ones
   $ openstack volume qos set --property read_bandwidth_max=500 DateraBronze

Supported operations
~~~~~~~~~~~~~~~~~~~~

* Create, delete, attach, detach, manage, unmanage, and list volumes.

* Create, list, and delete volume snapshots.

* Create a volume from a snapshot.

* Copy an image to a volume.

* Copy a volume to an image.

* Clone a volume.

* Extend a volume.

* Support for naming convention changes.

Configuring multipathing
~~~~~~~~~~~~~~~~~~~~~~~~

The following configuration is for 3.x Linux kernels; some parameters in
different Linux distributions may differ. Make the following changes
in the ``multipath.conf`` file:

.. code-block:: text

   defaults {
       checker_timer 5
   }

   devices {
       device {
           vendor "DATERA"
           product "IBLOCK"
           getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --page=0x80 --device=/dev/%n"
           path_grouping_policy group_by_prio
           path_checker tur
           prio alua
           path_selector "queue-length 0"
           hardware_handler "1 alua"
           failback 5
       }
   }

   blacklist {
       device {
           vendor ".*"
           product ".*"
       }
   }

   blacklist_exceptions {
       device {
           vendor "DATERA.*"
           product "IBLOCK.*"
       }
   }

@ -0,0 +1,319 @@
|
||||
=====================================
|
||||
Dell EMC ScaleIO Block Storage driver
|
||||
=====================================
|
||||
|
||||
ScaleIO is a software-only solution that uses existing servers' local
|
||||
disks and LAN to create a virtual SAN that has all of the benefits of
|
||||
external storage, but at a fraction of the cost and complexity. Using the
|
||||
driver, Block Storage hosts can connect to a ScaleIO Storage
|
||||
cluster.
|
||||
|
||||
This section explains how to configure and connect the block storage
|
||||
nodes to a ScaleIO storage cluster.
|
||||
|
||||
Support matrix
|
||||
~~~~~~~~~~~~~~
|
||||
|
||||
.. list-table::
|
||||
:widths: 10 25
|
||||
:header-rows: 1
|
||||
|
||||
* - ScaleIO version
|
||||
- Supported Linux operating systems
|
||||
* - 2.0
|
||||
- CentOS 6.x, CentOS 7.x, SLES 11 SP3, SLES 12, Ubuntu 14.04, Ubuntu 16.04
|
||||
|
||||
Deployment prerequisites
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
* ScaleIO Gateway must be installed and accessible in the network.
|
||||
For installation steps, refer to the Preparing the installation Manager
|
||||
and the Gateway section in ScaleIO Deployment Guide. See
|
||||
:ref:`scale_io_docs`.
|
||||
|
||||
* ScaleIO Data Client (SDC) must be installed on all OpenStack nodes.
|
||||
|
||||
.. note:: Ubuntu users must follow the specific instructions in the ScaleIO
|
||||
deployment guide for Ubuntu environments. See the Deploying on
|
||||
Ubuntu servers section in ScaleIO Deployment Guide. See
|
||||
:ref:`scale_io_docs`.
|
||||
|
||||
.. _scale_io_docs:
|
||||
|
||||
Official documentation
|
||||
----------------------
|
||||
|
||||
To find the ScaleIO documentation:
|
||||
|
||||
#. Go to the `ScaleIO product documentation page <https://support.emc.com/products/33925_ScaleIO/Documentation/?source=promotion>`_.
|
||||
|
||||
#. From the left-side panel, select the relevant version.
|
||||
|
||||
#. Search for "ScaleIO 2.0 Deployment Guide".
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
* Create, delete, clone, attach, detach, manage, and unmanage volumes
|
||||
|
||||
* Create, delete, manage, and unmanage volume snapshots
|
||||
|
||||
* Create a volume from a snapshot
|
||||
|
||||
* Copy an image to a volume
|
||||
|
||||
* Copy a volume to an image
|
||||
|
||||
* Extend a volume
|
||||
|
||||
* Get volume statistics
|
||||
|
||||
* Create, list, update, and delete consistency groups
|
||||
|
||||
* Create, list, update, and delete consistency group snapshots
|
||||
|
||||
ScaleIO QoS support
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
QoS support for the ScaleIO driver includes the ability to set the
|
||||
following capabilities in the Block Storage API
|
||||
``cinder.api.contrib.qos_specs_manage`` QoS specs extension module:
|
||||
|
||||
* ``maxIOPS``
|
||||
|
||||
* ``maxIOPSperGB``
|
||||
|
||||
* ``maxBWS``
|
||||
|
||||
* ``maxBWSperGB``
|
||||
|
||||
The QoS keys above must be created and associated with a volume type.
|
||||
For information about how to set the key-value pairs and associate
|
||||
them with a volume type, run the following commands:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack help volume qos
|
||||
|
||||
``maxIOPS``
|
||||
The QoS I/O rate limit. If not set, the I/O rate will be unlimited.
|
||||
The setting must be larger than 10.
|
||||
|
||||
``maxIOPSperGB``
|
||||
The QoS I/O rate limit.
|
||||
The limit will be calculated by the specified value multiplied by
|
||||
the volume size.
|
||||
The setting must be larger than 10.
|
||||
|
||||
``maxBWS``
|
||||
The QoS I/O bandwidth rate limit in KBs. If not set, the I/O
|
||||
bandwidth rate will be unlimited. The setting must be a multiple of 1024.
|
||||
|
||||
``maxBWSperGB``
|
||||
The QoS I/O bandwidth rate limit in KBs.
|
||||
The limit will be calculated by the specified value multiplied by
|
||||
the volume size.
|
||||
The setting must be a multiple of 1024.
|
||||
|
||||
The driver always chooses the minimum between the QoS keys value
|
||||
and the relevant calculated value of ``maxIOPSperGB`` or ``maxBWSperGB``.
|
||||
|
||||
Since the limits are per SDC, they will be applied after the volume
|
||||
is attached to an instance, and thus to a compute node/SDC.
|
||||
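As an illustration only (the QoS spec and volume type names below are
hypothetical, and the ``back-end`` consumer setting is an assumption about how
the driver consumes the specs), a capped volume type could be defined as
follows:

.. code-block:: console

   $ openstack volume qos create --consumer back-end \
       --property maxIOPS=5000 --property maxIOPSperGB=100 sio_qos
   $ openstack volume type create sio_limited
   $ openstack volume qos associate sio_qos sio_limited

With these values, an 8 GB volume gets a calculated per-GB limit of
100 * 8 = 800 IOPS, so the driver enforces min(5000, 800) = 800 IOPS.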
|
||||
ScaleIO thin provisioning support
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The Block Storage driver supports creation of thin-provisioned and
|
||||
thick-provisioned volumes.
|
||||
The provisioning type settings can be added as an extra specification
|
||||
of the volume type, as follows:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
provisioning:type = thin|thick
|
||||
|
||||
The old specification, ``sio:provisioning_type``, is deprecated.
|
||||
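For example (the volume type name below is illustrative), the extra spec can
be attached to a volume type with the standard client commands:

.. code-block:: console

   $ openstack volume type create sio_thin
   $ openstack volume type set --property provisioning:type=thin sio_thin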
|
||||
Oversubscription
|
||||
----------------
|
||||
|
||||
Configure the oversubscription ratio by adding the following parameter
|
||||
under the separate section for ScaleIO:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
sio_max_over_subscription_ratio = OVER_SUBSCRIPTION_RATIO
|
||||
|
||||
.. note::
|
||||
|
||||
The default value for ``sio_max_over_subscription_ratio``
|
||||
is 10.0.
|
||||
|
||||
Oversubscription is calculated correctly by the Block Storage service
only if the extra specification ``provisioning:type``
appears in the volume type, regardless of the default provisioning type.
The maximum oversubscription value supported for ScaleIO is 10.0.
|
||||
|
||||
Default provisioning type
|
||||
-------------------------
|
||||
|
||||
If provisioning type settings are not specified in the volume type,
|
||||
the default value is set according to the ``san_thin_provision``
|
||||
option in the configuration file. The default provisioning type
|
||||
will be ``thin`` if the option is not specified in the configuration
|
||||
file. To set the default provisioning type ``thick``, set
|
||||
the ``san_thin_provision`` option to ``false``
|
||||
in the configuration file, as follows:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
san_thin_provision = false
|
||||
|
||||
The configuration file is usually located in
|
||||
``/etc/cinder/cinder.conf``.
|
||||
For a configuration example, see:
|
||||
:ref:`cinder.conf <cg_configuration_example_emc>`.
|
||||
|
||||
ScaleIO Block Storage driver configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Edit the ``cinder.conf`` file by adding the configuration below under
a new section (for example, ``[scaleio]``) and change the ``enabled_backends``
setting (in the ``[DEFAULT]`` section) to include this new back end.
|
||||
The configuration file is usually located at
|
||||
``/etc/cinder/cinder.conf``.
|
||||
|
||||
For a configuration example, refer to the example
|
||||
:ref:`cinder.conf <cg_configuration_example_emc>`.
|
||||
|
||||
ScaleIO driver name
|
||||
-------------------
|
||||
|
||||
Configure the driver name by adding the following parameter:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.dell_emc.scaleio.driver.ScaleIODriver
|
||||
|
||||
ScaleIO MDM server IP
|
||||
---------------------
|
||||
|
||||
The ScaleIO Meta Data Manager monitors and maintains the available
|
||||
resources and permissions.
|
||||
|
||||
To retrieve the MDM server IP address, use the :command:`drv_cfg --query_mdms`
|
||||
command.
|
||||
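For example (the binary path shown is the usual SDC installation location and
may differ in your environment):

.. code-block:: console

   # /opt/emc/scaleio/sdc/bin/drv_cfg --query_mdms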
|
||||
Configure the MDM server IP address by adding the following parameter:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
san_ip = ScaleIO GATEWAY IP
|
||||
|
||||
ScaleIO Protection Domain name
|
||||
------------------------------
|
||||
|
||||
ScaleIO allows multiple Protection Domains (groups of SDSs that provide
|
||||
backup for each other).
|
||||
|
||||
To retrieve the available Protection Domains, use the command
|
||||
:command:`scli --query_all` and search for the Protection
|
||||
Domains section.
|
||||
|
||||
Configure the Protection Domain for newly created volumes by adding the
|
||||
following parameter:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
sio_protection_domain_name = ScaleIO Protection Domain
|
||||
|
||||
ScaleIO Storage Pool name
|
||||
-------------------------
|
||||
|
||||
A ScaleIO Storage Pool is a set of physical devices in a Protection
|
||||
Domain.
|
||||
|
||||
To retrieve the available Storage Pools, use the command
|
||||
:command:`scli --query_all` and search for available Storage Pools.
|
||||
|
||||
Configure the Storage Pool for newly created volumes by adding the
|
||||
following parameter:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
sio_storage_pool_name = ScaleIO Storage Pool
|
||||
|
||||
ScaleIO Storage Pools
|
||||
---------------------
|
||||
|
||||
Multiple Storage Pools and Protection Domains can be listed for use by
|
||||
the virtual machines.
|
||||
|
||||
To retrieve the available Storage Pools, use the command
|
||||
:command:`scli --query_all` and search for available Storage Pools.
|
||||
|
||||
Configure the available Storage Pools by adding the following parameter:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
sio_storage_pools = Comma-separated list of protection domain:storage pool name
|
||||
|
||||
ScaleIO user credentials
|
||||
------------------------
|
||||
|
||||
Block Storage requires a ScaleIO user with administrative
|
||||
privileges. ScaleIO recommends creating a dedicated OpenStack user
|
||||
account that has an administrative user role.
|
||||
|
||||
Refer to the ScaleIO User Guide for details on user account management.
|
||||
|
||||
Configure the user credentials by adding the following parameters:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
san_login = ScaleIO username
|
||||
|
||||
san_password = ScaleIO password
|
||||
|
||||
Multiple back ends
|
||||
~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Configuring multiple storage back ends allows you to create several back-end
|
||||
storage solutions that serve the same Compute resources.
|
||||
|
||||
When a volume is created, the scheduler selects the appropriate back end
|
||||
to handle the request, according to the specified volume type.
|
||||
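As a sketch (the volume type name is illustrative), a volume type can be tied
to the ScaleIO back end defined in the example below by matching its
``volume_backend_name``:

.. code-block:: console

   $ openstack volume type create scaleio
   $ openstack volume type set --property volume_backend_name=scaleio scaleio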
|
||||
.. _cg_configuration_example_emc:
|
||||
|
||||
Configuration example
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
**cinder.conf example file**
|
||||
|
||||
You can update the ``cinder.conf`` file by editing the necessary
|
||||
parameters as follows:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[Default]
|
||||
enabled_backends = scaleio
|
||||
|
||||
[scaleio]
|
||||
volume_driver = cinder.volume.drivers.dell_emc.scaleio.driver.ScaleIODriver
|
||||
volume_backend_name = scaleio
|
||||
san_ip = GATEWAY_IP
|
||||
sio_protection_domain_name = Default_domain
|
||||
sio_storage_pool_name = Default_pool
|
||||
sio_storage_pools = Domain1:Pool1,Domain2:Pool2
|
||||
san_login = SIO_USER
|
||||
san_password = SIO_PASSWD
|
||||
san_thin_provision = false
|
||||
|
||||
Configuration options
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The ScaleIO driver supports these configuration options:
|
||||
|
||||
.. include:: ../../tables/cinder-emc_sio.rst
|
@ -0,0 +1,339 @@
|
||||
=====================
|
||||
Dell EMC Unity driver
|
||||
=====================
|
||||
|
||||
Unity driver has been integrated in the OpenStack Block Storage project since
|
||||
the Ocata release. The driver is built on the top of Block Storage framework
|
||||
and a Dell EMC distributed Python package
|
||||
`storops <https://pypi.python.org/pypi/storops>`_.
|
||||
|
||||
Prerequisites
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
+-------------------+----------------+
|
||||
| Software | Version |
|
||||
+===================+================+
|
||||
| Unity OE | 4.1.X |
|
||||
+-------------------+----------------+
|
||||
| OpenStack | Ocata |
|
||||
+-------------------+----------------+
|
||||
| storops | 0.4.2 or newer |
|
||||
+-------------------+----------------+
|
||||
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
- Create, delete, attach, and detach volumes.
|
||||
- Create, list, and delete volume snapshots.
|
||||
- Create a volume from a snapshot.
|
||||
- Copy an image to a volume.
|
||||
- Create an image from a volume.
|
||||
- Clone a volume.
|
||||
- Extend a volume.
|
||||
- Migrate a volume.
|
||||
- Get volume statistics.
|
||||
- Efficient non-disruptive volume backup.
|
||||
|
||||
Driver configuration
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. note:: The following instructions should all be performed on Block Storage
   nodes.
|
||||
|
||||
#. Install ``storops`` from PyPI:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# pip install storops
|
||||
|
||||
|
||||
#. Add the following content into ``/etc/cinder/cinder.conf``:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
enabled_backends = unity
|
||||
|
||||
[unity]
|
||||
# Storage protocol
|
||||
storage_protocol = iSCSI
|
||||
# Unisphere IP
|
||||
san_ip = <SAN IP>
|
||||
# Unisphere username and password
|
||||
san_login = <SAN LOGIN>
|
||||
san_password = <SAN PASSWORD>
|
||||
# Volume driver name
|
||||
volume_driver = cinder.volume.drivers.dell_emc.unity.Driver
|
||||
# backend's name
|
||||
volume_backend_name = Storage_ISCSI_01
|
||||
|
||||
.. note:: These are minimal options for Unity driver, for more options,
|
||||
see `Driver options`_.
|
||||
|
||||
|
||||
.. note:: (**Optional**) If you require multipath-based data access, perform
   the following steps on both Block Storage and Compute nodes.
|
||||
|
||||
|
||||
#. Install ``sysfsutils``, ``sg3-utils`` and ``multipath-tools``:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# apt-get install multipath-tools sg3-utils sysfsutils
|
||||
|
||||
|
||||
#. (Required for FC driver in case `Auto-zoning Support`_ is disabled) Zone the
|
||||
FC ports of Compute nodes with Unity FC target ports.
|
||||
|
||||
|
||||
#. Enable Unity storage optimized multipath configuration:
|
||||
|
||||
Add the following content into ``/etc/multipath.conf``
|
||||
|
||||
.. code-block:: vim
|
||||
|
||||
blacklist {
|
||||
# Skip the files under /dev that are definitely not FC/iSCSI devices
# Different systems may need different customization
|
||||
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
|
||||
devnode "^hd[a-z][0-9]*"
|
||||
devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
|
||||
|
||||
# Skip LUNZ device from VNX/Unity
|
||||
device {
|
||||
vendor "DGC"
|
||||
product "LUNZ"
|
||||
}
|
||||
}
|
||||
|
||||
defaults {
|
||||
user_friendly_names no
|
||||
flush_on_last_del yes
|
||||
}
|
||||
|
||||
devices {
|
||||
# Device attributed for EMC CLARiiON and VNX/Unity series ALUA
|
||||
device {
|
||||
vendor "DGC"
|
||||
product ".*"
|
||||
product_blacklist "LUNZ"
|
||||
path_grouping_policy group_by_prio
|
||||
path_selector "round-robin 0"
|
||||
path_checker emc_clariion
|
||||
features "0"
|
||||
no_path_retry 12
|
||||
hardware_handler "1 alua"
|
||||
prio alua
|
||||
failback immediate
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
#. Restart the multipath service:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# service multipath-tools restart
|
||||
|
||||
|
||||
#. Enable multipath for image transfer in ``/etc/cinder/cinder.conf``.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
use_multipath_for_image_xfer = True
|
||||
|
||||
Restart the ``cinder-volume`` service to load the change.
|
||||
|
||||
#. Enable multipath for volume attach/detach in ``/etc/nova/nova.conf``.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[libvirt]
|
||||
...
|
||||
volume_use_multipath = True
|
||||
...
|
||||
|
||||
#. Restart the ``nova-compute`` service.
|
||||
|
||||
Driver options
|
||||
~~~~~~~~~~~~~~
|
||||
|
||||
.. include:: ../../tables/cinder-dell_emc_unity.rst
|
||||
|
||||
FC or iSCSI ports option
|
||||
------------------------
|
||||
|
||||
Specify the list of FC or iSCSI ports to be used to perform the IO. Wild card
|
||||
character is supported.
|
||||
For iSCSI ports, use the following format:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
unity_io_ports = spa_eth2, spb_eth2, *_eth3
|
||||
|
||||
For FC ports, use the following format:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
unity_io_ports = spa_iom_0_fc0, spb_iom_0_fc0, *_iom_0_fc1
|
||||
|
||||
List the port ID with the :command:`uemcli` command:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ uemcli /net/port/eth show -output csv
|
||||
...
|
||||
"spa_eth2","SP A Ethernet Port 2","spa","file, net, iscsi", ...
|
||||
"spb_eth2","SP B Ethernet Port 2","spb","file, net, iscsi", ...
|
||||
...
|
||||
|
||||
$ uemcli /net/port/fc show -output csv
|
||||
...
|
||||
"spa_iom_0_fc0","SP A I/O Module 0 FC Port 0","spa", ...
|
||||
"spb_iom_0_fc0","SP B I/O Module 0 FC Port 0","spb", ...
|
||||
...
|
||||
|
||||
Live migration integration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
It is suggested to configure multipath on Compute nodes for robust data
access in VM instance live migration scenarios. Once ``user_friendly_names no``
is set in the defaults section of ``/etc/multipath.conf``, Compute nodes will
use the WWID as the alias for the multipath devices.
|
||||
|
||||
To enable multipath in live migration:
|
||||
|
||||
.. note:: Make sure `Driver configuration`_ steps are performed before
|
||||
following steps.
|
||||
|
||||
#. Set multipath in ``/etc/nova/nova.conf``:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[libvirt]
|
||||
...
|
||||
volume_use_multipath = True
|
||||
...
|
||||
|
||||
Restart the ``nova-compute`` service.
|
||||
|
||||
|
||||
#. Set ``user_friendly_names no`` in ``/etc/multipath.conf``
|
||||
|
||||
.. code-block:: text
|
||||
|
||||
...
|
||||
defaults {
|
||||
user_friendly_names no
|
||||
}
|
||||
...
|
||||
|
||||
#. Restart the ``multipath-tools`` service.
|
||||
|
||||
|
||||
Thin and thick provisioning
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Only thin volume provisioning is supported in Unity volume driver.
|
||||
|
||||
|
||||
QoS support
|
||||
~~~~~~~~~~~
|
||||
|
||||
The Unity driver supports the ``maxBWS`` and ``maxIOPS`` specs for the back-end
consumer type. ``maxIOPS`` represents the ``Maximum IO/S`` absolute limit and
``maxBWS`` represents the ``Maximum Bandwidth (KBPS)`` absolute limit on the
Unity, respectively.
|
||||
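As an illustration (the QoS spec and volume type names are hypothetical), a
back-end QoS spec can be created and associated with a volume type as follows:

.. code-block:: console

   $ openstack volume qos create --consumer back-end \
       --property maxIOPS=5000 --property maxBWS=102400 unity_qos
   $ openstack volume type create unity_limited
   $ openstack volume qos associate unity_qos unity_limited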
|
||||
|
||||
Auto-zoning support
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The Unity volume driver supports auto-zoning and shares the same configuration
guide as other vendors. Refer to :ref:`fc_zone_manager`
for detailed configuration steps.
|
||||
|
||||
Solution for LUNZ device
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The EMC host team also found LUNZ on all of the hosts. EMC best practice is to
present a LUN with HLU 0 to clear any LUNZ devices, as they can cause issues on
the host. See KB `LUNZ Device <https://support.emc.com/kb/463402>`_.
|
||||
|
||||
To work around this issue, the Unity driver creates a `Dummy LUN` (if not present),
|
||||
and adds it to each host to occupy the `HLU 0` during volume attachment.
|
||||
|
||||
.. note:: This `Dummy LUN` is shared among all hosts connected to the Unity.
|
||||
|
||||
Efficient non-disruptive volume backup
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The default implementation in Block Storage for non-disruptive volume backup is
|
||||
not efficient since a cloned volume will be created during backup.
|
||||
|
||||
An effective approach to backups is to create a snapshot for the volume and
|
||||
connect this snapshot to the Block Storage host for volume backup.
|
||||
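For example (the volume name is hypothetical), an in-use volume can be backed
up non-disruptively by forcing the backup:

.. code-block:: console

   $ openstack volume backup create --force unity_vol1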
|
||||
Troubleshooting
|
||||
~~~~~~~~~~~~~~~
|
||||
|
||||
To troubleshoot a failure in an OpenStack deployment, the best way is to
enable verbose and debug logging and, at the same time, leverage the built-in
`Return request ID to caller
<https://specs.openstack.org/openstack/openstack-specs/specs/return-request-id.html>`_
mechanism to track specific Block Storage command logs.
|
||||
|
||||
|
||||
#. Enable verbose logging: set the following in ``/etc/cinder/cinder.conf``
   and restart all Block Storage services:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
|
||||
...
|
||||
|
||||
debug = True
|
||||
verbose = True
|
||||
|
||||
...
|
||||
|
||||
|
||||
If other projects (usually Compute) are also involved, set ``debug``
and ``verbose`` to ``True``.
|
||||
|
||||
#. Use ``--debug`` to trigger any problematic Block Storage operation:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# cinder --debug create --name unity_vol1 100
|
||||
|
||||
|
||||
You will see the request ID from the console, for example:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
DEBUG:keystoneauth:REQ: curl -g -i -X POST
|
||||
http://192.168.1.9:8776/v2/e50d22bdb5a34078a8bfe7be89324078/volumes -H
|
||||
"User-Agent: python-cinderclient" -H "Content-Type: application/json" -H
|
||||
"Accept: application/json" -H "X-Auth-Token:
|
||||
{SHA1}bf4a85ad64302b67a39ad7c6f695a9630f39ab0e" -d '{"volume": {"status":
|
||||
"creating", "user_id": null, "name": "unity_vol1", "imageRef": null,
|
||||
"availability_zone": null, "description": null, "multiattach": false,
|
||||
"attach_status": "detached", "volume_type": null, "metadata": {},
|
||||
"consistencygroup_id": null, "source_volid": null, "snapshot_id": null,
|
||||
"project_id": null, "source_replica": null, "size": 10}}'
|
||||
DEBUG:keystoneauth:RESP: [202] X-Compute-Request-Id:
|
||||
req-3a459e0e-871a-49f9-9796-b63cc48b5015 Content-Type: application/json
|
||||
Content-Length: 804 X-Openstack-Request-Id:
|
||||
req-3a459e0e-871a-49f9-9796-b63cc48b5015 Date: Mon, 12 Dec 2016 09:31:44 GMT
|
||||
Connection: keep-alive
|
||||
|
||||
#. Use commands like ``grep``, ``awk`` to find the error related to the Block
|
||||
Storage operations.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# grep "req-3a459e0e-871a-49f9-9796-b63cc48b5015" cinder-volume.log
|
||||
|
@ -0,0 +1,160 @@
|
||||
=============================
|
||||
Dell EqualLogic volume driver
|
||||
=============================
|
||||
|
||||
The Dell EqualLogic volume driver interacts with configured EqualLogic
|
||||
arrays and supports various operations.
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
- Create, delete, attach, and detach volumes.
|
||||
- Create, list, and delete volume snapshots.
|
||||
- Clone a volume.
|
||||
|
||||
Configuration
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
The OpenStack Block Storage service supports:
|
||||
|
||||
- Multiple instances of Dell EqualLogic Groups or Dell EqualLogic Group
  Storage Pools and multiple pools on a single array.
|
||||
|
||||
The Dell EqualLogic volume driver's ability to access the EqualLogic Group is
|
||||
dependent upon the generic block storage driver's SSH settings in the
|
||||
``/etc/cinder/cinder.conf`` file (see
|
||||
:ref:`block-storage-sample-configuration-file` for reference).
|
||||
|
||||
.. include:: ../../tables/cinder-eqlx.rst
|
||||
|
||||
Default (single-instance) configuration
|
||||
---------------------------------------
|
||||
|
||||
The following sample ``/etc/cinder/cinder.conf`` configuration lists the
|
||||
relevant settings for a typical Block Storage service using a single
|
||||
Dell EqualLogic Group:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
# Required settings
|
||||
|
||||
volume_driver = cinder.volume.drivers.dell_emc.ps.PSSeriesISCSIDriver
|
||||
san_ip = IP_EQLX
|
||||
san_login = SAN_UNAME
|
||||
san_password = SAN_PW
|
||||
eqlx_group_name = EQLX_GROUP
|
||||
eqlx_pool = EQLX_POOL
|
||||
|
||||
# Optional settings
|
||||
|
||||
san_thin_provision = true|false
|
||||
use_chap_auth = true|false
|
||||
chap_username = EQLX_UNAME
|
||||
chap_password = EQLX_PW
|
||||
eqlx_cli_max_retries = 5
|
||||
san_ssh_port = 22
|
||||
ssh_conn_timeout = 30
|
||||
san_private_key = SAN_KEY_PATH
|
||||
ssh_min_pool_conn = 1
|
||||
ssh_max_pool_conn = 5
|
||||
|
||||
In this example, replace the following variables accordingly:
|
||||
|
||||
IP_EQLX
|
||||
The IP address used to reach the Dell EqualLogic Group through SSH.
|
||||
This field has no default value.
|
||||
|
||||
SAN_UNAME
|
||||
The user name used to log in to the Group manager via SSH at the
|
||||
``san_ip``. Default user name is ``grpadmin``.
|
||||
|
||||
SAN_PW
|
||||
The corresponding password of SAN_UNAME. Not used when
|
||||
``san_private_key`` is set. Default password is ``password``.
|
||||
|
||||
EQLX_GROUP
|
||||
The group to be used for a pool where the Block Storage service will
|
||||
create volumes and snapshots. Default group is ``group-0``.
|
||||
|
||||
EQLX_POOL
|
||||
The pool where the Block Storage service will create volumes and
|
||||
snapshots. Default pool is ``default``. This option cannot be used
|
||||
for multiple pools utilized by the Block Storage service on a single
|
||||
Dell EqualLogic Group.
|
||||
|
||||
EQLX_UNAME
|
||||
The CHAP login account for each volume in a pool, if
|
||||
``use_chap_auth`` is set to ``true``. Default account name is
|
||||
``chapadmin``.
|
||||
|
||||
EQLX_PW
|
||||
The corresponding password of EQLX_UNAME. The default password is
|
||||
randomly generated in hexadecimal, so you must set this password
|
||||
manually.
|
||||
|
||||
SAN_KEY_PATH (optional)
|
||||
The filename of the private key used for SSH authentication. This
|
||||
provides password-less login to the EqualLogic Group. Not used when
|
||||
``san_password`` is set. There is no default value.
|
||||
|
||||
In addition, enable thin provisioning for SAN volumes using the default
|
||||
``san_thin_provision = true`` setting.
|
||||
|
||||
Multiple back-end configuration
|
||||
-------------------------------
|
||||
|
||||
The following example shows the typical configuration for a Block
|
||||
Storage service that uses two Dell EqualLogic back ends:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
enabled_backends = backend1,backend2
|
||||
san_ssh_port = 22
|
||||
ssh_conn_timeout = 30
|
||||
san_thin_provision = true
|
||||
|
||||
[backend1]
|
||||
volume_driver = cinder.volume.drivers.dell_emc.ps.PSSeriesISCSIDriver
|
||||
volume_backend_name = backend1
|
||||
san_ip = IP_EQLX1
|
||||
san_login = SAN_UNAME
|
||||
san_password = SAN_PW
|
||||
eqlx_group_name = EQLX_GROUP
|
||||
eqlx_pool = EQLX_POOL
|
||||
|
||||
[backend2]
|
||||
volume_driver = cinder.volume.drivers.dell_emc.ps.PSSeriesISCSIDriver
|
||||
volume_backend_name = backend2
|
||||
san_ip = IP_EQLX2
|
||||
san_login = SAN_UNAME
|
||||
san_password = SAN_PW
|
||||
eqlx_group_name = EQLX_GROUP
|
||||
eqlx_pool = EQLX_POOL
|
||||
|
||||
In this example:
|
||||
|
||||
- Thin provisioning for SAN volumes is enabled
|
||||
(``san_thin_provision = true``). This is recommended when setting up
|
||||
Dell EqualLogic back ends.
|
||||
|
||||
- Each Dell EqualLogic back-end configuration (``[backend1]`` and
|
||||
``[backend2]``) has the same required settings as a single back-end
|
||||
configuration, with the addition of ``volume_backend_name``.
|
||||
|
||||
- The ``san_ssh_port`` option is set to its default value, 22. This
|
||||
option sets the port used for SSH.
|
||||
|
||||
- The ``ssh_conn_timeout`` option is also set to its default value, 30.
|
||||
This option sets the timeout in seconds for CLI commands over SSH.
|
||||
|
||||
- The ``IP_EQLX1`` and ``IP_EQLX2`` refer to the IP addresses used to
|
||||
reach the Dell EqualLogic Group of ``backend1`` and ``backend2``
|
||||
through SSH, respectively.
|
||||
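As a sketch (the volume type names are illustrative), each back end can then
be exposed through its own volume type so that users can target a specific
group:

.. code-block:: console

   $ openstack volume type create eqlx_backend1
   $ openstack volume type set --property volume_backend_name=backend1 eqlx_backend1
   $ openstack volume type create eqlx_backend2
   $ openstack volume type set --property volume_backend_name=backend2 eqlx_backend2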
|
||||
For information on configuring multiple back ends, see `Configure a
|
||||
multiple-storage back
|
||||
end <https://docs.openstack.org/admin-guide/blockstorage-multi-backend.html>`__.
|
@ -0,0 +1,361 @@
|
||||
===================================================
|
||||
Dell Storage Center Fibre Channel and iSCSI drivers
|
||||
===================================================
|
||||
|
||||
The Dell Storage Center volume driver interacts with configured Storage
|
||||
Center arrays.
|
||||
|
||||
The Dell Storage Center driver manages Storage Center arrays through
|
||||
the Dell Storage Manager (DSM). DSM connection settings and Storage
|
||||
Center options are defined in the ``cinder.conf`` file.
|
||||
|
||||
Prerequisite: Dell Storage Manager 2015 R1 or later must be used.
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The Dell Storage Center volume driver provides the following Cinder
|
||||
volume operations:
|
||||
|
||||
- Create, delete, attach (map), and detach (unmap) volumes.
|
||||
- Create, list, and delete volume snapshots.
|
||||
- Create a volume from a snapshot.
|
||||
- Copy an image to a volume.
|
||||
- Copy a volume to an image.
|
||||
- Clone a volume.
|
||||
- Extend a volume.
|
||||
- Create, delete, list and update a consistency group.
|
||||
- Create, delete, and list consistency group snapshots.
|
||||
- Manage an existing volume.
|
||||
- Failover-host for replicated back ends.
|
||||
- Create a replication using Live Volume.
|
||||
|
||||
Extra spec options
|
||||
~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Volume type extra specs can be used to enable a variety of Dell Storage
Center options, including selecting Storage Profiles and Replay Profiles,
enabling replication, and setting replication options such as Live Volume
and Active Replay replication.
|
||||
|
||||
Storage Profiles control how Storage Center manages volume data. For a
|
||||
given volume, the selected Storage Profile dictates which disk tier
|
||||
accepts initial writes, as well as how data progression moves data
|
||||
between tiers to balance performance and cost. Predefined Storage
|
||||
Profiles are the most effective way to manage data in Storage Center.
|
||||
|
||||
By default, if no Storage Profile is specified in the volume extra
|
||||
specs, the default Storage Profile for the user account configured for
|
||||
the Block Storage driver is used. To use a Storage Profile other than the
default, set the extra spec key ``storagetype:storageprofile`` to the name
of the Storage Profile on the Storage Center.
|
||||
|
||||
For ease of use from the command line, spaces in Storage Profile names
|
||||
are ignored. As an example, here is how to define two volume types using
|
||||
the ``High Priority`` and ``Low Priority`` Storage Profiles:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type create "GoldVolumeType"
|
||||
$ openstack volume type set --property storagetype:storageprofile=highpriority "GoldVolumeType"
|
||||
$ openstack volume type create "BronzeVolumeType"
|
||||
$ openstack volume type set --property storagetype:storageprofile=lowpriority "BronzeVolumeType"
|
||||
|
||||
Replay Profiles control how often the Storage Center takes a replay of a
|
||||
given volume and how long those replays are kept. The default profile is
|
||||
the ``daily`` profile that sets the replay to occur once a day and to
|
||||
persist for one week.
|
||||
|
||||
To use Replay Profiles other than the default ``daily`` profile, set the
extra spec key ``storagetype:replayprofiles`` to the name of the Replay
Profile or profiles on the Storage Center.
|
||||
|
||||
As an example, here is how to define a volume type using the ``hourly``
|
||||
Replay Profile and another specifying both ``hourly`` and the default
|
||||
``daily`` profile:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type create "HourlyType"
|
||||
$ openstack volume type set --property storagetype:replayprofiles=hourly "HourlyType"
|
||||
$ openstack volume type create "HourlyAndDailyType"
|
||||
$ openstack volume type set --property storagetype:replayprofiles=hourly,daily "HourlyAndDailyType"
|
||||
|
||||
Note the comma-separated string for the ``HourlyAndDailyType``.
|
||||
|
||||
Replication for a given volume type is enabled via the extra spec
|
||||
``replication_enabled``.
|
||||
|
||||
To create a volume type that specifies only replication enabled back ends:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type create "ReplicationType"
|
||||
$ openstack volume type set --property replication_enabled='<is> True' "ReplicationType"
|
||||
|
||||
Extra specs can be used to configure replication. In addition to the Replay
Profiles above, ``replication:activereplay`` can be set to enable replication
of the volume's active replay, and the replication type can be changed to
synchronous by setting the ``replication_type`` extra spec.
|
||||
|
||||
To create a volume type that enables replication of the active replay:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type create "ReplicationType"
|
||||
$ openstack volume type set --property replication_enabled='<is> True' "ReplicationType"
$ openstack volume type set --property replication:activereplay='<is> True' "ReplicationType"
|
||||
|
||||
To create a volume type that enables synchronous replication :
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type create "ReplicationType"
|
||||
$ openstack volume type set --property replication_enabled='<is> True' "ReplicationType"
$ openstack volume type set --property replication_type='<is> sync' "ReplicationType"
|
||||
|
||||
To create a volume type that enables replication using Live Volume:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type create "ReplicationType"
|
||||
$ openstack volume type set --property replication_enabled='<is> True' "ReplicationType"
$ openstack volume type set --property replication:livevolume='<is> True' "ReplicationType"
|
||||
|
||||
If QOS options are enabled on the Storage Center, they can be set via extra
specs. The name of the Volume QOS can be specified via the
``storagetype:volumeqos`` extra spec. Likewise, the name of the Group QOS to
use can be specified via the ``storagetype:groupqos`` extra spec. Volumes
created with these extra specs set will be added to the specified QOS groups.
|
||||
|
||||
To create a volume type that sets both Volume and Group QOS:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type create "StorageCenterQOS"
|
||||
$ openstack volume type set --property 'storagetype:volumeqos'='unlimited' "StorageCenterQOS"
$ openstack volume type set --property 'storagetype:groupqos'='limited' "StorageCenterQOS"
|
||||
|
||||
Data reduction profiles can be specified in the
|
||||
``storagetype:datareductionprofile`` extra spec. Available options are None,
|
||||
Compression, and Deduplication. Note that not all options are available on
|
||||
every Storage Center.
|
||||
|
||||
To create volume types that support no compression, compression, and
|
||||
deduplication and compression respectively:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type create "NoCompressionType"
|
||||
$ openstack volume type set --property 'storagetype:datareductionprofile'='None' "NoCompressionType"
$ openstack volume type create "CompressedType"
$ openstack volume type set --property 'storagetype:datareductionprofile'='Compression' "CompressedType"
$ openstack volume type create "DedupType"
$ openstack volume type set --property 'storagetype:datareductionprofile'='Deduplication' "DedupType"
|
||||
|
||||
Note: The default is no compression.
|
||||
|
||||
iSCSI configuration
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Use the following instructions to update the configuration file for iSCSI:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
default_volume_type = delliscsi
|
||||
enabled_backends = delliscsi
|
||||
|
||||
[delliscsi]
|
||||
# Name to give this storage back-end
|
||||
volume_backend_name = delliscsi
|
||||
# The iSCSI driver to load
|
||||
volume_driver = cinder.volume.drivers.dell.dell_storagecenter_iscsi.DellStorageCenterISCSIDriver
|
||||
# IP address of DSM
|
||||
san_ip = 172.23.8.101
|
||||
# DSM user name
|
||||
san_login = Admin
|
||||
# DSM password
|
||||
san_password = secret
|
||||
# The Storage Center serial number to use
|
||||
dell_sc_ssn = 64702
|
||||
|
||||
# ==Optional settings==
|
||||
|
||||
# The DSM API port
|
||||
dell_sc_api_port = 3033
|
||||
# Server folder to place new server definitions
|
||||
dell_sc_server_folder = devstacksrv
|
||||
# Volume folder to place created volumes
|
||||
dell_sc_volume_folder = devstackvol/Cinder
|
||||
|
||||
Fibre Channel configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Use the following instructions to update the configuration file for fibre
|
||||
channel:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
default_volume_type = dellfc
|
||||
enabled_backends = dellfc
|
||||
|
||||
[dellfc]
|
||||
# Name to give this storage back-end
|
||||
volume_backend_name = dellfc
|
||||
# The FC driver to load
|
||||
volume_driver = cinder.volume.drivers.dell.dell_storagecenter_fc.DellStorageCenterFCDriver
|
||||
# IP address of the DSM
|
||||
san_ip = 172.23.8.101
|
||||
# DSM user name
|
||||
san_login = Admin
|
||||
# DSM password
|
||||
san_password = secret
|
||||
# The Storage Center serial number to use
|
||||
dell_sc_ssn = 64702
|
||||
|
||||
# ==Optional settings==
|
||||
|
||||
# The DSM API port
|
||||
dell_sc_api_port = 3033
|
||||
# Server folder to place new server definitions
|
||||
dell_sc_server_folder = devstacksrv
|
||||
# Volume folder to place created volumes
|
||||
dell_sc_volume_folder = devstackvol/Cinder
|
||||
|
||||
Dual DSM
|
||||
~~~~~~~~
|
||||
|
||||
It is possible to specify a secondary DSM to use in case the primary DSM fails.
|
||||
|
||||
Configuration is done through the ``cinder.conf`` file. Both DSMs have to be
configured to manage the same set of Storage Centers for this back end. That
means the ``dell_sc_ssn`` and any Storage Centers used for replication or Live
Volume.
|
||||
|
||||
Add network and credential information to the backend to enable Dual DSM.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[dell]
|
||||
# The IP address and port of the secondary DSM.
|
||||
secondary_san_ip = 192.168.0.102
|
||||
secondary_sc_api_port = 3033
|
||||
# Specify credentials for the secondary DSM.
|
||||
secondary_san_login = Admin
|
||||
secondary_san_password = secret
|
||||
|
||||
The driver will use the primary until a failure. At that point it will attempt
|
||||
to use the secondary. It will continue to use the secondary until the volume
|
||||
service is restarted or the secondary fails, at which point it will attempt to
|
||||
use the primary.
|
||||
|
||||
Replication configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Add the following to the back-end specification to specify another Storage
|
||||
Center to replicate to.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[dell]
|
||||
replication_device = target_device_id: 65495, qosnode: cinderqos
|
||||
|
||||
The ``target_device_id`` is the SSN of the remote Storage Center and the
|
||||
``qosnode`` is the QoS Node setup between the two Storage Centers.
|
||||
|
||||
Note that more than one ``replication_device`` line can be added. This will
|
||||
slow things down, however.
|
||||
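For example (the second ``target_device_id`` below is hypothetical), two
replication targets would be listed as:

.. code-block:: ini

   [dell]
   replication_device = target_device_id: 65495, qosnode: cinderqos
   replication_device = target_device_id: 65496, qosnode: cinderqos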
|
||||
A volume is only replicated if the volume is of a volume-type that has
|
||||
the extra spec ``replication_enabled`` set to ``<is> True``.
|
||||
|
||||
Replication notes
|
||||
~~~~~~~~~~~~~~~~~
|
||||
|
||||
This driver supports both standard replication and Live Volume (if supported
|
||||
and licensed). The main difference is that a VM attached to a Live Volume is
|
||||
mapped to both Storage Centers. In the case of a failure of the primary, Live
Volume still requires a failover-host operation to move control of the volume
to the second controller.
|
||||
|
||||
Existing mappings should work and not require the instance to be remapped but
|
||||
it might need to be rebooted.
|
||||
|
||||
Live Volume is more resource intensive than replication. One should be sure
|
||||
to plan accordingly.
|
||||
|
||||
Failback
|
||||
~~~~~~~~
|
||||
|
||||
The failover-host command is designed for the case where the primary system is
|
||||
not coming back. If it has been executed and the primary has been restored it
|
||||
is possible to attempt a failback.
|
||||
|
||||
Simply specify ``default`` as the ``backend_id``.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ cinder failover-host cinder@delliscsi --backend_id default
|
||||
|
||||
Non-trivial heavy lifting is done by this command. It attempts to recover as
best it can, but if things have diverged too far it can only do so much. It is
also a one-time-only command, so do not reboot or restart the service in the
middle of it.
|
||||
|
||||
Failover and failback are significant operations under OpenStack Cinder. Be
|
||||
sure to consult with support before attempting.
|
||||
|
||||
Server type configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This option allows one to set a default Server OS type to use when creating
|
||||
a server definition on the Dell Storage Center.
|
||||
|
||||
When attaching a volume to a node, the Dell Storage Center driver creates a
server definition on the storage array. This definition includes a Server OS
|
||||
type. The type used by the Dell Storage Center cinder driver is
|
||||
"Red Hat Linux 6.x". This is a modern operating system definition that supports
|
||||
all the features of an OpenStack node.
|
||||
|
||||
Add the following to the back-end specification to specify the Server OS to use
|
||||
when creating a server definition. The server type used must come from the
drop-down list in the DSM.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[dell]
|
||||
dell_server_os = 'Red Hat Linux 7.x'
|
||||
|
||||
Note that this server definition is created once. Changing this setting after
|
||||
the fact will not change an existing definition. The selected Server OS does
|
||||
not have to match the actual OS used on the node.
|
||||
|
||||
Excluding a domain
|
||||
~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This option excludes a Storage Center ISCSI fault domain from the ISCSI
|
||||
properties returned by the initialize_connection call. This only applies to
|
||||
the ISCSI driver.
|
||||
|
||||
Add the ``excluded_domain_ip`` option into the back-end config for each fault domain
|
||||
to be excluded. This option takes the specified Target IPv4 Address listed
|
||||
under the fault domain. Older versions of DSM (EM) may list this as the Well
|
||||
Known IP Address.
|
||||
|
||||
Add the following to the back-end specification to exclude the domains at
|
||||
172.20.25.15 and 172.20.26.15.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[dell]
|
||||
excluded_domain_ip=172.20.25.15
|
||||
excluded_domain_ip=172.20.26.15
|
||||
|
||||
Driver options
|
||||
~~~~~~~~~~~~~~
|
||||
|
||||
The following table contains the configuration options specific to the
|
||||
Dell Storage Center volume driver.
|
||||
|
||||
.. include:: ../../tables/cinder-dellsc.rst
|
@ -0,0 +1,168 @@
|
||||
===================================================
|
||||
Dot Hill AssuredSAN Fibre Channel and iSCSI drivers
|
||||
===================================================
|
||||
|
||||
The ``DotHillFCDriver`` and ``DotHillISCSIDriver`` volume drivers allow
|
||||
Dot Hill arrays to be used for block storage in OpenStack deployments.
|
||||
|
||||
System requirements
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
To use the Dot Hill drivers, the following are required:
|
||||
|
||||
- Dot Hill AssuredSAN array with:
|
||||
|
||||
- iSCSI or FC host interfaces
|
||||
- G22x firmware or later
|
||||
- Appropriate licenses for the snapshot and copy volume features
|
||||
|
||||
- Network connectivity between the OpenStack host and the array
|
||||
management interfaces
|
||||
|
||||
- HTTPS or HTTP must be enabled on the array
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
- Create, delete, attach, and detach volumes.
|
||||
- Create, list, and delete volume snapshots.
|
||||
- Create a volume from a snapshot.
|
||||
- Copy an image to a volume.
|
||||
- Copy a volume to an image.
|
||||
- Clone a volume.
|
||||
- Extend a volume.
|
||||
- Migrate a volume with back-end assistance.
|
||||
- Retype a volume.
|
||||
- Manage and unmanage a volume.
|
||||
|
||||
Configuring the array
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
#. Verify that the array can be managed via an HTTPS connection. HTTP can
|
||||
also be used if ``dothill_api_protocol=http`` is placed into the
|
||||
appropriate sections of the ``cinder.conf`` file.
|
||||
|
||||
Confirm that virtual pools A and B are present if you plan to use
|
||||
virtual pools for OpenStack storage.
|
||||
|
||||
If you plan to use vdisks instead of virtual pools, create or identify
|
||||
one or more vdisks to be used for OpenStack storage; typically this will
|
||||
mean creating or setting aside one disk group for each of the A and B
|
||||
controllers.
|
||||
|
||||
#. Edit the ``cinder.conf`` file to define a storage back-end entry for
|
||||
each storage pool on the array that will be managed by OpenStack. Each
|
||||
entry consists of a unique section name, surrounded by square brackets,
|
||||
followed by options specified in ``key=value`` format.
|
||||
|
||||
- The ``dothill_backend_name`` value specifies the name of the storage
|
||||
pool or vdisk on the array.
|
||||
|
||||
- The ``volume_backend_name`` option value can be a unique value, if
|
||||
you wish to be able to assign volumes to a specific storage pool on
|
||||
the array, or a name that is shared among multiple storage pools to
|
||||
let the volume scheduler choose where new volumes are allocated.
|
||||
|
||||
- The rest of the options will be repeated for each storage pool in a
|
||||
given array: the appropriate Cinder driver name; IP address or
|
||||
hostname of the array management interface; the username and password
|
||||
of an array user account with ``manage`` privileges; and the iSCSI IP
|
||||
addresses for the array if using the iSCSI transport protocol.
|
||||
|
||||
In the examples below, two back ends are defined, one for pool A and one
|
||||
for pool B, and a common ``volume_backend_name`` is used so that a
|
||||
single volume type definition can be used to allocate volumes from both
|
||||
pools.
|
||||
|
||||
|
||||
**iSCSI example back-end entries**
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[pool-a]
|
||||
dothill_backend_name = A
|
||||
volume_backend_name = dothill-array
|
||||
volume_driver = cinder.volume.drivers.dothill.dothill_iscsi.DotHillISCSIDriver
|
||||
san_ip = 10.1.2.3
|
||||
san_login = manage
|
||||
san_password = !manage
|
||||
dothill_iscsi_ips = 10.2.3.4,10.2.3.5
|
||||
|
||||
[pool-b]
|
||||
dothill_backend_name = B
|
||||
volume_backend_name = dothill-array
|
||||
volume_driver = cinder.volume.drivers.dothill.dothill_iscsi.DotHillISCSIDriver
|
||||
san_ip = 10.1.2.3
|
||||
san_login = manage
|
||||
san_password = !manage
|
||||
dothill_iscsi_ips = 10.2.3.4,10.2.3.5
|
||||
|
||||
**Fibre Channel example back-end entries**
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[pool-a]
|
||||
dothill_backend_name = A
|
||||
volume_backend_name = dothill-array
|
||||
volume_driver = cinder.volume.drivers.dothill.dothill_fc.DotHillFCDriver
|
||||
san_ip = 10.1.2.3
|
||||
san_login = manage
|
||||
san_password = !manage
|
||||
|
||||
[pool-b]
|
||||
dothill_backend_name = B
|
||||
volume_backend_name = dothill-array
|
||||
volume_driver = cinder.volume.drivers.dothill.dothill_fc.DotHillFCDriver
|
||||
san_ip = 10.1.2.3
|
||||
san_login = manage
|
||||
san_password = !manage
|
||||
|
||||
#. If any ``dothill_backend_name`` value refers to a vdisk rather than a
|
||||
virtual pool, add an additional statement
|
||||
``dothill_backend_type = linear`` to that back-end entry.
|
||||
|
||||
#. If HTTPS is not enabled in the array, include
|
||||
``dothill_api_protocol = http`` in each of the back-end definitions.
|
||||
|
||||
#. If HTTPS is enabled, you can enable certificate verification with the
|
||||
option ``dothill_verify_certificate=True``. You may also use the
|
||||
``dothill_verify_certificate_path`` parameter to specify the path to a
|
||||
CA\_BUNDLE file containing CAs other than those in the default list.
|
||||
|
||||
#. Modify the ``[DEFAULT]`` section of the ``cinder.conf`` file to add an
|
||||
``enabled_backends`` parameter specifying the back-end entries you added,
|
||||
and a ``default_volume_type`` parameter specifying the name of a volume
|
||||
type that you will create in the next step.
|
||||
|
||||
**Example of [DEFAULT] section changes**
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
# ...
|
||||
enabled_backends = pool-a,pool-b
|
||||
default_volume_type = dothill
|
||||
# ...
|
||||
|
||||
#. Create a new volume type for each distinct ``volume_backend_name`` value
|
||||
that you added to ``cinder.conf``. The example below assumes that the same
|
||||
``volume_backend_name=dothill-array`` option was specified in all of the
|
||||
entries, and specifies that the volume type ``dothill`` can be used to
|
||||
allocate volumes from any of them.
|
||||
|
||||
**Example of creating a volume type**
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type create dothill
|
||||
$ openstack volume type set --property volume_backend_name=dothill-array dothill
|
||||
|
||||
#. After modifying ``cinder.conf``, restart the ``cinder-volume`` service.
|
||||
|
||||
Driver-specific options
|
||||
~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The following table contains the configuration options that are specific
|
||||
to the Dot Hill drivers.
|
||||
|
||||
.. include:: ../../tables/cinder-dothill.rst
|
1121
doc/source/config-reference/block-storage/drivers/emc-vnx-driver.rst
Normal file
File diff suppressed because it is too large
@ -0,0 +1,251 @@
|
||||
==============================================
|
||||
EMC XtremIO Block Storage driver configuration
|
||||
==============================================
|
||||
|
||||
The high performance XtremIO All Flash Array (AFA) offers Block Storage
|
||||
services to OpenStack. Using the driver, OpenStack Block Storage hosts
|
||||
can connect to an XtremIO Storage cluster.
|
||||
|
||||
This section explains how to configure and connect the block
|
||||
storage nodes to an XtremIO storage cluster.
|
||||
|
||||
Support matrix
|
||||
~~~~~~~~~~~~~~
|
||||
|
||||
XtremIO version 4.x is supported.
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
- Create, delete, clone, attach, and detach volumes.
|
||||
|
||||
- Create and delete volume snapshots.
|
||||
|
||||
- Create a volume from a snapshot.
|
||||
|
||||
- Copy an image to a volume.
|
||||
|
||||
- Copy a volume to an image.
|
||||
|
||||
- Extend a volume.
|
||||
|
||||
- Manage and unmanage a volume.
|
||||
|
||||
- Manage and unmanage a snapshot.
|
||||
|
||||
- Get volume statistics.
|
||||
|
||||
- Create, modify, delete, and list consistency groups.
|
||||
|
||||
- Create, modify, delete, and list snapshots of consistency groups.
|
||||
|
||||
- Create consistency group from consistency group or consistency group
|
||||
snapshot.
|
||||
|
||||
- Volume Migration (host assisted)
|
||||
|
||||
XtremIO Block Storage driver configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Edit the ``cinder.conf`` file by adding the configuration below under
the ``[DEFAULT]`` section of the file in case of a single back end, or
under a separate section in case of multiple back ends (for example,
``[XTREMIO]``). The configuration file is usually located at
``/etc/cinder/cinder.conf``.
|
||||
|
||||
.. include:: ../../tables/cinder-emc_xtremio.rst
|
||||
|
||||
For a configuration example, refer to the configuration
|
||||
:ref:`emc_extremio_configuration_example`.
|
||||
|
||||
XtremIO driver name
|
||||
-------------------
|
||||
|
||||
Configure the driver name by setting the following parameter in the
|
||||
``cinder.conf`` file:
|
||||
|
||||
- For iSCSI:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOISCSIDriver
|
||||
|
||||
- For Fibre Channel:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver
|
||||
|
||||
XtremIO management server (XMS) IP
|
||||
----------------------------------
|
||||
|
||||
To retrieve the management IP, use the :command:`show-xms` CLI command.
|
||||
|
||||
Configure the management IP by adding the following parameter:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
san_ip = XMS Management IP
|
||||
|
||||
XtremIO cluster name
|
||||
--------------------
|
||||
|
||||
In XtremIO version 4.0, a single XMS can manage multiple cluster back ends. In
|
||||
such setups, the administrator is required to specify the cluster name (in
|
||||
addition to the XMS IP). Each cluster must be defined as a separate back end.
|
||||
|
||||
To retrieve the cluster name, run the :command:`show-clusters` CLI command.
|
||||
|
||||
Configure the cluster name by adding the following parameter:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
xtremio_cluster_name = Cluster-Name
|
||||
|
||||
.. note::
|
||||
|
||||
When a single cluster is managed in XtremIO version 4.0, the cluster name is
|
||||
not required.
|
||||
|
||||
XtremIO user credentials
|
||||
------------------------
|
||||
|
||||
OpenStack Block Storage requires an XtremIO XMS user with administrative
|
||||
privileges. XtremIO recommends creating a dedicated OpenStack user account that
|
||||
holds an administrative user role.
|
||||
|
||||
Refer to the XtremIO User Guide for details on user account management.
|
||||
|
||||
Create an XMS account using either the XMS GUI or the
|
||||
:command:`add-user-account` CLI command.
|
||||
|
||||
Configure the user credentials by adding the following parameters:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
san_login = XMS username
|
||||
san_password = XMS username password
|
||||
|
||||
Multiple back ends
|
||||
~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Configuring multiple storage back ends enables you to create several back-end
|
||||
storage solutions that serve the same OpenStack Compute resources.
|
||||
|
||||
When a volume is created, the scheduler selects the appropriate back end to
|
||||
handle the request, according to the specified volume type.
|
||||
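As a sketch (the volume type name is illustrative), a volume type can be bound
to the back end from the configuration example below by matching its
``volume_backend_name``:

.. code-block:: console

   $ openstack volume type create xtremio
   $ openstack volume type set --property volume_backend_name=XtremIOAFA xtremio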
|
||||
Setting thin provisioning and multipathing parameters
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
To support thin provisioning and multipathing in the XtremIO Array, the
|
||||
following parameters from the Nova and Cinder configuration files should be
|
||||
modified as follows:
|
||||
|
||||
- Thin Provisioning
|
||||
|
||||
All XtremIO volumes are thin provisioned. The default value of 20 should be
|
||||
maintained for the ``max_over_subscription_ratio`` parameter.
|
||||
|
||||
The ``use_cow_images`` parameter in the ``nova.conf`` file should be set to
|
||||
``False`` as follows:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
use_cow_images = False
|
||||
|
||||
- Multipathing
|
||||
|
||||
The ``use_multipath_for_image_xfer`` parameter in the ``cinder.conf`` file
|
||||
should be set to ``True`` as follows:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
use_multipath_for_image_xfer = True
|
||||
|
||||
|
||||
Image service optimization
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Limit the number of copies (XtremIO snapshots) taken from each image cache.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
xtremio_volumes_per_glance_cache = 100
|
||||
|
||||
The default value is ``100``. A value of ``0`` ignores the limit and defers to
|
||||
the array maximum as the effective limit.
|
||||
|
||||
SSL certification
|
||||
~~~~~~~~~~~~~~~~~
|
||||
|
||||
To enable SSL certificate validation, modify the following option in the
|
||||
``cinder.conf`` file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
driver_ssl_cert_verify = true
|
||||
|
||||
By default, SSL certificate validation is disabled.
|
||||
|
||||
To specify a non-default path to the ``CA_Bundle`` file or a directory with
|
||||
certificates of trusted CAs:
|
||||
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
driver_ssl_cert_path = Certificate path
|
||||
|
||||
Configuring CHAP
|
||||
~~~~~~~~~~~~~~~~
|
||||
|
||||
The XtremIO Block Storage driver supports CHAP initiator authentication and
|
||||
discovery.
|
||||
|
||||
If CHAP initiator authentication is required, set the CHAP
|
||||
Authentication mode to initiator.
|
||||
|
||||
To set the CHAP initiator mode using CLI, run the following XMCLI command:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ modify-chap chap-authentication-mode=initiator
|
||||
|
||||
If CHAP initiator discovery is required, set the CHAP discovery mode to
|
||||
initiator.
|
||||
|
||||
To set the CHAP initiator discovery mode using CLI, run the following XMCLI
|
||||
command:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ modify-chap chap-discovery-mode=initiator
|
||||
|
||||
The CHAP initiator modes can also be set via the XMS GUI.
|
||||
|
||||
Refer to XtremIO User Guide for details on CHAP configuration via GUI and CLI.
|
||||
|
||||
The CHAP initiator authentication and discovery credentials (username and
|
||||
password) are generated automatically by the Block Storage driver. Therefore,
|
||||
there is no need to configure the initial CHAP credentials manually in XMS.
|
||||
|
||||
.. _emc_extremio_configuration_example:
|
||||
|
||||
Configuration example
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
You can update the ``cinder.conf`` file by editing the necessary parameters as
|
||||
follows:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[Default]
|
||||
enabled_backends = XtremIO
|
||||
|
||||
[XtremIO]
|
||||
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver
|
||||
san_ip = XMS_IP
|
||||
xtremio_cluster_name = Cluster01
|
||||
san_login = XMS_USER
|
||||
san_password = XMS_PASSWD
|
||||
volume_backend_name = XtremIOAFA
|
@ -0,0 +1,117 @@
|
||||
=======================================================
|
||||
FalconStor FSS Storage Fibre Channel and iSCSI drivers
|
||||
=======================================================
|
||||
|
||||
The ``FSSISCSIDriver`` and ``FSSFCDriver`` drivers run volume operations
|
||||
by communicating with the FalconStor FSS storage system over HTTP.
|
||||
|
||||
System requirements
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
To use the FalconStor FSS drivers, the following are required:
|
||||
|
||||
- FalconStor FSS storage with:
|
||||
|
||||
- iSCSI or FC host interfaces
|
||||
|
||||
- FSS-8.00-8865 or later
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The FalconStor volume driver provides the following Cinder
|
||||
volume operations:
|
||||
|
||||
* Create, delete, attach, and detach volumes.
|
||||
|
||||
* Create, list, and delete volume snapshots.
|
||||
|
||||
* Create a volume from a snapshot.
|
||||
|
||||
* Clone a volume.
|
||||
|
||||
* Extend a volume.
|
||||
|
||||
* Get volume statistics.
|
||||
|
||||
* Create and delete consistency group.
|
||||
|
||||
* Create and delete consistency group snapshots.
|
||||
|
||||
* Modify consistency groups.
|
||||
|
||||
* Manage and unmanage a volume.
|
||||
|
||||
iSCSI configuration
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Use the following instructions to update the configuration file for iSCSI:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
default_volume_type = FSS
|
||||
enabled_backends = FSS
|
||||
|
||||
[FSS]
|
||||
|
||||
# IP address of FSS server
|
||||
san_ip = 172.23.0.1
|
||||
# FSS server user name
|
||||
san_login = Admin
|
||||
# FSS server password
|
||||
san_password = secret
|
||||
# FSS server storage pool id list
|
||||
fss_pools=P:2,O:3
|
||||
# Name to give this storage back-end
|
||||
volume_backend_name = FSSISCSIDriver
|
||||
# The iSCSI driver to load
|
||||
volume_driver = cinder.volume.drivers.falconstor.iscsi.FSSISCSIDriver
|
||||
|
||||
|
||||
# ==Optional settings==
|
||||
|
||||
# Enable FSS log message
|
||||
fss_debug = true
|
||||
# Enable FSS thin provision
|
||||
san_thin_provision=true
|
||||
|
||||
Fibre Channel configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Use the following instructions to update the configuration file for fibre
|
||||
channel:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
default_volume_type = FSSFC
|
||||
enabled_backends = FSSFC
|
||||
|
||||
[FSSFC]
|
||||
# IP address of FSS server
|
||||
san_ip = 172.23.0.2
|
||||
# FSS server user name
|
||||
san_login = Admin
|
||||
# FSS server password
|
||||
san_password = secret
|
||||
# FSS server storage pool id list
|
||||
fss_pools=A:1
|
||||
# Name to give this storage back-end
|
||||
volume_backend_name = FSSFCDriver
|
||||
# The FC driver to load
|
||||
volume_driver = cinder.volume.drivers.falconstor.fc.FSSFCDriver
|
||||
|
||||
|
||||
# ==Optional settings==
|
||||
|
||||
# Enable FSS log message
|
||||
fss_debug = true
|
||||
# Enable FSS thin provision
|
||||
san_thin_provision=true
|
||||
|
||||
Driver options
|
||||
~~~~~~~~~~~~~~
|
||||
|
||||
The following table contains the configuration options specific to the
|
||||
FalconStor FSS storage volume driver.
|
||||
|
||||
.. include:: ../../tables/cinder-falconstor.rst
|
@ -0,0 +1,225 @@
|
||||
=========================
|
||||
Fujitsu ETERNUS DX driver
|
||||
=========================
|
||||
|
||||
The Fujitsu ETERNUS DX driver provides FC and iSCSI support for the
ETERNUS DX S3 series.
|
||||
|
||||
The driver performs volume operations by communicating with
|
||||
ETERNUS DX. It uses a CIM client in Python called PyWBEM
|
||||
to perform CIM operations over HTTP.
|
||||
|
||||
You can specify RAID Group and Thin Provisioning Pool (TPP)
|
||||
in ETERNUS DX as a storage pool.
|
||||
|
||||
System requirements
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Supported storages:
|
||||
|
||||
* ETERNUS DX60 S3
|
||||
* ETERNUS DX100 S3/DX200 S3
|
||||
* ETERNUS DX500 S3/DX600 S3
|
||||
* ETERNUS DX8700 S3/DX8900 S3
|
||||
* ETERNUS DX200F
|
||||
|
||||
Requirements:
|
||||
|
||||
* Firmware version V10L30 or later is required.
|
||||
* The multipath environment with ETERNUS Multipath Driver is unsupported.
|
||||
* An Advanced Copy Feature license is required
|
||||
to create a snapshot and a clone.
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
* Create, delete, attach, and detach volumes.
|
||||
* Create, list, and delete volume snapshots.
|
||||
* Create a volume from a snapshot.
|
||||
* Copy an image to a volume.
|
||||
* Copy a volume to an image.
|
||||
* Clone a volume.
|
||||
* Extend a volume. (\*1)
|
||||
* Get volume statistics.
|
||||
|
||||
(\*1): It is executable only when you use TPP as a storage pool.
|
||||
|
||||
Preparation
|
||||
~~~~~~~~~~~
|
||||
|
||||
Package installation
|
||||
--------------------
|
||||
|
||||
Install the ``python-pywbem`` package for your distribution.
|
||||
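For example, on common distributions this typically looks like the following;
the exact package name may vary by distribution and release:

.. code-block:: console

   # apt-get install python-pywbem

or:

.. code-block:: console

   # yum install pywbem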
|
||||
ETERNUS DX setup
|
||||
----------------
|
||||
|
||||
Perform the following steps using ETERNUS Web GUI or ETERNUS CLI.
|
||||
|
||||
.. note::
|
||||
* The following operations require an account that has the ``Admin`` role.
|
||||
* For detailed operations, refer to ETERNUS Web GUI User's Guide or
|
||||
ETERNUS CLI User's Guide for ETERNUS DX S3 series.
|
||||
|
||||
#. Create an account for communication with the cinder controller.
|
||||
|
||||
#. Enable the SMI-S of ETERNUS DX.
|
||||
|
||||
#. Register an Advanced Copy Feature license and configure copy table size.
|
||||
|
||||
#. Create a storage pool for volumes.
|
||||
|
||||
#. (Optional) If you want to create snapshots
|
||||
on a different storage pool for volumes,
|
||||
create a storage pool for snapshots.
|
||||
|
||||
#. Create a Snap Data Pool Volume (SDPV) to enable the Snap Data Pool (SDP)
used by ``create a snapshot``.
|
||||
|
||||
#. Configure storage ports used for OpenStack.
|
||||
|
||||
- Set those storage ports to CA mode.
|
||||
- Enable the host-affinity settings of those storage ports.
|
||||
|
||||
(ETERNUS CLI command for enabling host-affinity settings):
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
CLI> set fc-parameters -host-affinity enable -port <CM#><CA#><Port#>
|
||||
CLI> set iscsi-parameters -host-affinity enable -port <CM#><CA#><Port#>
|
||||
|
||||
#. Ensure a LAN connection between the cinder controller and the MNT port of
the ETERNUS DX, and a SAN connection between the compute nodes and the CA ports
of the ETERNUS DX.
|
||||
|
||||
Configuration
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
#. Add the following entries to ``/etc/cinder/cinder.conf``:
|
||||
|
||||
FC entries:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.fujitsu.eternus_dx_fc.FJDXFCDriver
|
||||
cinder_eternus_config_file = /etc/cinder/eternus_dx.xml
|
||||
|
||||
iSCSI entries:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.fujitsu.eternus_dx_iscsi.FJDXISCSIDriver
|
||||
cinder_eternus_config_file = /etc/cinder/eternus_dx.xml
|
||||
|
||||
If ``cinder_eternus_config_file`` is not specified,
the parameter defaults to
``/etc/cinder/cinder_fujitsu_eternus_dx.xml``.
|
||||
|
||||
#. Create a driver configuration file.
|
||||
|
||||
Create a driver configuration file in the file path specified
|
||||
as ``cinder_eternus_config_file`` in ``cinder.conf``,
|
||||
and add parameters to the file as below:
|
||||
|
||||
FC configuration:
|
||||
|
||||
.. code-block:: xml
|
||||
|
||||
<?xml version='1.0' encoding='UTF-8'?>
|
||||
<FUJITSU>
|
||||
<EternusIP>0.0.0.0</EternusIP>
|
||||
<EternusPort>5988</EternusPort>
|
||||
<EternusUser>smisuser</EternusUser>
|
||||
<EternusPassword>smispassword</EternusPassword>
|
||||
<EternusPool>raid5_0001</EternusPool>
|
||||
<EternusSnapPool>raid5_0001</EternusSnapPool>
|
||||
</FUJITSU>
|
||||
|
||||
iSCSI configuration:
|
||||
|
||||
.. code-block:: xml
|
||||
|
||||
<?xml version='1.0' encoding='UTF-8'?>
|
||||
<FUJITSU>
|
||||
<EternusIP>0.0.0.0</EternusIP>
|
||||
<EternusPort>5988</EternusPort>
|
||||
<EternusUser>smisuser</EternusUser>
|
||||
<EternusPassword>smispassword</EternusPassword>
|
||||
<EternusPool>raid5_0001</EternusPool>
|
||||
<EternusSnapPool>raid5_0001</EternusSnapPool>
|
||||
<EternusISCSIIP>1.1.1.1</EternusISCSIIP>
|
||||
<EternusISCSIIP>1.1.1.2</EternusISCSIIP>
|
||||
<EternusISCSIIP>1.1.1.3</EternusISCSIIP>
|
||||
<EternusISCSIIP>1.1.1.4</EternusISCSIIP>
|
||||
</FUJITSU>
|
||||
|
||||
Where:
|
||||
|
||||
``EternusIP``
|
||||
IP address for the SMI-S connection of the ETERNUS DX.
|
||||
|
||||
Enter the IP address of MNT port of the ETERNUS DX.
|
||||
|
||||
``EternusPort``
|
||||
Port number for the SMI-S connection port of the ETERNUS DX.
|
||||
|
||||
``EternusUser``
|
||||
User name for the SMI-S connection of the ETERNUS DX.
|
||||
|
||||
``EternusPassword``
|
||||
Password for the SMI-S connection of the ETERNUS DX.
|
||||
|
||||
``EternusPool``
|
||||
Storage pool name for volumes.
|
||||
|
||||
Enter RAID Group name or TPP name in the ETERNUS DX.
|
||||
|
||||
``EternusSnapPool``
|
||||
Storage pool name for snapshots.
|
||||
|
||||
Enter RAID Group name in the ETERNUS DX.
|
||||
|
||||
``EternusISCSIIP`` (Multiple setting allowed)
|
||||
iSCSI connection IP address of the ETERNUS DX.
|
||||
|
||||
.. note::
|
||||
|
||||
* For ``EternusSnapPool``, you can specify only a RAID Group name,
not a TPP name.
* You can specify the same RAID Group name for ``EternusPool`` and ``EternusSnapPool``
if you create volumes and snapshots on the same storage pool.
|
||||
|
||||
Configuration example
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
#. Edit ``cinder.conf``:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
enabled_backends = DXFC, DXISCSI
|
||||
|
||||
[DXFC]
|
||||
volume_driver = cinder.volume.drivers.fujitsu.eternus_dx_fc.FJDXFCDriver
|
||||
cinder_eternus_config_file = /etc/cinder/fc.xml
|
||||
volume_backend_name = FC
|
||||
|
||||
[DXISCSI]
|
||||
volume_driver = cinder.volume.drivers.fujitsu.eternus_dx_iscsi.FJDXISCSIDriver
|
||||
cinder_eternus_config_file = /etc/cinder/iscsi.xml
|
||||
volume_backend_name = ISCSI
|
||||
|
||||
#. Create the driver configuration files ``fc.xml`` and ``iscsi.xml``.
|
||||
|
||||
#. Create a volume type and set extra specs to the type:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type create DX_FC
|
||||
$ openstack volume type set --property volume_backend_name=FC DX_FC
|
||||
$ openstack volume type create DX_ISCSI
|
||||
$ openstack volume type set --property volume_backend_name=ISCSI DX_ISCSI
|
||||
|
||||
By issuing these commands,
the volume type ``DX_FC`` is associated with the ``FC`` back end,
and the type ``DX_ISCSI`` is associated with the ``ISCSI`` back end.
|
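To verify the configuration, you can create a test volume with one of these
types; the volume name and size below are illustrative:

.. code-block:: console

   $ openstack volume create --type DX_FC --size 1 dx-test-volume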
@ -0,0 +1,548 @@
|
||||
==========================================
|
||||
Hitachi NAS Platform NFS driver
|
||||
==========================================
|
||||
|
||||
This OpenStack Block Storage volume driver provides NFS support
|
||||
for `Hitachi NAS Platform (HNAS) <http://www.hds.com/products/file-and-content/
|
||||
network-attached-storage/>`_ Models 3080, 3090, 4040, 4060, 4080, and 4100
|
||||
with NAS OS 12.2 or higher.
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The NFS driver supports these operations:
|
||||
|
||||
* Create, delete, attach, and detach volumes.
|
||||
* Create, list, and delete volume snapshots.
|
||||
* Create a volume from a snapshot.
|
||||
* Copy an image to a volume.
|
||||
* Copy a volume to an image.
|
||||
* Clone a volume.
|
||||
* Extend a volume.
|
||||
* Get volume statistics.
|
||||
* Manage and unmanage a volume.
|
||||
* Manage and unmanage snapshots (`HNAS NFS only`).
|
||||
* List manageable volumes and snapshots (`HNAS NFS only`).
|
||||
|
||||
HNAS storage requirements
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Before using NFS services, use the HNAS configuration and management
|
||||
GUI (SMU) or SSC CLI to configure HNAS to work with the drivers. Additionally:
|
||||
|
||||
1. General:
|
||||
|
||||
* It is mandatory to have at least ``1 storage pool, 1 EVS and 1 file
|
||||
system`` to be able to run any of the HNAS drivers.
|
||||
* HNAS drivers report capacity to cinder based on the space allocated to the
file systems. So, when creating a file system, make sure
it has enough space to fit your needs.
|
||||
* The file system used should not be created as a ``replication target`` and
|
||||
should be mounted.
|
||||
* It is possible to configure HNAS drivers to use distinct EVSs and file
|
||||
systems, but ``all compute nodes and controllers`` in the cloud must have
|
||||
access to the EVSs.
|
||||
|
||||
2. For NFS:
|
||||
|
||||
* Create NFS exports, choose a path for them (it must be different from
|
||||
``/``) and set the :guilabel:`Show snapshots` option to ``hide and
|
||||
disable access``.
|
||||
* For each export used, set the option ``norootsquash`` in the share
|
||||
``Access configuration`` so Block Storage services can change the
|
||||
permissions of its volumes. For example, ``"* (rw, norootsquash)"``.
|
||||
* Make sure that all computes and controllers have R/W access to the
|
||||
shares used by cinder HNAS driver.
|
||||
* In order to use the hardware accelerated features of HNAS NFS, we
|
||||
recommend setting ``max-nfs-version`` to 3. Refer to Hitachi NAS Platform
|
||||
command line reference to see how to configure this option.
|
||||
|
||||
Block Storage host requirements
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The HNAS drivers are supported for Red Hat Enterprise Linux OpenStack
|
||||
Platform, SUSE OpenStack Cloud, and Ubuntu OpenStack.
|
||||
The following packages must be installed in all compute, controller and
|
||||
storage (if any) nodes:
|
||||
|
||||
* ``nfs-utils`` for Red Hat Enterprise Linux OpenStack Platform
|
||||
* ``nfs-client`` for SUSE OpenStack Cloud
|
||||
* ``nfs-common``, ``libc6-i386`` for Ubuntu OpenStack
|
||||
|
||||
Package installation
|
||||
--------------------
|
||||
|
||||
If you are installing the driver from an RPM or DEB package,
|
||||
follow the steps below:
|
||||
|
||||
#. Install the dependencies:
|
||||
|
||||
In Red Hat:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# yum install nfs-utils nfs-utils-lib
|
||||
|
||||
Or in Ubuntu:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# apt-get install nfs-common
|
||||
|
||||
Or in SUSE:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# zypper install nfs-client
|
||||
|
||||
If you are using Ubuntu 12.04, you also need to install ``libc6-i386``:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# apt-get install libc6-i386
|
||||
|
||||
#. Configure the driver as described in the :ref:`hnas-driver-configuration`
|
||||
section.
|
||||
|
||||
#. Restart all Block Storage services (volume, scheduler, and backup).
|
||||
|
||||
.. _hnas-driver-configuration:
|
||||
|
||||
Driver configuration
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
HNAS supports a variety of storage options and file system capabilities,
|
||||
which are selected through the definition of volume types combined with the
|
||||
use of multiple back ends and multiple services. Each back end can configure
|
||||
up to ``4 service pools``, which can be mapped to cinder volume types.
|
||||
|
||||
The configuration for the driver is read from the back-end sections of the
|
||||
``cinder.conf``. Each back-end section must have the appropriate configurations
|
||||
to communicate with your HNAS back end, such as the IP address of the HNAS EVS
|
||||
that is hosting your data, HNAS SSH access credentials, the configuration of
|
||||
each of the services in that back end, and so on. You can find examples of such
|
||||
configurations in the :ref:`configuration_example` section.
|
||||
|
||||
.. note::
|
||||
HNAS cinder drivers still support the XML configuration the
|
||||
same way it was in the older versions, but we recommend configuring the
|
||||
HNAS cinder drivers only through the ``cinder.conf`` file,
|
||||
since the XML configuration file from previous versions is being
|
||||
deprecated as of Newton Release.
|
||||
|
||||
.. note::
|
||||
We do not recommend the use of the same NFS export for different back ends.
|
||||
If possible, configure each back end to
|
||||
use a different NFS export/file system.
|
||||
|
||||
The following is the definition of each configuration option that can be used
|
||||
in a HNAS back-end section in the ``cinder.conf`` file:
|
||||
|
||||
.. list-table:: **Configuration options in cinder.conf**
|
||||
:header-rows: 1
|
||||
:widths: 25, 10, 15, 50
|
||||
|
||||
* - Option
|
||||
- Type
|
||||
- Default
|
||||
- Description
|
||||
* - ``volume_backend_name``
|
||||
- Optional
|
||||
- N/A
|
||||
- A name that identifies the back end and can be used as an extra-spec to
|
||||
redirect the volumes to the referenced back end.
|
||||
* - ``volume_driver``
|
||||
- Required
|
||||
- N/A
|
||||
- The python module path to the HNAS volume driver python class. When
|
||||
installing through the rpm or deb packages, you should configure this
|
||||
to `cinder.volume.drivers.hitachi.hnas_nfs.HNASNFSDriver`.
|
||||
* - ``nfs_shares_config``
|
||||
- Required (only for NFS)
|
||||
- /etc/cinder/nfs_shares
|
||||
- Path to the ``nfs_shares`` file. This is required by the base cinder
|
||||
generic NFS driver and therefore also required by the HNAS NFS driver.
|
||||
This file should list, one per line, every NFS share being used by the
|
||||
back end. For example, all the values found in the configuration keys
|
||||
``hnas_svcX_hdp`` in the HNAS NFS back-end sections.
|
||||
* - ``hnas_mgmt_ip0``
|
||||
- Required
|
||||
- N/A
|
||||
- HNAS management IP address. Should be the IP address of the `Admin`
|
||||
EVS. It is also the IP through which you access the web SMU
|
||||
administration frontend of HNAS.
|
||||
* - ``hnas_username``
|
||||
- Required
|
||||
- N/A
|
||||
- HNAS SSH username
|
||||
* - ``hds_hnas_nfs_config_file``
|
||||
- Optional (deprecated)
|
||||
- /opt/hds/hnas/cinder_nfs_conf.xml
|
||||
- Path to the deprecated XML configuration file (only required if using
|
||||
the XML file)
|
||||
* - ``hnas_cluster_admin_ip0``
|
||||
- Optional (required only for HNAS multi-farm setups)
|
||||
- N/A
|
||||
- The IP of the HNAS farm admin. If your SMU controls more than one
|
||||
system or cluster, this option must be set with the IP of the desired
|
||||
node. This is different for HNAS multi-cluster setups, which
do not require this option to be set.
|
||||
* - ``hnas_ssh_private_key``
|
||||
- Optional
|
||||
- N/A
|
||||
- Path to the SSH private key used to authenticate to the HNAS SMU. Only
|
||||
required if you do not want to set `hnas_password`.
|
||||
* - ``hnas_ssh_port``
|
||||
- Optional
|
||||
- 22
|
||||
- Port on which HNAS is listening for SSH connections
|
||||
* - ``hnas_password``
|
||||
- Required (unless hnas_ssh_private_key is provided)
|
||||
- N/A
|
||||
- HNAS password
|
||||
* - ``hnas_svcX_hdp`` [1]_
|
||||
- Required (at least 1)
|
||||
- N/A
|
||||
- HDP (export) where the volumes will be created. Use
export paths to configure this.
|
||||
* - ``hnas_svcX_pool_name``
|
||||
- Required
|
||||
- N/A
|
||||
- A `unique string` that is used to refer to this pool within the
|
||||
context of cinder. You can tell cinder to put volumes of a specific
|
||||
volume type into this back end, within this pool. See,
|
||||
``Service Labels`` and :ref:`configuration_example` sections
|
||||
for more details.
|
||||
|
||||
.. [1]
|
||||
Replace X with a number from 0 to 3 (keep the sequence when configuring
|
||||
the driver)
|
||||
|
||||
Service labels
|
||||
~~~~~~~~~~~~~~
|
||||
|
||||
The HNAS driver supports differentiated types of service using service labels.
It is possible to create up to 4 service types for each back end (for example:
gold, platinum, silver, ssd, and so on).
|
||||
|
||||
After creating the services in the ``cinder.conf`` configuration file, you
|
||||
need to configure one cinder ``volume_type`` per service. Each ``volume_type``
|
||||
must have the metadata service_label with the same name configured in the
|
||||
``hnas_svcX_pool_name option`` of that service. See the
|
||||
:ref:`configuration_example` section for more details. If the ``volume_type``
is not set, cinder chooses the service pool with the largest available free
space or uses other criteria configured in the scheduler filters.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type create default
|
||||
$ openstack volume type set --property service_label=default default
|
||||
$ openstack volume type create platinum-tier
|
||||
$ openstack volume type set --property service_label=platinum platinum-tier
|
||||
|
||||
Multi-backend configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
You can deploy multiple OpenStack HNAS Driver instances (back ends) that each
|
||||
controls a separate HNAS or a single HNAS. If you use multiple cinder
|
||||
back ends, remember that each cinder back end can host up to 4 services. Each
|
||||
back-end section must have the appropriate configurations to communicate with
|
||||
your HNAS back end, such as the IP address of the HNAS EVS that is hosting
|
||||
your data, HNAS SSH access credentials, the configuration of each of the
|
||||
services in that back end, and so on. You can find examples of such
|
||||
configurations in the :ref:`configuration_example` section.
|
||||
|
||||
If you want the volumes from a volume type to be cast into a specific
back end, you must configure an extra spec in the ``volume_type`` with the
value of the ``volume_backend_name`` option from that back end.
|
||||
|
||||
For multiple NFS back-end configurations, each back end should have a
separate ``nfs_shares_config`` option and a separate ``nfs_shares`` file
defined (for example, ``nfs_shares1``, ``nfs_shares2``) with the desired
shares listed on separate lines, as in the sketch below.
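The following is a minimal sketch of two NFS back-end sections; the section
names, back-end names, and share file paths are illustrative, and only the
options relevant to this point are shown:

.. code-block:: ini

   [hnas-nfs-1]
   volume_driver = cinder.volume.drivers.hitachi.hnas_nfs.HNASNFSDriver
   nfs_shares_config = /home/cinder/nfs_shares1
   volume_backend_name = hnas_nfs_backend_1

   [hnas-nfs-2]
   volume_driver = cinder.volume.drivers.hitachi.hnas_nfs.HNASNFSDriver
   nfs_shares_config = /home/cinder/nfs_shares2
   volume_backend_name = hnas_nfs_backend_2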
|
||||
|
||||
SSH configuration
|
||||
~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. note::
|
||||
As of the Newton OpenStack release, the user can no longer run the
|
||||
driver using a locally installed instance of the :command:`SSC` utility
|
||||
package. Instead, all communications with the HNAS back end are handled
|
||||
through :command:`SSH`.
|
||||
|
||||
You can use your username and password to authenticate the Block Storage node
|
||||
to the HNAS back end. In order to do that, simply configure ``hnas_username``
|
||||
and ``hnas_password`` in your back end section within the ``cinder.conf``
|
||||
file.
|
||||
|
||||
For example:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[hnas-backend]
|
||||
# ...
|
||||
hnas_username = supervisor
|
||||
hnas_password = supervisor
|
||||
|
||||
Alternatively, the HNAS cinder driver also supports SSH authentication
|
||||
through public key. To configure that:
|
||||
|
||||
#. If you do not have a pair of public keys already generated, create it in
|
||||
the Block Storage node (leave the pass-phrase empty):
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ mkdir -p /opt/hitachi/ssh
|
||||
$ ssh-keygen -f /opt/hitachi/ssh/hnaskey
|
||||
|
||||
#. Change the owner of the key to cinder (or the user the volume service will
|
||||
be run as):
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# chown -R cinder:cinder /opt/hitachi/ssh
|
||||
|
||||
#. Create the directory ``ssh_keys`` in the SMU server:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ ssh [manager|supervisor]@<smu-ip> 'mkdir -p /var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/'
|
||||
|
||||
#. Copy the public key to the ``ssh_keys`` directory:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ scp /opt/hitachi/ssh/hnaskey.pub [manager|supervisor]@<smu-ip>:/var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/
|
||||
|
||||
#. Access the SMU server:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ ssh [manager|supervisor]@<smu-ip>
|
||||
|
||||
#. Run the command to register the SSH keys:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ ssh-register-public-key -u [manager|supervisor] -f ssh_keys/hnaskey.pub
|
||||
|
||||
#. Check the communication with HNAS in the Block Storage node:
|
||||
|
||||
For multi-farm HNAS:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ ssh -i /opt/hitachi/ssh/hnaskey [manager|supervisor]@<smu-ip> 'ssc <cluster_admin_ip0> df -a'
|
||||
|
||||
Or, for Single-node/Multi-Cluster:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ ssh -i /opt/hitachi/ssh/hnaskey [manager|supervisor]@<smu-ip> 'ssc localhost df -a'
|
||||
|
||||
#. Configure your backend section in ``cinder.conf`` to use your public key:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[hnas-backend]
|
||||
# ...
|
||||
hnas_ssh_private_key = /opt/hitachi/ssh/hnaskey
|
||||
|
||||
Managing volumes
|
||||
~~~~~~~~~~~~~~~~
|
||||
|
||||
If there are some existing volumes on HNAS that you want to import to cinder,
|
||||
it is possible to use the manage volume feature to do this. The manage action
|
||||
on an existing volume is very similar to a volume creation. It creates a
|
||||
volume entry on cinder database, but instead of creating a new volume in the
|
||||
back end, it only adds a link to an existing volume.
|
||||
|
||||
.. note::
|
||||
It is an admin-only feature, and you have to be logged in as a user
with admin rights to be able to use it.
|
||||
|
||||
#. Under the :menuselection:`System > Volumes` tab,
|
||||
choose the option :guilabel:`Manage Volume`.
|
||||
|
||||
#. Fill the fields :guilabel:`Identifier`, :guilabel:`Host`,
|
||||
:guilabel:`Volume Name`, and :guilabel:`Volume Type` with volume
|
||||
information to be managed:
|
||||
|
||||
* :guilabel:`Identifier`: ip:/type/volume_name (*For example:*
|
||||
172.24.44.34:/silver/volume-test)
|
||||
* :guilabel:`Host`: `host@backend-name#pool_name` (*For example:*
|
||||
`ubuntu@hnas-nfs#test_silver`)
|
||||
* :guilabel:`Volume Name`: volume_name (*For example:* volume-test)
|
||||
* :guilabel:`Volume Type`: choose a type of volume (*For example:* silver)
|
||||
|
||||
By CLI:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ cinder manage [--id-type <id-type>][--name <name>][--description <description>]
|
||||
[--volume-type <volume-type>][--availability-zone <availability-zone>]
|
||||
[--metadata [<key=value> [<key=value> ...]]][--bootable] <host> <identifier>
|
||||
|
||||
Example:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ cinder manage --name volume-test --volume-type silver
|
||||
ubuntu@hnas-nfs#test_silver 172.24.44.34:/silver/volume-test
|
||||
|
||||
Managing snapshots
|
||||
~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The manage snapshots feature works very similarly to the manage volumes
|
||||
feature, currently supported on HNAS cinder drivers. So, if you have a volume
|
||||
already managed by cinder which has snapshots that are not managed by cinder,
|
||||
it is possible to use manage snapshots to import these snapshots and link them
|
||||
with their original volume.
|
||||
|
||||
.. note::
|
||||
For HNAS NFS cinder driver, the snapshots of volumes are clones of volumes
|
||||
that were created using :command:`file-clone-create`, not the HNAS
|
||||
:command:`snapshot-\*` feature. Check the HNAS user
documentation for details about these two features.
|
||||
|
||||
Currently, the manage snapshots function does not support importing snapshots
|
||||
(generally created by storage's :command:`file-clone` operation)
|
||||
``without parent volumes`` or when the parent volume is ``in-use``. In this
|
||||
case, the ``manage volumes`` should be used to import the snapshot as a normal
|
||||
cinder volume.
|
||||
|
||||
Also, it is an admin-only feature, and you have to be logged in as a user with
admin rights to be able to use it.
|
||||
|
||||
.. note::
|
||||
Although there is a verification to prevent importing snapshots using
|
||||
non-related volumes as parents, it is possible to manage a snapshot using
|
||||
any related cloned volume. So, when managing a snapshot, it is extremely
|
||||
important to make sure that you are using the correct parent volume.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ cinder snapshot-manage <volume> <identifier>
|
||||
|
||||
* :guilabel:`Identifier`: evs_ip:/export_name/snapshot_name
|
||||
(*For example:* 172.24.44.34:/export1/snapshot-test)
|
||||
|
||||
* :guilabel:`Volume`: Parent volume ID (*For example:*
|
||||
061028c0-60cf-499f-99e2-2cd6afea081f)
|
||||
|
||||
Example:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ cinder snapshot-manage 061028c0-60cf-499f-99e2-2cd6afea081f 172.24.44.34:/export1/snapshot-test
|
||||
|
||||
.. note::
|
||||
This feature is currently available only for HNAS NFS Driver.
|
||||
|
||||
.. _configuration_example:
|
||||
|
||||
Configuration example
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Below is a configuration example for the NFS back end:
|
||||
|
||||
#. HNAS NFS Driver
|
||||
|
||||
#. For HNAS NFS driver, create this section in your ``cinder.conf`` file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[hnas-nfs]
|
||||
volume_driver = cinder.volume.drivers.hitachi.hnas_nfs.HNASNFSDriver
|
||||
nfs_shares_config = /home/cinder/nfs_shares
|
||||
volume_backend_name = hnas_nfs_backend
|
||||
hnas_username = supervisor
|
||||
hnas_password = supervisor
|
||||
hnas_mgmt_ip0 = 172.24.44.15
|
||||
|
||||
hnas_svc0_pool_name = nfs_gold
|
||||
hnas_svc0_hdp = 172.24.49.21:/gold_export
|
||||
|
||||
hnas_svc1_pool_name = nfs_platinum
|
||||
hnas_svc1_hdp = 172.24.49.21:/silver_platinum
|
||||
|
||||
hnas_svc2_pool_name = nfs_silver
|
||||
hnas_svc2_hdp = 172.24.49.22:/silver_export
|
||||
|
||||
hnas_svc3_pool_name = nfs_bronze
|
||||
hnas_svc3_hdp = 172.24.49.23:/bronze_export
|
||||
|
||||
#. Add it to the ``enabled_backends`` list, under the ``DEFAULT`` section
|
||||
of your ``cinder.conf`` file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
enabled_backends = hnas-nfs
|
||||
|
||||
#. Add the configured exports to the ``nfs_shares`` file:
|
||||
|
||||
.. code-block:: vim
|
||||
|
||||
172.24.49.21:/gold_export
|
||||
172.24.49.21:/silver_platinum
|
||||
172.24.49.22:/silver_export
|
||||
172.24.49.23:/bronze_export
|
||||
|
||||
#. Register a volume type with cinder and associate it with
|
||||
this backend:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type create hnas_nfs_gold
|
||||
$ openstack volume type set --property volume_backend_name=hnas_nfs_backend \
|
||||
service_label=nfs_gold hnas_nfs_gold
|
||||
$ openstack volume type create hnas_nfs_platinum
|
||||
$ openstack volume type set --property volume_backend_name=hnas_nfs_backend \
|
||||
service_label=nfs_platinum hnas_nfs_platinum
|
||||
$ openstack volume type create hnas_nfs_silver
|
||||
$ openstack volume type set --property volume_backend_name=hnas_nfs_backend \
|
||||
service_label=nfs_silver hnas_nfs_silver
|
||||
$ openstack volume type create hnas_nfs_bronze
|
||||
$ openstack volume type set --property volume_backend_name=hnas_nfs_backend \
|
||||
service_label=nfs_bronze hnas_nfs_bronze
|
||||
|
||||
Additional notes and limitations
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
* The ``get_volume_stats()`` function always provides the available
|
||||
capacity based on the combined sum of all the HDPs that are used in
|
||||
these services labels.
|
||||
|
||||
* After changing the configuration on the storage node, the Block Storage
|
||||
driver must be restarted.
|
||||
|
||||
* On Red Hat, if the system is configured to use SELinux, you need to
set ``virt_use_nfs = on`` for the NFS driver to work properly.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# setsebool -P virt_use_nfs on
|
||||
|
||||
* It is not possible to manage a volume if there is a slash (``/``) or
|
||||
a colon (``:``) in the volume name.
|
||||
|
||||
* File system ``auto-expansion``: Although supported, we do not recommend using
|
||||
file systems with auto-expansion setting enabled because the scheduler uses
|
||||
the file system capacity reported by the driver to determine if new volumes
|
||||
can be created. For instance, in a setup with a file system that can expand
|
||||
to 200GB but is at 100GB capacity, with 10GB free, the scheduler will not
|
||||
allow a 15GB volume to be created. In this case, manual expansion would
|
||||
have to be triggered by an administrator. We recommend always creating the
|
||||
file system at the ``maximum capacity`` or periodically expanding the file
|
||||
system manually.
|
||||
|
||||
* The ``hnas_svcX_pool_name`` option must be unique for a given back end. It
|
||||
is still possible to use the deprecated form ``hnas_svcX_volume_type``, but
|
||||
this support will be removed in a future release.
|
||||
|
||||
* SSC simultaneous connections limit: In very busy environments, if 2 or
|
||||
more volume hosts are configured to use the same storage, some requests
(create, delete, and so on) can fail and be retried (``5 attempts``
by default) due to an HNAS connection limitation (a ``max of 5``
simultaneous connections).
|
@ -0,0 +1,169 @@
|
||||
=============================
|
||||
Hitachi storage volume driver
|
||||
=============================
|
||||
|
||||
The Hitachi storage volume driver provides iSCSI and Fibre Channel
support for Hitachi storage arrays.
|
||||
|
||||
System requirements
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Supported storages:
|
||||
|
||||
* Hitachi Virtual Storage Platform G1000 (VSP G1000)
|
||||
* Hitachi Virtual Storage Platform (VSP)
|
||||
* Hitachi Unified Storage VM (HUS VM)
|
||||
* Hitachi Unified Storage 100 Family (HUS 100 Family)
|
||||
|
||||
Required software:
|
||||
|
||||
* RAID Manager Ver 01-32-03/01 or later for VSP G1000/VSP/HUS VM
|
||||
* Hitachi Storage Navigator Modular 2 (HSNM2) Ver 27.50 or later
|
||||
for HUS 100 Family
|
||||
|
||||
.. note::
|
||||
|
||||
HSNM2 needs to be installed under ``/usr/stonavm``.
|
||||
|
||||
Required licenses:
|
||||
|
||||
* Hitachi In-System Replication Software for VSP G1000/VSP/HUS VM
|
||||
* (Mandatory) ShadowImage in-system replication for HUS 100 Family
|
||||
* (Optional) Copy-on-Write Snapshot for HUS 100 Family
|
||||
|
||||
Additionally, the ``pexpect`` package is required.
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
* Create, delete, attach, and detach volumes.
|
||||
* Create, list, and delete volume snapshots.
|
||||
* Manage and unmanage volume snapshots.
|
||||
* Create a volume from a snapshot.
|
||||
* Copy a volume to an image.
|
||||
* Copy an image to a volume.
|
||||
* Clone a volume.
|
||||
* Extend a volume.
|
||||
* Get volume statistics.
|
||||
|
||||
Configuration
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
Set up Hitachi storage
|
||||
----------------------
|
||||
|
||||
You need to specify settings as described below. For details about each step,
|
||||
see the user's guide of the storage device. Use a storage administrative
|
||||
software such as ``Storage Navigator`` to set up the storage device so that
|
||||
LDEVs and host groups can be created and deleted, and LDEVs can be connected
|
||||
to the server and can be asynchronously copied.
|
||||
|
||||
#. Create a Dynamic Provisioning pool.
|
||||
|
||||
#. Connect the ports at the storage to the controller node and compute nodes.
|
||||
|
||||
#. For VSP G1000/VSP/HUS VM, set ``port security`` to ``enable`` for the
|
||||
ports at the storage.
|
||||
|
||||
#. For HUS 100 Family, set ``Host Group security`` or
|
||||
``iSCSI target security`` to ``ON`` for the ports at the storage.
|
||||
|
||||
#. For the ports at the storage, create host groups (iSCSI targets) whose
|
||||
names begin with HBSD- for the controller node and each compute node.
|
||||
Then register a WWN (initiator IQN) for each of the controller node and
|
||||
compute nodes.
|
||||
|
||||
#. For VSP G1000/VSP/HUS VM, perform the following:
|
||||
|
||||
* Create a storage device account belonging to the Administrator User
|
||||
Group. (To use multiple storage devices, create the same account name
|
||||
for all the target storage devices, and specify the same resource
|
||||
group and permissions.)
|
||||
* Create a command device (In-Band), and set user authentication to ``ON``.
|
||||
* Register the created command device to the host group for the controller
|
||||
node.
|
||||
* To use the Thin Image function, create a pool for Thin Image.
|
||||
|
||||
#. For HUS 100 Family, perform the following:
|
||||
|
||||
* Use the :command:`auunitaddauto` command to register the
|
||||
unit name and controller of the storage device to HSNM2.
|
||||
* When connecting via iSCSI, if you are using CHAP certification, specify
|
||||
the same user and password as that used for the storage port.
|
||||
|
||||
Set up Hitachi Gigabit Fibre Channel adaptor
|
||||
--------------------------------------------
|
||||
|
||||
Change a parameter of the hfcldd driver and update the ``initramfs`` file
if a Hitachi Gigabit Fibre Channel adaptor is used:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# /opt/hitachi/drivers/hba/hfcmgr -E hfc_rport_lu_scan 1
|
||||
# dracut -f initramfs-KERNEL_VERSION.img KERNEL_VERSION
|
||||
# reboot
|
||||
|
||||
Set up Hitachi storage volume driver
|
||||
------------------------------------
|
||||
|
||||
#. Create a directory:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# mkdir /var/lock/hbsd
|
||||
# chown cinder:cinder /var/lock/hbsd
|
||||
|
||||
#. Create ``volume type`` and ``volume key``.
|
||||
|
||||
This example shows that HUS100_SAMPLE is created as ``volume type``
|
||||
and hus100_backend is registered as ``volume key``:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type create HUS100_SAMPLE
|
||||
$ openstack volume type set --property volume_backend_name=hus100_backend HUS100_SAMPLE
|
||||
|
||||
#. Specify any identical ``volume type`` name and ``volume key``.
|
||||
|
||||
To confirm the created ``volume type``, please execute the following
|
||||
command:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type list --long
|
||||
|
||||
#. Edit the ``/etc/cinder/cinder.conf`` file as follows.
|
||||
|
||||
If you use Fibre Channel:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.hitachi.hbsd_fc.HBSDFCDriver
|
||||
|
||||
If you use iSCSI:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.hitachi.hbsd_iscsi.HBSDISCSIDriver
|
||||
|
||||
Also, set ``volume_backend_name`` to the value specified with the
:command:`openstack volume type set` command:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_backend_name = hus100_backend
|
||||
|
||||
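Putting these settings together, a back-end section might look like the
following minimal sketch; the section name ``hus100_backend`` is illustrative,
and a working setup also requires the Hitachi-specific options (such as pool
and port settings) listed in the table below:

.. code-block:: ini

   [DEFAULT]
   enabled_backends = hus100_backend

   [hus100_backend]
   volume_driver = cinder.volume.drivers.hitachi.hbsd_iscsi.HBSDISCSIDriver
   volume_backend_name = hus100_backend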
This table shows configuration options for Hitachi storage volume driver.
|
||||
|
||||
.. include:: ../../tables/cinder-hitachi-hbsd.rst
|
||||
|
||||
#. Restart the Block Storage service.
|
||||
|
||||
When the startup is done, "MSGID0003-I: The storage backend can be used."
|
||||
is output into ``/var/log/cinder/volume.log`` as follows:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
2014-09-01 10:34:14.169 28734 WARNING cinder.volume.drivers.hitachi.
|
||||
hbsd_common [req-a0bb70b5-7c3f-422a-a29e-6a55d6508135 None None]
|
||||
MSGID0003-I: The storage backend can be used. (config_group: hus100_backend)
|
@ -0,0 +1,165 @@
|
||||
======================================
|
||||
HP MSA Fibre Channel and iSCSI drivers
|
||||
======================================
|
||||
|
||||
The ``HPMSAFCDriver`` and ``HPMSAISCSIDriver`` Cinder drivers allow HP MSA
|
||||
2040 or 1040 arrays to be used for Block Storage in OpenStack deployments.
|
||||
|
||||
System requirements
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
To use the HP MSA drivers, the following are required:
|
||||
|
||||
- HP MSA 2040 or 1040 array with:
|
||||
|
||||
- iSCSI or FC host interfaces
|
||||
- G22x firmware or later
|
||||
|
||||
- Network connectivity between the OpenStack host and the array management
|
||||
interfaces
|
||||
|
||||
- HTTPS or HTTP must be enabled on the array
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
- Create, delete, attach, and detach volumes.
|
||||
- Create, list, and delete volume snapshots.
|
||||
- Create a volume from a snapshot.
|
||||
- Copy an image to a volume.
|
||||
- Copy a volume to an image.
|
||||
- Clone a volume.
|
||||
- Extend a volume.
|
||||
- Migrate a volume with back-end assistance.
|
||||
- Retype a volume.
|
||||
- Manage and unmanage a volume.
|
||||
|
||||
Configuring the array
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
#. Verify that the array can be managed via an HTTPS connection. HTTP can also
|
||||
be used if ``hpmsa_api_protocol=http`` is placed into the appropriate
|
||||
sections of the ``cinder.conf`` file.
|
||||
|
||||
Confirm that virtual pools A and B are present if you plan to use virtual
|
||||
pools for OpenStack storage.
|
||||
|
||||
If you plan to use vdisks instead of virtual pools, create or identify one
|
||||
or more vdisks to be used for OpenStack storage; typically this will mean
|
||||
creating or setting aside one disk group for each of the A and B
|
||||
controllers.
|
||||
|
||||
#. Edit the ``cinder.conf`` file to define a storage back end entry for each
|
||||
storage pool on the array that will be managed by OpenStack. Each entry
|
||||
consists of a unique section name, surrounded by square brackets, followed
|
||||
by options specified in a ``key=value`` format.
|
||||
|
||||
* The ``hpmsa_backend_name`` value specifies the name of the storage pool
|
||||
or vdisk on the array.
|
||||
|
||||
* The ``volume_backend_name`` option value can be a unique value, if you
|
||||
wish to be able to assign volumes to a specific storage pool on the
|
||||
array, or a name that is shared among multiple storage pools to let the
|
||||
volume scheduler choose where new volumes are allocated.
|
||||
|
||||
* The rest of the options will be repeated for each storage pool in a given
|
||||
array: the appropriate Cinder driver name; IP address or host name of the
|
||||
array management interface; the username and password of an array user
|
||||
account with ``manage`` privileges; and the iSCSI IP addresses for the
|
||||
array if using the iSCSI transport protocol.
|
||||
|
||||
In the examples below, two back ends are defined, one for pool A and one for
|
||||
pool B, and a common ``volume_backend_name`` is used so that a single
|
||||
volume type definition can be used to allocate volumes from both pools.
|
||||
|
||||
**iSCSI example back-end entries**
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[pool-a]
|
||||
hpmsa_backend_name = A
|
||||
volume_backend_name = hpmsa-array
|
||||
volume_driver = cinder.volume.drivers.san.hp.hpmsa_iscsi.HPMSAISCSIDriver
|
||||
san_ip = 10.1.2.3
|
||||
san_login = manage
|
||||
san_password = !manage
|
||||
hpmsa_iscsi_ips = 10.2.3.4,10.2.3.5
|
||||
|
||||
[pool-b]
|
||||
hpmsa_backend_name = B
|
||||
volume_backend_name = hpmsa-array
|
||||
volume_driver = cinder.volume.drivers.san.hp.hpmsa_iscsi.HPMSAISCSIDriver
|
||||
san_ip = 10.1.2.3
|
||||
san_login = manage
|
||||
san_password = !manage
|
||||
hpmsa_iscsi_ips = 10.2.3.4,10.2.3.5
|
||||
|
||||
**Fibre Channel example back-end entries**
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[pool-a]
|
||||
hpmsa_backend_name = A
|
||||
volume_backend_name = hpmsa-array
|
||||
volume_driver = cinder.volume.drivers.san.hp.hpmsa_fc.HPMSAFCDriver
|
||||
san_ip = 10.1.2.3
|
||||
san_login = manage
|
||||
san_password = !manage
|
||||
|
||||
[pool-b]
|
||||
hpmsa_backend_name = B
|
||||
volume_backend_name = hpmsa-array
|
||||
volume_driver = cinder.volume.drivers.san.hp.hpmsa_fc.HPMSAFCDriver
|
||||
san_ip = 10.1.2.3
|
||||
san_login = manage
|
||||
san_password = !manage
|
||||
|
||||
#. If any ``hpmsa_backend_name`` value refers to a vdisk rather than a
virtual pool, add the additional statement ``hpmsa_backend_type = linear``
to that back-end entry.
|
||||
|
||||
#. If HTTPS is not enabled in the array, include ``hpmsa_api_protocol = http``
|
||||
in each of the back-end definitions.
|
||||
|
||||
#. If HTTPS is enabled, you can enable certificate verification with the option
|
||||
``hpmsa_verify_certificate=True``. You may also use the
|
||||
``hpmsa_verify_certificate_path`` parameter to specify the path to a
|
||||
CA\_BUNDLE file containing CAs other than those in the default list.
|
||||
|
||||
#. Modify the ``[DEFAULT]`` section of the ``cinder.conf`` file to add an
``enabled_backends`` parameter specifying the back-end entries you added,
and a ``default_volume_type`` parameter specifying the name of a volume type
that you will create in the next step.
|
||||
|
||||
**Example of [DEFAULT] section changes**
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
enabled_backends = pool-a,pool-b
|
||||
default_volume_type = hpmsa
|
||||
|
||||
|
||||
#. Create a new volume type for each distinct ``volume_backend_name`` value
|
||||
that you added in the ``cinder.conf`` file. The example below assumes that
|
||||
the same ``volume_backend_name=hpmsa-array`` option was specified in all
|
||||
of the entries, and specifies that the volume type ``hpmsa`` can be used to
|
||||
allocate volumes from any of them.
|
||||
|
||||
**Example of creating a volume type**
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type create hpmsa
|
||||
$ openstack volume type set --property volume_backend_name=hpmsa-array hpmsa
|
||||
|
||||
#. After modifying the ``cinder.conf`` file, restart the ``cinder-volume``
|
||||
service.
|
||||
|
||||
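With the ``hpmsa`` volume type in place, you can verify the setup by creating
a test volume; the volume name and size below are illustrative:

.. code-block:: console

   $ openstack volume create --type hpmsa --size 1 msa-test-volume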
Driver-specific options
|
||||
~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The following table contains the configuration options that are specific to
|
||||
the HP MSA drivers.
|
||||
|
||||
.. include:: ../../tables/cinder-hpmsa.rst
|
@ -0,0 +1,384 @@
|
||||
========================================
|
||||
HPE 3PAR Fibre Channel and iSCSI drivers
|
||||
========================================
|
||||
|
||||
The ``HPE3PARFCDriver`` and ``HPE3PARISCSIDriver`` drivers, which are based on
|
||||
the Block Storage service (Cinder) plug-in architecture, run volume operations
|
||||
by communicating with the HPE 3PAR storage system over HTTP, HTTPS, and SSH
|
||||
connections. The HTTP and HTTPS communications use ``python-3parclient``,
which is available from the Python Package Index (PyPI).
|
||||
|
||||
For information about how to manage HPE 3PAR storage systems, see the HPE 3PAR
|
||||
user documentation.
|
||||
|
||||
System requirements
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
To use the HPE 3PAR drivers, install the following software and components on
|
||||
the HPE 3PAR storage system:
|
||||
|
||||
* HPE 3PAR Operating System software version 3.1.3 MU1 or higher.
|
||||
|
||||
* Deduplication provisioning requires SSD disks and HPE 3PAR Operating
|
||||
System software version 3.2.1 MU1 or higher.
|
||||
|
||||
* Enabling Flash Cache Policy requires the following:
|
||||
|
||||
* Array must contain SSD disks.
|
||||
|
||||
* HPE 3PAR Operating System software version 3.2.1 MU2 or higher.
|
||||
|
||||
* python-3parclient version 4.2.0 or newer.
|
||||
|
||||
* Array must have the Adaptive Flash Cache license installed.
|
||||
|
||||
* Flash Cache must be enabled on the array with the CLI command
|
||||
:command:`createflashcache SIZE`, where size must be in 16 GB increments.
|
||||
For example, :command:`createflashcache 128g` will create 128 GB of Flash
|
||||
Cache for each node pair in the array.
|
||||
|
||||
* The Dynamic Optimization license is required to support any feature that
|
||||
results in a volume changing provisioning type or CPG. This may apply to
|
||||
the volume :command:`migrate`, :command:`retype` and :command:`manage`
|
||||
commands.
|
||||
|
||||
* The Virtual Copy License is required to support any feature that involves
|
||||
volume snapshots. This applies to the volume :command:`snapshot-*`
|
||||
commands.
|
||||
|
||||
* HPE 3PAR drivers will now check the licenses installed on the array and
|
||||
disable driver capabilities based on available licenses. This will apply to
|
||||
thin provisioning, QoS support and volume replication.
|
||||
|
||||
* HPE 3PAR Web Services API Server must be enabled and running.
|
||||
|
||||
* One Common Provisioning Group (CPG).
|
||||
|
||||
* Additionally, you must install the ``python-3parclient`` version 4.2.0 or
newer from the Python Package Index (PyPI) on the system with the enabled Block
Storage service volume drivers.
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
* Create, delete, attach, and detach volumes.
|
||||
|
||||
* Create, list, and delete volume snapshots.
|
||||
|
||||
* Create a volume from a snapshot.
|
||||
|
||||
* Copy an image to a volume.
|
||||
|
||||
* Copy a volume to an image.
|
||||
|
||||
* Clone a volume.
|
||||
|
||||
* Extend a volume.
|
||||
|
||||
* Migrate a volume with back-end assistance.
|
||||
|
||||
* Retype a volume.
|
||||
|
||||
* Manage and unmanage a volume.
|
||||
|
||||
* Manage and unmanage a snapshot.
|
||||
|
||||
* Replicate host volumes.
|
||||
|
||||
* Fail-over host volumes.
|
||||
|
||||
* Fail-back host volumes.
|
||||
|
||||
* Create, delete, update, snapshot, and clone consistency groups.
|
||||
|
||||
* Create and delete consistency group snapshots.
|
||||
|
||||
* Create a consistency group from a consistency group snapshot or another
|
||||
group.
|
||||
|
||||
Volume type support for both HPE 3PAR drivers includes the ability to set the
|
||||
following capabilities in the OpenStack Block Storage API
|
||||
``cinder.api.contrib.types_extra_specs`` volume type extra specs extension
|
||||
module:
|
||||
|
||||
* ``hpe3par:snap_cpg``
|
||||
|
||||
* ``hpe3par:provisioning``
|
||||
|
||||
* ``hpe3par:persona``
|
||||
|
||||
* ``hpe3par:vvs``
|
||||
|
||||
* ``hpe3par:flash_cache``
|
||||
|
||||
To work with the default filter scheduler, the key values are case sensitive
|
||||
and scoped with ``hpe3par:``. For information about how to set the key-value
|
||||
pairs and associate them with a volume type, run the following command:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack help volume type
|
||||
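For example, the following minimal sketch creates a volume type that requests
deduplicated provisioning and the flash-cache policy; the type name
``3par-dedup`` is illustrative:

.. code-block:: console

   $ openstack volume type create 3par-dedup
   $ openstack volume type set --property hpe3par:provisioning=dedup 3par-dedup
   $ openstack volume type set --property hpe3par:flash_cache=true 3par-dedup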
|
||||
.. note::
|
||||
|
||||
Volumes that are cloned only support the extra specs keys cpg, snap_cpg,
|
||||
provisioning and vvs. The others are ignored. In addition the comments
|
||||
section of the cloned volume in the HPE 3PAR StoreServ storage array is
|
||||
not populated.
|
||||
|
||||
If volume types are not used or a particular key is not set for a volume type,
|
||||
the following defaults are used:
|
||||
|
||||
* ``hpe3par:cpg`` - Defaults to the ``hpe3par_cpg`` setting in the
|
||||
``cinder.conf`` file.
|
||||
|
||||
* ``hpe3par:snap_cpg`` - Defaults to the ``hpe3par_snap_cpg`` setting in
the ``cinder.conf`` file. If ``hpe3par_snap_cpg`` is not set, it defaults
to the ``hpe3par_cpg`` setting.
|
||||
|
||||
* ``hpe3par:provisioning`` - Defaults to ``thin`` provisioning, the valid
|
||||
values are ``thin``, ``full``, and ``dedup``.
|
||||
|
||||
* ``hpe3par:persona`` - Defaults to the ``2 - Generic-ALUA`` persona. The
|
||||
valid values are:
|
||||
|
||||
* ``1 - Generic``
|
||||
* ``2 - Generic-ALUA``
|
||||
* ``3 - Generic-legacy``
|
||||
* ``4 - HPUX-legacy``
|
||||
* ``5 - AIX-legacy``
|
||||
* ``6 - EGENERA``
|
||||
* ``7 - ONTAP-legacy``
|
||||
* ``8 - VMware``
|
||||
* ``9 - OpenVMS``
|
||||
* ``10 - HPUX``
|
||||
* ``11 - WindowsServer``
|
||||
|
||||
* ``hpe3par:flash_cache`` - Defaults to ``false``, the valid values are
|
||||
``true`` and ``false``.
|
||||
|
||||
QoS support for both HPE 3PAR drivers includes the ability to set the
|
||||
following capabilities in the OpenStack Block Storage API
|
||||
``cinder.api.contrib.qos_specs_manage`` qos specs extension module:
|
||||
|
||||
* ``minBWS``
|
||||
|
||||
* ``maxBWS``
|
||||
|
||||
* ``minIOPS``
|
||||
|
||||
* ``maxIOPS``
|
||||
|
||||
* ``latency``
|
||||
|
||||
* ``priority``
|
||||
|
||||
The QoS keys above no longer need to be scoped, but they must be created and
associated with a volume type. For information about how to set the key-value
pairs and associate them with a volume type, run the following command:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack help volume qos
|
||||
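For example, the following minimal sketch creates a QoS spec with IOPS limits
(set together, as noted later in this section) and associates it with an
existing volume type; the spec name ``3par-qos`` and the type name
``3par-dedup`` are illustrative:

.. code-block:: console

   $ openstack volume qos create --property minIOPS=500 --property maxIOPS=1000 3par-qos
   $ openstack volume qos associate 3par-qos 3par-dedup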
|
||||
The following keys require that the HPE 3PAR StoreServ storage array has a
|
||||
Priority Optimization license installed.
|
||||
|
||||
``hpe3par:vvs``
|
||||
The virtual volume set name that has been predefined by the Administrator
|
||||
with quality of service (QoS) rules associated to it. If you specify
|
||||
extra_specs ``hpe3par:vvs``, the qos_specs ``minIOPS``, ``maxIOPS``,
|
||||
``minBWS``, and ``maxBWS`` settings are ignored.
|
||||
|
||||
``minBWS``
|
||||
The QoS I/O issue bandwidth minimum goal in MBs. If not set, the I/O issue
|
||||
bandwidth rate has no minimum goal.
|
||||
|
||||
``maxBWS``
|
||||
The QoS I/O issue bandwidth rate limit in MBs. If not set, the I/O issue
|
||||
bandwidth rate has no limit.
|
||||
|
||||
``minIOPS``
|
||||
The QoS I/O issue count minimum goal. If not set, the I/O issue count has no
|
||||
minimum goal.
|
||||
|
||||
``maxIOPS``
|
||||
The QoS I/O issue count rate limit. If not set, the I/O issue count rate has
|
||||
no limit.
|
||||
|
||||
``latency``
|
||||
The latency goal in milliseconds.
|
||||
|
||||
``priority``
|
||||
The priority of the QoS rule over other rules. If not set, the priority is
|
||||
``normal``, valid values are ``low``, ``normal`` and ``high``.
|
||||
|
||||
.. note::
|
||||
|
||||
Since the Icehouse release, minIOPS and maxIOPS must be used together to
|
||||
set I/O limits. Similarly, minBWS and maxBWS must be used together. If only
|
||||
one is set the other will be set to the same value.
|
||||
|
||||
The following key requires that the HPE 3PAR StoreServ storage array has an
|
||||
Adaptive Flash Cache license installed.
|
||||
|
||||
* ``hpe3par:flash_cache`` - The flash-cache policy, which can be turned on and
|
||||
off by setting the value to ``true`` or ``false``.
|
||||
|
||||
LDAP and AD authentication is now supported in the HPE 3PAR driver.
|
||||
|
||||
The 3PAR back end must be properly configured for LDAP and AD authentication
|
||||
prior to configuring the volume driver. For details on setting up LDAP with
|
||||
3PAR, see the 3PAR user guide.
|
||||
|
||||
Once configured, ``hpe3par_username`` and ``hpe3par_password`` parameters in
|
||||
``cinder.conf`` can be used with LDAP and AD credentials.
|
||||
|
||||
Enable the HPE 3PAR Fibre Channel and iSCSI drivers
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The ``HPE3PARFCDriver`` and ``HPE3PARISCSIDriver`` are installed with the
|
||||
OpenStack software.
|
||||
|
||||
#. Install the ``python-3parclient`` Python package on the OpenStack Block
|
||||
Storage system.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ pip install 'python-3parclient>=4.0,<5.0'
|
||||
|
||||
|
||||
#. Verify that the HPE 3PAR Web Services API server is enabled and running on
|
||||
the HPE 3PAR storage system.
|
||||
|
||||
a. Log on to the HPE 3PAR storage system with administrator access.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ ssh 3paradm@<HP 3PAR IP Address>
|
||||
|
||||
b. View the current state of the Web Services API Server.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ showwsapi
|
||||
-Service- -State- -HTTP_State- HTTP_Port -HTTPS_State- HTTPS_Port -Version-
|
||||
Enabled Active Enabled 8008 Enabled 8080 1.1
|
||||
|
||||
c. If the Web Services API Server is disabled, start it.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ startwsapi
|
||||
|
||||
#. If the HTTP or HTTPS state is disabled, enable one of them.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ setwsapi -http enable
|
||||
|
||||
or
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ setwsapi -https enable
|
||||
|
||||
.. note::
|
||||
|
||||
To stop the Web Services API Server, use the :command:`stopwsapi` command. For
|
||||
other options run the :command:`setwsapi -h` command.
|
||||
|
||||
#. If you are not using an existing CPG, create a CPG on the HPE 3PAR storage
|
||||
system to be used as the default location for creating volumes.
|
||||
|
||||
#. Make the following changes in the ``/etc/cinder/cinder.conf`` file.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
# 3PAR WS API Server URL
|
||||
hpe3par_api_url=https://10.10.0.141:8080/api/v1
|
||||
|
||||
# 3PAR username with the 'edit' role
|
||||
hpe3par_username=edit3par
|
||||
|
||||
# 3PAR password for the user specified in hpe3par_username
|
||||
hpe3par_password=3parpass
|
||||
|
||||
# 3PAR CPG to use for volume creation
|
||||
hpe3par_cpg=OpenStackCPG_RAID5_NL
|
||||
|
||||
# IP address of SAN controller for SSH access to the array
|
||||
san_ip=10.10.22.241
|
||||
|
||||
# Username for SAN controller for SSH access to the array
|
||||
san_login=3paradm
|
||||
|
||||
# Password for SAN controller for SSH access to the array
|
||||
san_password=3parpass
|
||||
|
||||
# FIBRE CHANNEL (uncomment the next line to enable the FC driver)
|
||||
# volume_driver=cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver
|
||||
|
||||
# iSCSI (uncomment the next line to enable the iSCSI driver and
|
||||
# hpe3par_iscsi_ips or iscsi_ip_address)
|
||||
#volume_driver=cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
|
||||
|
||||
# iSCSI multiple port configuration
|
||||
# hpe3par_iscsi_ips=10.10.220.253:3261,10.10.222.234
|
||||
|
||||
# Still available for single port iSCSI configuration
|
||||
#iscsi_ip_address=10.10.220.253
|
||||
|
||||
|
||||
# Enable HTTP debugging to 3PAR
|
||||
hpe3par_debug=False
|
||||
|
||||
# Enable CHAP authentication for iSCSI connections.
|
||||
hpe3par_iscsi_chap_enabled=false
|
||||
|
||||
# The CPG to use for Snapshots for volumes. If empty hpe3par_cpg will be
|
||||
# used.
|
||||
hpe3par_snap_cpg=OpenStackSNAP_CPG
|
||||
|
||||
# Time in hours to retain a snapshot. You can't delete it before this
|
||||
# expires.
|
||||
hpe3par_snapshot_retention=48
|
||||
|
||||
# Time in hours when a snapshot expires and is deleted. This must be
|
||||
# larger than retention.
|
||||
hpe3par_snapshot_expiration=72
|
||||
|
||||
# The ratio of oversubscription when thin provisioned volumes are
|
||||
# involved. Default ratio is 20.0, this means that a provisioned
|
||||
# capacity can be 20 times of the total physical capacity.
|
||||
max_over_subscription_ratio=20.0
|
||||
|
||||
# This flag represents the percentage of reserved back-end capacity.
|
||||
reserved_percentage=15
|
||||
|
||||
.. note::
|
||||
|
||||
You can enable only one driver on each cinder instance unless you enable
|
||||
multiple back-end support. See the Cinder multiple back-end support
|
||||
instructions to enable this feature.
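
For reference, a minimal multiple back-end layout in ``cinder.conf`` looks like
the following sketch; the back-end section names are arbitrary, the second
array address is hypothetical, and each section carries its own driver options:

.. code-block:: ini

   # Hypothetical back-end section names; adapt to your deployment.
   enabled_backends = 3parfc-1, 3parfc-2

   [3parfc-1]
   volume_driver = cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver
   volume_backend_name = 3parfc
   hpe3par_api_url = https://10.10.0.141:8080/api/v1

   [3parfc-2]
   volume_driver = cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver
   volume_backend_name = 3parfc
   hpe3par_api_url = https://10.10.0.142:8080/api/v1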
|
||||
|
||||
.. note::
|
||||
|
||||
You can configure one or more iSCSI addresses by using the
|
||||
``hpe3par_iscsi_ips`` option. Separate multiple IP addresses with a
|
||||
comma (``,``). When you configure multiple addresses, the driver selects
|
||||
the iSCSI port with the fewest active volumes at attach time. The 3PAR
|
||||
array does not allow the default port 3260 to be changed, so IP ports
|
||||
need not be specified.
|
||||
|
||||
#. Save the changes to the ``cinder.conf`` file and restart the ``cinder-volume``
   service.
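
For example, on systems that use the ``service`` command (as in the restart
steps elsewhere in this guide), the restart can be done with:

.. code-block:: console

   # service cinder-volume restart

On systemd-based distributions the unit name may differ, for example
``openstack-cinder-volume``.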
|
||||
|
||||
The HPE 3PAR Fibre Channel and iSCSI drivers are now enabled on your
|
||||
OpenStack system. If you experience problems, review the Block Storage
|
||||
service log files for errors.
|
||||
|
||||
The following table contains all the configuration options supported by
|
||||
the HPE 3PAR Fibre Channel and iSCSI drivers.
|
||||
|
||||
.. include:: ../../tables/cinder-hpe3par.rst
|
@ -0,0 +1,216 @@
|
||||
================================
|
||||
HPE LeftHand/StoreVirtual driver
|
||||
================================
|
||||
|
||||
The ``HPELeftHandISCSIDriver`` is based on the Block Storage service plug-in
architecture. Volume operations are run by communicating with the HPE
LeftHand/StoreVirtual system over HTTPS or SSH connections. HTTPS
communications use the ``python-lefthandclient``, which is available from the
Python Package Index (PyPI) and must be installed separately.
|
||||
|
||||
The ``HPELeftHandISCSIDriver`` can be configured to run using a REST client to
communicate with the array. For performance improvements and new functionality,
the ``python-lefthandclient`` must be downloaded, and HPE LeftHand/StoreVirtual
Operating System software version 11.5 or higher is required on the array. To
|
||||
configure the driver in standard mode, see
|
||||
`HPE LeftHand/StoreVirtual REST driver`_.
|
||||
|
||||
For information about how to manage HPE LeftHand/StoreVirtual storage systems,
|
||||
see the HPE LeftHand/StoreVirtual user documentation.
|
||||
|
||||
HPE LeftHand/StoreVirtual REST driver
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This section describes how to configure the HPE LeftHand/StoreVirtual Block
|
||||
Storage driver.
|
||||
|
||||
System requirements
|
||||
-------------------
|
||||
|
||||
To use the HPE LeftHand/StoreVirtual driver, do the following:
|
||||
|
||||
* Install LeftHand/StoreVirtual Operating System software version 11.5 or
|
||||
higher on the HPE LeftHand/StoreVirtual storage system.
|
||||
|
||||
* Create a cluster group.
|
||||
|
||||
* Install the ``python-lefthandclient`` version 2.1.0 from the Python Package
  Index on the system where the Block Storage service
  volume drivers are enabled.
|
||||
|
||||
Supported operations
|
||||
--------------------
|
||||
|
||||
* Create, delete, attach, and detach volumes.
|
||||
|
||||
* Create, list, and delete volume snapshots.
|
||||
|
||||
* Create a volume from a snapshot.
|
||||
|
||||
* Copy an image to a volume.
|
||||
|
||||
* Copy a volume to an image.
|
||||
|
||||
* Clone a volume.
|
||||
|
||||
* Extend a volume.
|
||||
|
||||
* Get volume statistics.
|
||||
|
||||
* Migrate a volume with back-end assistance.
|
||||
|
||||
* Retype a volume.
|
||||
|
||||
* Manage and unmanage a volume.
|
||||
|
||||
* Manage and unmanage a snapshot.
|
||||
|
||||
* Replicate host volumes.
|
||||
|
||||
* Fail-over host volumes.
|
||||
|
||||
* Fail-back host volumes.
|
||||
|
||||
* Create, delete, update, and snapshot consistency groups.
|
||||
|
||||
When you use back-end assisted volume migration, both the source and destination
clusters must be in the same HPE LeftHand/StoreVirtual management group.
The HPE LeftHand/StoreVirtual array uses native LeftHand APIs to migrate
the volume. A volume cannot be migrated while it is attached or has snapshots.
|
||||
|
||||
Volume type support for the driver includes the ability to set the
|
||||
following capabilities in the Block Storage API
|
||||
``cinder.api.contrib.types_extra_specs`` volume type extra specs
|
||||
extension module.
|
||||
|
||||
* ``hpelh:provisioning``
|
||||
|
||||
* ``hpelh:ao``
|
||||
|
||||
* ``hpelh:data_pl``
|
||||
|
||||
To work with the default filter scheduler, the key-value pairs are
|
||||
case-sensitive and scoped with ``hpelh:``. For information about how to set
|
||||
the key-value pairs and associate them with a volume type, run the following
|
||||
command:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack help volume type
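
As a hypothetical example (the volume type name ``lefthand-gold`` is
arbitrary), the extra specs listed above could be attached to a volume type
like this:

.. code-block:: console

   $ openstack volume type create lefthand-gold
   $ openstack volume type set \
     --property hpelh:provisioning=full \
     --property hpelh:ao=true \
     --property hpelh:data_pl=r-10-2 \
     lefthand-gold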
|
||||
|
||||
* The following keys require that the HPE LeftHand/StoreVirtual storage
  array be configured as follows:
|
||||
|
||||
``hpelh:ao``
|
||||
The HPE LeftHand/StoreVirtual storage array must be configured for
|
||||
Adaptive Optimization.
|
||||
|
||||
``hpelh:data_pl``
|
||||
The HPE LeftHand/StoreVirtual storage array must be able to support the
|
||||
Data Protection level specified by the extra spec.
|
||||
|
||||
* If volume types are not used or a particular key is not set for a volume
|
||||
type, the following defaults are used:
|
||||
|
||||
``hpelh:provisioning``
Defaults to ``thin`` provisioning. The valid values are ``thin`` and
``full``.
|
||||
|
||||
``hpelh:ao``
Defaults to ``true``. The valid values are ``true`` and ``false``.
|
||||
|
||||
``hpelh:data_pl``
Defaults to ``r-0``, Network RAID-0 (None). The valid values are:
|
||||
|
||||
* ``r-0``, Network RAID-0 (None)
|
||||
|
||||
* ``r-5``, Network RAID-5 (Single Parity)
|
||||
|
||||
* ``r-10-2``, Network RAID-10 (2-Way Mirror)
|
||||
|
||||
* ``r-10-3``, Network RAID-10 (3-Way Mirror)
|
||||
|
||||
* ``r-10-4``, Network RAID-10 (4-Way Mirror)
|
||||
|
||||
* ``r-6``, Network RAID-6 (Dual Parity)
|
||||
|
||||
Enable the HPE LeftHand/StoreVirtual iSCSI driver
|
||||
-------------------------------------------------
|
||||
|
||||
The ``HPELeftHandISCSIDriver`` is installed with the OpenStack software.
|
||||
|
||||
#. Install the ``python-lefthandclient`` Python package on the OpenStack Block
|
||||
Storage system.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ pip install 'python-lefthandclient>=2.1,<3.0'
|
||||
|
||||
#. If you are not using an existing cluster, create a cluster on the HPE
|
||||
LeftHand storage system to be used as the cluster for creating volumes.
|
||||
|
||||
#. Make the following changes in the ``/etc/cinder/cinder.conf`` file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
# LeftHand WS API Server URL
|
||||
hpelefthand_api_url=https://10.10.0.141:8081/lhos
|
||||
|
||||
# LeftHand Super user username
|
||||
hpelefthand_username=lhuser
|
||||
|
||||
# LeftHand Super user password
|
||||
hpelefthand_password=lhpass
|
||||
|
||||
# LeftHand cluster to use for volume creation
|
||||
hpelefthand_clustername=ClusterLefthand
|
||||
|
||||
# LeftHand iSCSI driver
|
||||
volume_driver=cinder.volume.drivers.hpe.hpe_lefthand_iscsi.HPELeftHandISCSIDriver
|
||||
|
||||
# Should CHAP authentication be used (default=false)
|
||||
hpelefthand_iscsi_chap_enabled=false
|
||||
|
||||
# Enable HTTP debugging to LeftHand (default=false)
|
||||
hpelefthand_debug=false
|
||||
|
||||
# The ratio of oversubscription when thin-provisioned volumes are
# involved. The default ratio is 20.0, which means that the provisioned capacity
# can be 20 times the total physical capacity.
|
||||
max_over_subscription_ratio=20.0
|
||||
|
||||
# This flag represents the percentage of reserved back-end capacity.
|
||||
reserved_percentage=15
|
||||
|
||||
You can enable only one driver on each cinder instance unless you enable
|
||||
multiple back end support. See the Cinder multiple back end support
|
||||
instructions to enable this feature.
|
||||
|
||||
If the ``hpelefthand_iscsi_chap_enabled`` option is set to ``true``, the driver
|
||||
will associate randomly-generated CHAP secrets with all hosts on the HPE
|
||||
LeftHand/StoreVirtual system. OpenStack Compute nodes use these secrets
|
||||
when creating iSCSI connections.
|
||||
|
||||
.. important::
|
||||
|
||||
CHAP secrets are passed from OpenStack Block Storage to Compute in clear
|
||||
text. This communication should be secured to ensure that CHAP secrets
|
||||
are not discovered.
|
||||
|
||||
.. note::
|
||||
|
||||
CHAP secrets are added to existing hosts as well as newly-created ones.
|
||||
If the CHAP option is enabled, hosts will not be able to access the
|
||||
storage without the generated secrets.
|
||||
|
||||
#. Save the changes to the ``cinder.conf`` file and restart the
|
||||
``cinder-volume`` service.
|
||||
|
||||
The HPE LeftHand/StoreVirtual driver is now enabled on your OpenStack system.
|
||||
If you experience problems, review the Block Storage service log files for
|
||||
errors.
|
||||
|
||||
.. note::
|
||||
Previous versions implemented an HPE LeftHand/StoreVirtual CLIQ driver that
enabled the Block Storage service driver configuration in legacy mode. This
driver was removed in the Mitaka release.
|
@ -0,0 +1,516 @@
|
||||
====================
|
||||
Huawei volume driver
|
||||
====================
|
||||
|
||||
The Huawei volume driver provides Block Storage functions, such as logical
volumes and snapshots, for virtual machines (VMs) in OpenStack, and supports
both the iSCSI and Fibre Channel protocols.
|
||||
|
||||
Version mappings
|
||||
~~~~~~~~~~~~~~~~
|
||||
|
||||
The following table describes the version mappings among the Block Storage
|
||||
driver, Huawei storage system and OpenStack:
|
||||
|
||||
.. list-table:: **Version mappings among the Block Storage driver and Huawei
|
||||
storage system**
|
||||
:widths: 30 35
|
||||
:header-rows: 1
|
||||
|
||||
* - Description
|
||||
- Storage System Version
|
||||
* - Create, delete, expand, attach, detach, manage and unmanage volumes
|
||||
|
||||
Create volumes with assigned storage pools
|
||||
|
||||
Create volumes with assigned disk types
|
||||
|
||||
Create, delete and update a consistency group
|
||||
|
||||
Copy an image to a volume
|
||||
|
||||
Copy a volume to an image
|
||||
|
||||
Auto Zoning
|
||||
|
||||
SmartThin
|
||||
|
||||
Volume Migration
|
||||
|
||||
Replication V2.1
|
||||
|
||||
Create, delete, manage, unmanage and backup snapshots
|
||||
|
||||
Create and delete a cgsnapshot
|
||||
- OceanStor T series V2R2 C00/C20/C30
|
||||
|
||||
OceanStor V3 V3R1C10/C20 V3R2C10 V3R3C00/C10/C20
|
||||
|
||||
OceanStor 2200V3 V300R005C00
|
||||
|
||||
OceanStor 2600V3 V300R005C00
|
||||
|
||||
OceanStor 18500/18800 V1R1C00/C20/C30 V3R3C00
|
||||
|
||||
OceanStor Dorado V300R001C00
|
||||
|
||||
OceanStor V3 V300R006C00
|
||||
|
||||
OceanStor 2200V3 V300R006C00
|
||||
|
||||
OceanStor 2600V3 V300R006C00
|
||||
* - Clone a volume
|
||||
|
||||
Create volume from snapshot
|
||||
|
||||
Retype
|
||||
|
||||
SmartQoS
|
||||
|
||||
SmartTier
|
||||
|
||||
SmartCache
|
||||
|
||||
Thick
|
||||
- OceanStor T series V2R2 C00/C20/C30
|
||||
|
||||
OceanStor V3 V3R1C10/C20 V3R2C10 V3R3C00/C10/C20
|
||||
|
||||
OceanStor 2200V3 V300R005C00
|
||||
|
||||
OceanStor 2600V3 V300R005C00
|
||||
|
||||
OceanStor 18500/18800V1R1C00/C20/C30
|
||||
|
||||
OceanStor V3 V300R006C00
|
||||
|
||||
OceanStor 2200V3 V300R006C00
|
||||
|
||||
OceanStor 2600V3 V300R006C00
|
||||
* - SmartPartition
|
||||
- OceanStor T series V2R2 C00/C20/C30
|
||||
|
||||
OceanStor V3 V3R1C10/C20 V3R2C10 V3R3C00/C10/C20
|
||||
|
||||
OceanStor 2600V3 V300R005C00
|
||||
|
||||
OceanStor 18500/18800V1R1C00/C20/C30
|
||||
|
||||
OceanStor V3 V300R006C00
|
||||
|
||||
OceanStor 2600V3 V300R006C00
|
||||
* - Hypermetro
|
||||
|
||||
Hypermetro consistency group
|
||||
- OceanStor V3 V3R3C00/C10/C20
|
||||
|
||||
OceanStor 2600V3 V3R5C00
|
||||
|
||||
OceanStor 18500/18800 V3R3C00
|
||||
|
||||
OceanStor Dorado V300R001C00
|
||||
|
||||
OceanStor V3 V300R006C00
|
||||
|
||||
OceanStor 2600V3 V300R006C00
|
||||
|
||||
Volume driver configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This section describes how to configure the Huawei volume driver for either
|
||||
iSCSI storage or Fibre Channel storage.
|
||||
|
||||
**Prerequisites**

When creating a volume from an image, install the ``multipath`` tool and add the
following configuration keys in the ``[DEFAULT]`` configuration group of
the ``/etc/cinder/cinder.conf`` file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
use_multipath_for_image_xfer = True
|
||||
enforce_multipath_for_image_xfer = True
|
||||
|
||||
To configure the volume driver, follow the steps below:
|
||||
|
||||
#. In ``/etc/cinder``, create a Huawei-customized driver configuration file.
|
||||
The file format is XML.
|
||||
#. Change the name of the driver configuration file based on the site
|
||||
requirements, for example, ``cinder_huawei_conf.xml``.
|
||||
#. Configure parameters in the driver configuration file.
|
||||
|
||||
Each product has its own value for the ``Product`` parameter under the
``Storage`` XML block. The full XML file with the appropriate ``Product``
parameter is shown below:
|
||||
|
||||
.. code-block:: xml
|
||||
|
||||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<config>
|
||||
<Storage>
|
||||
<Product>PRODUCT</Product>
|
||||
<Protocol>PROTOCOL</Protocol>
|
||||
<UserName>xxxxxxxx</UserName>
|
||||
<UserPassword>xxxxxxxx</UserPassword>
|
||||
<RestURL>https://x.x.x.x:8088/deviceManager/rest/</RestURL>
|
||||
</Storage>
|
||||
<LUN>
|
||||
<LUNType>xxx</LUNType>
|
||||
<WriteType>xxx</WriteType>
|
||||
<Prefetch Type="xxx" Value="xxx" />
|
||||
<StoragePool>xxx</StoragePool>
|
||||
</LUN>
|
||||
<iSCSI>
|
||||
<DefaultTargetIP>x.x.x.x</DefaultTargetIP>
|
||||
<Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
|
||||
</iSCSI>
|
||||
<Host OSType="Linux" HostIP="x.x.x.x, x.x.x.x"/>
|
||||
</config>
|
||||
|
||||
The corresponding ``Product`` values for each product are as below:
|
||||
|
||||
|
||||
* **For T series V2**
|
||||
|
||||
.. code-block:: xml
|
||||
|
||||
<Product>TV2</Product>
|
||||
|
||||
* **For V3**
|
||||
|
||||
.. code-block:: xml
|
||||
|
||||
<Product>V3</Product>
|
||||
|
||||
* **For OceanStor 18000 series**
|
||||
|
||||
.. code-block:: xml
|
||||
|
||||
<Product>18000</Product>
|
||||
|
||||
* **For OceanStor Dorado series**
|
||||
|
||||
.. code-block:: xml
|
||||
|
||||
<Product>Dorado</Product>
|
||||
|
||||
The ``Protocol`` value to be used is ``iSCSI`` for iSCSI and ``FC`` for
|
||||
Fibre Channel as shown below:
|
||||
|
||||
.. code-block:: xml
|
||||
|
||||
<!-- For iSCSI -->
|
||||
<Protocol>iSCSI</Protocol>
|
||||
|
||||
<!-- For Fibre Channel -->
|
||||
<Protocol>FC</Protocol>
|
||||
|
||||
.. note::
|
||||
|
||||
For details about the parameters in the configuration file, see the
|
||||
`Configuration file parameters`_ section.
|
||||
|
||||
#. Configure the ``cinder.conf`` file.
|
||||
|
||||
In the ``[DEFAULT]`` block of ``/etc/cinder/cinder.conf``,
enable the ``VOLUME_BACKEND``:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
enabled_backends = VOLUME_BACKEND
|
||||
|
||||
|
||||
Add a new block ``[VOLUME_BACKEND]``, and add the following contents:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[VOLUME_BACKEND]
|
||||
volume_driver = VOLUME_DRIVER
|
||||
cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf.xml
|
||||
volume_backend_name = Huawei_Storage
|
||||
|
||||
* ``volume_driver`` indicates the loaded driver.
|
||||
|
||||
* ``cinder_huawei_conf_file`` indicates the specified Huawei-customized
|
||||
configuration file.
|
||||
|
||||
* ``volume_backend_name`` indicates the name of the backend.
|
||||
|
||||
Add information about remote devices in ``/etc/cinder/cinder.conf``
|
||||
in target backend block for ``Hypermetro``.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[VOLUME_BACKEND]
|
||||
volume_driver = VOLUME_DRIVER
|
||||
cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf.xml
|
||||
volume_backend_name = Huawei_Storage
|
||||
metro_san_user = xxx
|
||||
metro_san_password = xxx
|
||||
metro_domain_name = xxx
|
||||
metro_san_address = https://x.x.x.x:8088/deviceManager/rest/
|
||||
metro_storage_pools = xxx
|
||||
|
||||
Add information about remote devices in ``/etc/cinder/cinder.conf``
|
||||
in target backend block for ``Replication``.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[VOLUME_BACKEND]
|
||||
volume_driver = VOLUME_DRIVER
|
||||
cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf.xml
|
||||
volume_backend_name = Huawei_Storage
|
||||
replication_device =
|
||||
backend_id: xxx,
|
||||
storage_pool: xxx,
|
||||
san_address: https://x.x.x.x:8088/deviceManager/rest/,
|
||||
san_user: xxx,
|
||||
san_password: xxx,
|
||||
iscsi_default_target_ip: x.x.x.x
|
||||
|
||||
.. note::
|
||||
|
||||
By default, the value for ``Hypermetro`` and ``Replication`` is
|
||||
``None``. For details about the parameters in the configuration file,
|
||||
see the `Configuration file parameters`_ section.
|
||||
|
||||
The ``volume_driver`` value for each protocol is shown below:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
# For iSCSI
|
||||
volume_driver = cinder.volume.drivers.huawei.huawei_driver.HuaweiISCSIDriver
|
||||
|
||||
# For FC
|
||||
volume_driver = cinder.volume.drivers.huawei.huawei_driver.HuaweiFCDriver
|
||||
|
||||
#. Run the :command:`service cinder-volume restart` command to restart the
|
||||
Block Storage service.
|
||||
|
||||
Configuring iSCSI Multipathing
|
||||
------------------------------
|
||||
|
||||
To configure iSCSI Multipathing, follow the steps below:
|
||||
|
||||
#. Add the port group settings in the Huawei-customized driver configuration
|
||||
file and configure the port group name needed by an initiator.
|
||||
|
||||
.. code-block:: xml
|
||||
|
||||
<iSCSI>
|
||||
<DefaultTargetIP>x.x.x.x</DefaultTargetIP>
|
||||
<Initiator Name="xxxxxx" TargetPortGroup="xxxx" />
|
||||
</iSCSI>
|
||||
|
||||
#. Enable the multipathing switch of the Compute service module.
|
||||
|
||||
Add ``volume_use_multipath = True`` in ``[libvirt]`` of
|
||||
``/etc/nova/nova.conf``.
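
A minimal sketch of that ``nova.conf`` setting:

.. code-block:: ini

   [libvirt]
   volume_use_multipath = True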
|
||||
|
||||
#. Run the :command:`service nova-compute restart` command to restart the
|
||||
``nova-compute`` service.
|
||||
|
||||
Configuring FC Multipathing
|
||||
------------------------------
|
||||
|
||||
To configure FC Multipathing, follow the steps below:
|
||||
|
||||
#. Enable the multipathing switch of the Compute service module.
|
||||
|
||||
Add ``volume_use_multipath = True`` in ``[libvirt]`` of
|
||||
``/etc/nova/nova.conf``.
|
||||
|
||||
#. Run the :command:`service nova-compute restart` command to restart the
|
||||
``nova-compute`` service.
|
||||
|
||||
Configuring CHAP and ALUA
|
||||
-------------------------
|
||||
|
||||
On a public network, any application server whose IP address resides on the
same network segment as that of the storage system's iSCSI host port can access
the storage system and perform read and write operations on it. This poses
risks to the data security of the storage system. To ensure the storage
system's access security, you can configure ``CHAP`` authentication to control
application servers' access to the storage system.
|
||||
|
||||
Adjust the driver configuration file as follows:
|
||||
|
||||
.. code-block:: xml
|
||||
|
||||
<Initiator ALUA="xxx" CHAPinfo="xxx" Name="xxx" TargetIP="x.x.x.x"/>
|
||||
|
||||
``ALUA`` indicates a multipathing mode. 0 indicates that ``ALUA`` is disabled.
|
||||
1 indicates that ``ALUA`` is enabled. ``CHAPinfo`` indicates the user name and
|
||||
password authenticated by ``CHAP``. The format is ``mmuser; mm-user@storage``.
The user name and password are separated by a semicolon (``;``).
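
For example, an initiator entry with ALUA enabled and the CHAP user name and
password from the format above might look as follows; the initiator name and
target IP are placeholders:

.. code-block:: xml

   <Initiator ALUA="1" CHAPinfo="mmuser; mm-user@storage"
              Name="iqn.1993-08.org.debian:01:ec2bff7ac3a3" TargetIP="192.168.100.2"/>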
|
||||
|
||||
Configuring multiple storage
|
||||
----------------------------
|
||||
|
||||
Multiple storage systems configuration example:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
enabled_backends = v3_fc, 18000_fc
|
||||
[v3_fc]
|
||||
volume_driver = cinder.volume.drivers.huawei.huawei_driver.HuaweiFCDriver
|
||||
cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf_v3_fc.xml
|
||||
volume_backend_name = huawei_v3_fc
|
||||
[18000_fc]
|
||||
volume_driver = cinder.volume.drivers.huawei.huawei_driver.HuaweiFCDriver
|
||||
cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf_18000_fc.xml
|
||||
volume_backend_name = huawei_18000_fc
|
||||
|
||||
Configuration file parameters
|
||||
-----------------------------
|
||||
|
||||
This section describes mandatory and optional configuration file parameters
|
||||
of the Huawei volume driver.
|
||||
|
||||
.. list-table:: **Mandatory parameters**
|
||||
:widths: 10 10 50 10
|
||||
:header-rows: 1
|
||||
|
||||
* - Parameter
|
||||
- Default value
|
||||
- Description
|
||||
- Applicable to
|
||||
* - Product
|
||||
- ``-``
|
||||
- Type of a storage product. Possible values are ``TV2``, ``18000`` and
|
||||
``V3``.
|
||||
- All
|
||||
* - Protocol
|
||||
- ``-``
|
||||
- Type of a connection protocol. The possible value is either ``'iSCSI'``
|
||||
or ``'FC'``.
|
||||
- All
|
||||
* - RestURL
|
||||
- ``-``
|
||||
- Access address of the REST interface,
|
||||
``https://x.x.x.x/devicemanager/rest/``. The value ``x.x.x.x`` indicates
|
||||
the management IP address. OceanStor 18000 uses the preceding setting,
|
||||
and V2 and V3 require you to add the port number ``8088``, for example,
|
||||
``https://x.x.x.x:8088/deviceManager/rest/``. If you need to configure
|
||||
multiple RestURL, separate them by semicolons (;).
|
||||
- All
|
||||
* - UserName
|
||||
- ``-``
|
||||
- User name of a storage administrator.
|
||||
- All
|
||||
* - UserPassword
|
||||
- ``-``
|
||||
- Password of a storage administrator.
|
||||
- All
|
||||
* - StoragePool
|
||||
- ``-``
|
||||
- Name of a storage pool to be used. If you need to configure multiple
|
||||
storage pools, separate them by semicolons (``;``).
|
||||
- All
|
||||
|
||||
.. note::
|
||||
|
||||
The value of ``StoragePool`` cannot contain Chinese characters.
|
||||
|
||||
.. list-table:: **Optional parameters**
|
||||
:widths: 20 10 50 15
|
||||
:header-rows: 1
|
||||
|
||||
* - Parameter
|
||||
- Default value
|
||||
- Description
|
||||
- Applicable to
|
||||
* - LUNType
|
||||
- Thick
|
||||
- Type of the LUNs to be created. The value can be ``Thick`` or ``Thin``. The Dorado series supports only ``Thin`` LUNs.
|
||||
- All
|
||||
* - WriteType
|
||||
- 1
|
||||
- Cache write type, possible values are: ``1`` (write back), ``2``
|
||||
(write through), and ``3`` (mandatory write back).
|
||||
- All
|
||||
* - LUNcopyWaitInterval
|
||||
- 5
|
||||
- After LUN copy is enabled, the plug-in frequently queries the copy
|
||||
progress. You can set a value to specify the query interval.
|
||||
- All
|
||||
* - Timeout
|
||||
- 432000
|
||||
- Timeout interval for waiting for a LUN copy on a storage device to complete.
The unit is seconds.
|
||||
- All
|
||||
* - Initiator Name
|
||||
- ``-``
|
||||
- Name of a compute node initiator.
|
||||
- All
|
||||
* - Initiator TargetIP
|
||||
- ``-``
|
||||
- IP address of the iSCSI port provided for compute nodes.
|
||||
- All
|
||||
* - Initiator TargetPortGroup
|
||||
- ``-``
|
||||
- IP address of the iSCSI target port that is provided for compute
|
||||
nodes.
|
||||
- All
|
||||
* - DefaultTargetIP
|
||||
- ``-``
|
||||
- Default IP address of the iSCSI target port that is provided for
|
||||
compute nodes.
|
||||
- All
|
||||
* - OSType
|
||||
- Linux
|
||||
- Operating system of the Nova compute node's host.
|
||||
- All
|
||||
* - HostIP
|
||||
- ``-``
|
||||
- IP address of the Nova compute node's host.
|
||||
- All
|
||||
* - metro_san_user
|
||||
- ``-``
|
||||
- User name of a storage administrator of hypermetro remote device.
|
||||
- V3R3/2600 V3R5/18000 V3R3
|
||||
* - metro_san_password
|
||||
- ``-``
|
||||
- Password of a storage administrator of hypermetro remote device.
|
||||
- V3R3/2600 V3R5/18000 V3R3
|
||||
* - metro_domain_name
|
||||
- ``-``
|
||||
- Hypermetro domain name configured on ISM.
|
||||
- V3R3/2600 V3R5/18000 V3R3
|
||||
* - metro_san_address
|
||||
- ``-``
|
||||
- Access address of the REST interface, https://x.x.x.x/devicemanager/rest/. The value x.x.x.x indicates the management IP address.
|
||||
- V3R3/2600 V3R5/18000 V3R3
|
||||
* - metro_storage_pools
|
||||
- ``-``
|
||||
- Remote storage pool for hypermetro.
|
||||
- V3R3/2600 V3R5/18000 V3R3
|
||||
* - backend_id
|
||||
- ``-``
|
||||
- Target device ID.
|
||||
- All
|
||||
* - storage_pool
|
||||
- ``-``
|
||||
- Pool name of target backend when failover for replication.
|
||||
- All
|
||||
* - san_address
|
||||
- ``-``
|
||||
- Access address of the REST interface, https://x.x.x.x/devicemanager/rest/. The value x.x.x.x indicates the management IP address.
|
||||
- All
|
||||
* - san_user
|
||||
- ``-``
|
||||
- User name of a storage administrator of replication remote device.
|
||||
- All
|
||||
* - san_password
|
||||
- ``-``
|
||||
- Password of a storage administrator of replication remote device.
|
||||
- All
|
||||
* - iscsi_default_target_ip
|
||||
- ``-``
|
||||
- Remote transaction port IP address.
|
||||
- All
|
||||
.. important::
|
||||
|
||||
The ``Initiator Name``, ``Initiator TargetIP``, and
``Initiator TargetPortGroup`` are iSCSI parameters and therefore not
applicable to ``FC``.
|
@ -0,0 +1,242 @@
|
||||
=============================
|
||||
IBM FlashSystem volume driver
|
||||
=============================
|
||||
|
||||
The volume driver for FlashSystem provides OpenStack Block Storage hosts
|
||||
with access to IBM FlashSystems.
|
||||
|
||||
Configure FlashSystem
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Configure storage array
|
||||
-----------------------
|
||||
|
||||
The volume driver requires a pre-defined array. You must create an
array on the FlashSystem before using the volume driver. An existing array
can also be used; existing data is not deleted.
||||
|
||||
.. note::
|
||||
|
||||
FlashSystem can only create one array, so no configuration option is
|
||||
needed for the IBM FlashSystem driver to assign it.
|
||||
|
||||
Configure user authentication for the driver
|
||||
--------------------------------------------
|
||||
|
||||
The driver requires access to the FlashSystem management interface using
|
||||
SSH. It should be provided with the FlashSystem management IP using the
|
||||
``san_ip`` flag, and the management port should be provided by the
|
||||
``san_ssh_port`` flag. By default, the port value is configured to be
|
||||
port 22 (SSH).
|
||||
|
||||
.. note::
|
||||
|
||||
Make sure the compute node running the ``cinder-volume`` driver has SSH
|
||||
network access to the storage system.
|
||||
|
||||
Using password authentication, assign a password to the user on the
|
||||
FlashSystem. For more detail, see the driver configuration flags
|
||||
for the user and password here: :ref:`config_fc_flags`
|
||||
or :ref:`config_iscsi_flags`.
|
||||
|
||||
IBM FlashSystem FC driver
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Data Path configuration
|
||||
-----------------------
|
||||
|
||||
Using Fibre Channel (FC), each FlashSystem node should have at least one
|
||||
WWPN port configured. If the ``flashsystem_multipath_enabled`` flag is
|
||||
set to ``True`` in the Block Storage service configuration file, the driver
|
||||
uses all available WWPNs to attach the volume to the instance. If the flag is
|
||||
not set, the driver uses the WWPN associated with the volume's preferred node
|
||||
(if available). Otherwise, it uses the first available WWPN of the system. The
|
||||
driver obtains the WWPNs directly from the storage system. You do not need to
|
||||
provide these WWPNs to the driver.
|
||||
|
||||
.. note::
|
||||
|
||||
Using FC, ensure that the block storage hosts have FC connectivity
|
||||
to the FlashSystem.
|
||||
|
||||
.. _config_fc_flags:
|
||||
|
||||
Enable IBM FlashSystem FC driver
|
||||
--------------------------------
|
||||
|
||||
Set the volume driver to the FlashSystem driver by setting the
|
||||
``volume_driver`` option in the ``cinder.conf`` configuration file,
|
||||
as follows:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.ibm.flashsystem_fc.FlashSystemFCDriver
|
||||
|
||||
To enable the IBM FlashSystem FC driver, configure the following options in the
|
||||
``cinder.conf`` configuration file:
|
||||
|
||||
.. list-table:: List of configuration flags for IBM FlashSystem FC driver
|
||||
:header-rows: 1
|
||||
|
||||
* - Flag name
|
||||
- Type
|
||||
- Default
|
||||
- Description
|
||||
* - ``san_ip``
|
||||
- Required
|
||||
-
|
||||
- Management IP or host name
|
||||
* - ``san_ssh_port``
|
||||
- Optional
|
||||
- 22
|
||||
- Management port
|
||||
* - ``san_login``
|
||||
- Required
|
||||
-
|
||||
- Management login user name
|
||||
* - ``san_password``
|
||||
- Required
|
||||
-
|
||||
- Management login password
|
||||
* - ``flashsystem_connection_protocol``
|
||||
- Required
|
||||
-
|
||||
- Connection protocol should be set to ``FC``
|
||||
* - ``flashsystem_multipath_enabled``
|
||||
- Required
|
||||
-
|
||||
- Enable multipath for FC connections
|
||||
* - ``flashsystem_multihost_enabled``
|
||||
- Optional
|
||||
- ``True``
|
||||
- Enable mapping vdisks to multiple hosts [1]_
|
||||
|
||||
.. [1]
|
||||
This option allows the driver to map a vdisk to more than one host at
|
||||
a time. This scenario occurs during migration of a virtual machine
|
||||
with an attached volume; the volume is simultaneously mapped to both
|
||||
the source and destination compute hosts. If your deployment does not
|
||||
require attaching vdisks to multiple hosts, setting this flag to
|
||||
``False`` will provide added safety.
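
Putting these flags together, a hypothetical FC back-end section in
``cinder.conf`` could look like the following sketch; all values are
placeholders to adapt to your environment:

.. code-block:: ini

   volume_driver = cinder.volume.drivers.ibm.flashsystem_fc.FlashSystemFCDriver
   san_ip = 192.168.17.30
   san_ssh_port = 22
   san_login = superuser
   san_password = passw0rd
   flashsystem_connection_protocol = FC
   flashsystem_multipath_enabled = False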
|
||||
|
||||
IBM FlashSystem iSCSI driver
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Network configuration
|
||||
---------------------
|
||||
|
||||
Using iSCSI, each FlashSystem node should have at least one iSCSI port
|
||||
configured. The iSCSI IP addresses of the IBM FlashSystem can be obtained from the
FlashSystem GUI or CLI. For more information, see the
|
||||
appropriate IBM Redbook for the FlashSystem.
|
||||
|
||||
.. note::
|
||||
|
||||
Using iSCSI, ensure that the compute nodes have iSCSI network access
|
||||
to the IBM FlashSystem.
|
||||
|
||||
.. _config_iscsi_flags:
|
||||
|
||||
Enable IBM FlashSystem iSCSI driver
|
||||
-----------------------------------
|
||||
|
||||
Set the volume driver to the FlashSystem driver by setting the
|
||||
``volume_driver`` option in the ``cinder.conf`` configuration file, as
|
||||
follows:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.ibm.flashsystem_iscsi.FlashSystemISCSIDriver
|
||||
|
||||
To enable IBM FlashSystem iSCSI driver, configure the following options
|
||||
in the ``cinder.conf`` configuration file:
|
||||
|
||||
|
||||
.. list-table:: List of configuration flags for IBM FlashSystem iSCSI driver
|
||||
:header-rows: 1
|
||||
|
||||
* - Flag name
|
||||
- Type
|
||||
- Default
|
||||
- Description
|
||||
* - ``san_ip``
|
||||
- Required
|
||||
-
|
||||
- Management IP or host name
|
||||
* - ``san_ssh_port``
|
||||
- Optional
|
||||
- 22
|
||||
- Management port
|
||||
* - ``san_login``
|
||||
- Required
|
||||
-
|
||||
- Management login user name
|
||||
* - ``san_password``
|
||||
- Required
|
||||
-
|
||||
- Management login password
|
||||
* - ``flashsystem_connection_protocol``
|
||||
- Required
|
||||
-
|
||||
- Connection protocol should be set to ``iSCSI``
|
||||
* - ``flashsystem_multihost_enabled``
|
||||
- Optional
|
||||
- ``True``
|
||||
- Enable mapping vdisks to multiple hosts [2]_
|
||||
* - ``iscsi_ip_address``
|
||||
- Required
|
||||
-
|
||||
- Set to one of the iSCSI IP addresses obtained by FlashSystem GUI or CLI [3]_
|
||||
* - ``flashsystem_iscsi_portid``
|
||||
- Required
|
||||
-
|
||||
- Set to the id of the ``iscsi_ip_address`` obtained by FlashSystem GUI or CLI [4]_
|
||||
|
||||
.. [2]
|
||||
This option allows the driver to map a vdisk to more than one host at
|
||||
a time. This scenario occurs during migration of a virtual machine
|
||||
with an attached volume; the volume is simultaneously mapped to both
|
||||
the source and destination compute hosts. If your deployment does not
|
||||
require attaching vdisks to multiple hosts, setting this flag to
|
||||
``False`` will provide added safety.
|
||||
|
||||
.. [3]
|
||||
On the cluster of the FlashSystem, the ``iscsi_ip_address`` column is the
|
||||
seventh column ``IP_address`` of the output of ``lsportip``.
|
||||
|
||||
.. [4]
|
||||
On the cluster of the FlashSystem, port ID column is the first
|
||||
column ``id`` of the output of ``lsportip``,
|
||||
not the sixth column ``port_id``.
|
||||
|
||||
Limitations and known issues
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
IBM FlashSystem only works when:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
open_access_enabled=off
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
These operations are supported:
|
||||
|
||||
- Create, delete, attach, and detach volumes.
|
||||
|
||||
- Create, list, and delete volume snapshots.
|
||||
|
||||
- Create a volume from a snapshot.
|
||||
|
||||
- Copy an image to a volume.
|
||||
|
||||
- Copy a volume to an image.
|
||||
|
||||
- Clone a volume.
|
||||
|
||||
- Extend a volume.
|
||||
|
||||
- Get volume statistics.
|
||||
|
||||
- Manage and unmanage a volume.
|
@ -0,0 +1,228 @@
|
||||
================================
|
||||
IBM Spectrum Scale volume driver
|
||||
================================
|
||||
IBM Spectrum Scale is a flexible software-defined storage that can be
|
||||
deployed as high performance file storage or a cost optimized
|
||||
large-scale content repository. IBM Spectrum Scale, previously known as
|
||||
IBM General Parallel File System (GPFS), is designed to scale performance
|
||||
and capacity with no bottlenecks. IBM Spectrum Scale is a cluster file system
|
||||
that provides concurrent access to file systems from multiple nodes. The
|
||||
storage provided by these nodes can be direct attached, network attached,
|
||||
SAN attached, or a combination of these methods. Spectrum Scale provides
|
||||
many features beyond common data access, including data replication,
|
||||
policy based storage management, and space efficient file snapshot and
|
||||
clone operations.
|
||||
|
||||
How the Spectrum Scale volume driver works
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The Spectrum Scale volume driver, named ``gpfs.py``, enables the use of
|
||||
Spectrum Scale in a fashion similar to that of the NFS driver. With
|
||||
the Spectrum Scale driver, instances do not actually access a storage
|
||||
device at the block level. Instead, volume backing files are created
|
||||
in a Spectrum Scale file system and mapped to instances, which emulate
|
||||
a block device.
|
||||
|
||||
.. note::
|
||||
|
||||
Spectrum Scale must be installed and a cluster must be created on the
storage nodes in the OpenStack environment. A file system must also be
created and mounted on these nodes before configuring the cinder service
to use Spectrum Scale storage. For more details, refer to the
`Spectrum Scale product documentation <https://ibm.biz/Bdi84g>`_.
|
||||
|
||||
Optionally, the Image service can be configured to store glance images
|
||||
in a Spectrum Scale file system. When a Block Storage volume is created
|
||||
from an image, if both the image data and volume data reside in the same
Spectrum Scale file system, the data from the image file is moved efficiently
to the volume file using a copy-on-write optimization strategy.
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
- Create, delete, attach, and detach volumes.
|
||||
- Create, delete volume snapshots.
|
||||
- Create a volume from a snapshot.
|
||||
- Create cloned volumes.
|
||||
- Extend a volume.
|
||||
- Migrate a volume.
|
||||
- Retype a volume.
|
||||
- Create, delete consistency groups.
|
||||
- Create, delete consistency group snapshots.
|
||||
- Copy an image to a volume.
|
||||
- Copy a volume to an image.
|
||||
- Backup and restore volumes.
|
||||
|
||||
Driver configurations
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The Spectrum Scale volume driver supports three modes of deployment.
|
||||
|
||||
Mode 1 – Pervasive Spectrum Scale Client
|
||||
----------------------------------------
|
||||
|
||||
In this mode, Spectrum Scale runs on the compute nodes as well as on the cinder
node, so the Spectrum Scale file system is available to both the Compute and
Block Storage services as a local file system.
|
||||
|
||||
To use Spectrum Scale driver in this deployment mode, set the ``volume_driver``
|
||||
in the ``cinder.conf`` as:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.ibm.gpfs.GPFSDriver
|
||||
|
||||
The following table contains the configuration options supported by the
|
||||
Spectrum Scale driver in this deployment mode.
|
||||
|
||||
.. include:: ../../tables/cinder-ibm_gpfs.rst
|
||||
|
||||
.. note::
|
||||
|
||||
The ``gpfs_images_share_mode`` flag is only valid if the Image
|
||||
Service is configured to use Spectrum Scale with the
|
||||
``gpfs_images_dir`` flag. When the value of this flag is
|
||||
``copy_on_write``, the paths specified by the ``gpfs_mount_point_base``
|
||||
and ``gpfs_images_dir`` flags must both reside in the same GPFS
|
||||
file system and in the same GPFS file set.
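
As an illustration (the paths are placeholders), the related options in
``cinder.conf`` could look like this when copy-on-write image sharing is used:

.. code-block:: ini

   # Both paths must reside in the same GPFS file system and file set.
   gpfs_mount_point_base = /gpfs0/openstack/cinder/volumes
   gpfs_images_dir = /gpfs0/openstack/glance/images
   gpfs_images_share_mode = copy_on_write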
|
||||
|
||||
Mode 2 – Remote Spectrum Scale Driver with Local Compute Access
|
||||
---------------------------------------------------------------
|
||||
|
||||
In this mode, Spectrum Scale runs on the compute nodes but not on the Block
Storage node: the Spectrum Scale file system is available to the Compute
service as a local file system, whereas the Block Storage service accesses
Spectrum Scale remotely. In this case, the ``cinder-volume`` service running the
Spectrum Scale driver accesses the storage system over SSH and creates volume
backing files to make them available on the compute nodes. This mode is
typically deployed when the cinder and glance services run inside a Linux
container. The container host should have the Spectrum Scale client running,
and the GPFS file system mount path should be bind mounted into the Linux
containers.
|
||||
|
||||
.. note::
|
||||
|
||||
Note that the user IDs present in the containers should match those on the
host machines. For example, the containers running the cinder and glance
services should be privileged containers.
|
||||
|
||||
To use Spectrum Scale driver in this deployment mode, set the ``volume_driver``
|
||||
in the ``cinder.conf`` as:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.ibm.gpfs.GPFSRemoteDriver
|
||||
|
||||
The following table contains the configuration options supported by the
|
||||
Spectrum Scale driver in this deployment mode.
|
||||
|
||||
.. include:: ../../tables/cinder-ibm_gpfs_remote.rst
|
||||
|
||||
.. note::
|
||||
|
||||
The ``gpfs_images_share_mode`` flag is only valid if the Image
|
||||
Service is configured to use Spectrum Scale with the
|
||||
``gpfs_images_dir`` flag. When the value of this flag is
|
||||
``copy_on_write``, the paths specified by the ``gpfs_mount_point_base``
|
||||
and ``gpfs_images_dir`` flags must both reside in the same GPFS
|
||||
file system and in the same GPFS file set.
|
||||
|
||||
Mode 3 – Remote Spectrum Scale Access
|
||||
-------------------------------------
|
||||
|
||||
In this mode, neither the Compute nor the Block Storage nodes run Spectrum
Scale software or have direct access to the Spectrum Scale file system as a
local file system. In this case, an NFS export is created on the volume path
and made available on the cinder node and on the compute nodes.
|
||||
|
||||
Optionally, to use the copy-on-write optimization to create bootable volumes
from glance images, you must also export the glance images path and mount it
on the nodes where the glance and cinder services are running. The cinder and
glance services will then access the GPFS file system through NFS.
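
A minimal sketch of such an export, assuming hypothetical GPFS paths, would be
``/etc/exports`` entries on the node that owns the file system:

.. code-block:: console

   $ cat /etc/exports
   /gpfs0/openstack/cinder/volumes  *(rw,sync,no_root_squash)
   /gpfs0/openstack/glance/images   *(rw,sync,no_root_squash)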
|
||||
|
||||
To use Spectrum Scale driver in this deployment mode, set the ``volume_driver``
|
||||
in the ``cinder.conf`` as:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.ibm.gpfs.GPFSNFSDriver
|
||||
|
||||
The following table contains the configuration options supported by the
|
||||
Spectrum Scale driver in this deployment mode.
|
||||
|
||||
.. include:: ../../tables/cinder-ibm_gpfs_nfs.rst
|
||||
|
||||
Additionally, all the options of the base NFS driver are applicable
|
||||
for GPFSNFSDriver. The above table lists the basic configuration
|
||||
options which are needed for initialization of the driver.
|
||||
|
||||
.. note::
|
||||
|
||||
The ``gpfs_images_share_mode`` flag is only valid if the Image
|
||||
Service is configured to use Spectrum Scale with the
|
||||
``gpfs_images_dir`` flag. When the value of this flag is
|
||||
``copy_on_write``, the paths specified by the ``gpfs_mount_point_base``
|
||||
and ``gpfs_images_dir`` flags must both reside in the same GPFS
|
||||
file system and in the same GPFS file set.
|
||||
|
||||
|
||||
Volume creation options
|
||||
~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
It is possible to specify additional volume configuration options on a
|
||||
per-volume basis by specifying volume metadata. The volume is created
|
||||
using the specified options. Changing the metadata after the volume is
|
||||
created has no effect. The following table lists the volume creation
|
||||
options supported by the GPFS volume driver.
|
||||
|
||||
.. list-table:: **Volume Create Options for Spectrum Scale Volume Drivers**
|
||||
:widths: 10 25
|
||||
:header-rows: 1
|
||||
|
||||
* - Metadata Item Name
|
||||
- Description
|
||||
* - fstype
|
||||
- Specifies whether to create a file system or a swap area on the new volume. If fstype=swap is specified, the mkswap command is used to create a swap area. Otherwise the mkfs command is passed the specified file system type, for example ext3, ext4 or ntfs.
|
||||
* - fslabel
|
||||
- Sets the file system label for the file system specified by fstype option. This value is only used if fstype is specified.
|
||||
* - data_pool_name
|
||||
- Specifies the GPFS storage pool to which the volume is to be assigned. Note: The GPFS storage pool must already have been created.
|
||||
* - replicas
|
||||
- Specifies how many copies of the volume file to create. Valid values are 1, 2, and, for Spectrum Scale V3.5.0.7 and later, 3. This value cannot be greater than the value of the MaxDataReplicasattribute of the file system.
|
||||
* - dio
|
||||
- Enables or disables the Direct I/O caching policy for the volume file. Valid values are yes and no.
|
||||
* - write_affinity_depth
|
||||
- Specifies the allocation policy to be used for the volume file. Note: This option only works if allow-write-affinity is set for the GPFS data pool.
|
||||
* - block_group_factor
|
||||
- Specifies how many blocks are laid out sequentially in the volume file to behave as a single large block. Note: This option only works if allow-write-affinity is set for the GPFS data pool.
|
||||
* - write_affinity_failure_group
|
||||
- Specifies the range of nodes (in GPFS shared nothing architecture) where replicas of blocks in the volume file are to be written. See Spectrum Scale documentation for more details about this option.
|
||||
|
||||
This example shows the creation of a 50GB volume with an ``ext4`` file
|
||||
system labeled ``newfs`` and direct IO enabled:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume create --property fstype=ext4 --property fslabel=newfs \
  --property dio=yes --size 50 VOLUME
|
||||
|
||||
Note that if the metadata for the volume is changed later, the changes
are not reflected in the back end. You must manually change the volume
attributes on the Spectrum Scale file system to match the metadata.
|
||||
|
||||
Operational notes for GPFS driver
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Volume snapshots are implemented using the GPFS file clone feature.
|
||||
Whenever a new snapshot is created, the snapshot file is efficiently
|
||||
created as a read-only clone parent of the volume, and the volume file
|
||||
uses copy-on-write optimization strategy to minimize data movement.
|
||||
|
||||
Similarly when a new volume is created from a snapshot or from an
|
||||
existing volume, the same approach is taken. The same approach is also
|
||||
used when a new volume is created from an Image service image, if the
|
||||
source image is in raw format, and ``gpfs_images_share_mode`` is set to
|
||||
``copy_on_write``.
|
||||
|
||||
The Spectrum Scale driver supports the encrypted volume back-end feature.
To encrypt a volume at rest, specify the extra specification
``gpfs_encryption_rest = True``.
|
@ -0,0 +1,172 @@
|
||||
================================
|
||||
IBM Storage Driver for OpenStack
|
||||
================================
|
||||
|
||||
Introduction
|
||||
~~~~~~~~~~~~
|
||||
The IBM Storage Driver for OpenStack is a software component of the
|
||||
OpenStack cloud environment that enables utilization of storage
|
||||
resources provided by supported IBM storage systems.
|
||||
|
||||
The driver was validated on the following storage systems:
|
||||
|
||||
* IBM DS8000 Family
|
||||
* IBM FlashSystem A9000
|
||||
* IBM FlashSystem A9000R
|
||||
* IBM Spectrum Accelerate
|
||||
* IBM XIV Storage System
|
||||
|
||||
After the driver is configured on the OpenStack cinder nodes, storage volumes
|
||||
can be allocated by the cinder nodes to the nova nodes. Virtual machines on
|
||||
the nova nodes can then utilize these storage resources.
|
||||
|
||||
.. note::
|
||||
Unless stated otherwise, all references to XIV storage
|
||||
system in this guide relate all members of the Spectrum Accelerate
|
||||
Family (XIV, Spectrum Accelerate and FlashSystem A9000/A9000R).
|
||||
|
||||
Concept diagram
|
||||
---------------
|
||||
This figure illustrates how an IBM storage system is connected
|
||||
to the OpenStack cloud environment and provides storage resources when the
|
||||
IBM Storage Driver for OpenStack is configured on the OpenStack cinder nodes.
|
||||
The OpenStack cloud is connected to the IBM storage system over Fibre
|
||||
Channel or iSCSI (DS8000 systems support only Fibre Channel connections).
|
||||
Remote cloud users can issue requests for storage resources from the
|
||||
OpenStack cloud. These requests are transparently handled by the IBM Storage
|
||||
Driver, which communicates with the IBM storage system and controls the
|
||||
storage volumes on it. The IBM storage resources are then provided to the
|
||||
nova nodes in the OpenStack cloud.
|
||||
|
||||
.. figure:: ../../figures/ibm-storage-nova-concept.png
|
||||
|
||||
Configuration
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
Configure the driver manually by changing the ``cinder.conf`` file as
|
||||
follows:
|
||||
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.ibm.ibm_storage.IBMStorageDriver
|
||||
|
||||
.. include:: ../../tables/cinder-ibm_storage.rst
|
||||
|
||||
|
||||
|
||||
Security
|
||||
~~~~~~~~
|
||||
|
||||
The following information provides an overview of security for the IBM
|
||||
Storage Driver for OpenStack.
|
||||
|
||||
Avoiding man-in-the-middle attacks
|
||||
----------------------------------
|
||||
|
||||
When using a Spectrum Accelerate Family product, you can prevent
|
||||
man-in-the-middle (MITM) attacks by following these rules:
|
||||
|
||||
* Upgrade to IBM XIV storage system version 11.3 or later.
|
||||
|
||||
* If working in a secure mode, do not work insecurely against another storage
|
||||
system in the same environment.
|
||||
|
||||
* Validate the storage certificate. If you are using an XIV-provided
|
||||
certificate, use the CA file that was provided with your storage system
|
||||
(``XIV-CA.pem``). The certificate files should be copied to one of the
|
||||
following directories:
|
||||
|
||||
* ``/etc/ssl/certs``
|
||||
* ``/etc/ssl/certs/xiv``
|
||||
* ``/etc/pki``
|
||||
* ``/etc/pki/xiv``
|
||||
|
||||
If you are using your own certificates, copy them to the same directories
|
||||
with the prefix ``XIV`` and in the ``.pem`` format.
|
||||
For example: ``XIV-my_cert.pem``.
|
||||
|
||||
* To prevent the CVE-2014-3566 MITM attack, follow the OpenStack
|
||||
community
|
||||
`directions <http://osdir.com/ml/openstack-dev/2014-10/msg01349.html>`_.
|
||||
|
||||
Troubleshooting
|
||||
~~~~~~~~~~~~~~~
|
||||
|
||||
Refer to this information to troubleshoot technical problems that you
|
||||
might encounter when using the IBM Storage Driver for OpenStack.
|
||||
|
||||
Checking the cinder node log files
|
||||
----------------------------------
|
||||
|
||||
The cinder log files record operation information that might be useful
|
||||
for troubleshooting.
|
||||
|
||||
To achieve optimal and clear logging of events, activate the verbose
|
||||
logging level in the ``cinder.conf`` file, located in the ``/etc/cinder``
|
||||
folder. Add the following lines to the file, save the file, and then
restart the ``cinder-volume`` service:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
verbose = True
|
||||
debug = True
|
||||
|
||||
To turn off the verbose logging level, change ``True`` to ``False``,
|
||||
save the file, and then restart the ``cinder-volume`` service.
|
||||
|
||||
Check the log files on a periodic basis to ensure that the IBM
|
||||
Storage Driver is functioning properly:
|
||||
|
||||
#. Log into the cinder node.
|
||||
#. Go to the ``/var/log/cinder`` folder.
|
||||
#. Open the activity log file named ``cinder-volume.log`` or ``volume.log``.
|
||||
The IBM Storage Driver writes to this log file using the
|
||||
``[IBM DS8K STORAGE]`` or ``[IBM XIV STORAGE]`` prefix (depending on
|
||||
the relevant storage system) for each event that it records in the file.
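
For example, to pull only the driver's entries from the activity log (using
the log folder, file name, and prefix described above):

.. code-block:: console

   $ grep "IBM XIV STORAGE" /var/log/cinder/volume.log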
|
||||
|
||||
|
||||
Best practices
|
||||
~~~~~~~~~~~~~~
|
||||
|
||||
This section contains the general guidance and best practices.
|
||||
|
||||
Working with multi-tenancy
|
||||
--------------------------
|
||||
XIV storage systems running microcode version 11.5 or later, Spectrum
Accelerate, and FlashSystem A9000/A9000R can employ multi-tenancy.
|
||||
|
||||
In order to use multi-tenancy with the IBM Storage Driver for OpenStack:
|
||||
|
||||
* For each storage system, verify that all predefined storage pools are
  in the same domain or that none of them are in a domain.
|
||||
|
||||
* Use either storage administrator or domain administrator credentials,
  as long as the credentials grant full access to the relevant
  pool.
|
||||
* If the user is a domain administrator, the storage system domain
|
||||
access policy can be ``CLOSED`` (``domain_policy: access=CLOSED``).
|
||||
Otherwise, verify that the storage system domain access policy is
|
||||
``OPEN`` (``domain_policy: access=OPEN``).
|
||||
* If the user is not a domain administrator, the host management policy
|
||||
of the storage system domain can be ``BASIC`` (``domain_policy:
|
||||
host_management=BASIC``). Otherwise, verify that the storage
|
||||
system domain host management policy is ``EXTENDED``
|
||||
(``domain_policy: host_management=EXTENDED``).
|
||||
|
||||
Working with IBM Real-time Compression™
|
||||
---------------------------------------
|
||||
XIV storage systems running microcode version 11.6 or later,
|
||||
Spectrum Accelerate and FlashSystem A9000/A9000R can employ IBM
|
||||
Real-time Compression™.
|
||||
|
||||
Follow these guidelines when working with compressed storage
|
||||
resources using the IBM Storage Driver for OpenStack:
|
||||
|
||||
* Compression mode cannot be changed for storage volumes using
  the IBM Storage Driver for OpenStack. The volumes are created
|
||||
according to the default compression mode of the pool. For example,
|
||||
any volume created in a compressed pool will be compressed as well.
|
||||
|
||||
* The minimum size for a compressed storage volume is 87 GB.
|
||||
|
@ -0,0 +1,499 @@
|
||||
=========================================
|
||||
IBM Storwize family and SVC volume driver
|
||||
=========================================
|
||||
|
||||
The volume management driver for Storwize family and SAN Volume
|
||||
Controller (SVC) provides OpenStack Compute instances with access to IBM
|
||||
Storwize family or SVC storage systems.
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Storwize/SVC driver supports the following Block Storage service volume
|
||||
operations:
|
||||
|
||||
- Create, list, delete, attach (map), and detach (unmap) volumes.
|
||||
- Create, list, and delete volume snapshots.
|
||||
- Copy an image to a volume.
|
||||
- Copy a volume to an image.
|
||||
- Clone a volume.
|
||||
- Extend a volume.
|
||||
- Retype a volume.
|
||||
- Create a volume from a snapshot.
|
||||
- Create, list, and delete consistency group.
|
||||
- Create, list, and delete consistency group snapshot.
|
||||
- Modify consistency group (add or remove volumes).
|
||||
- Create consistency group from source (source can be a CG or CG snapshot).
|
||||
- Manage an existing volume.
|
||||
- Failover-host for replicated back ends.
|
||||
- Failback-host for replicated back ends.
|
||||
|
||||
Configure the Storwize family and SVC system
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Network configuration
|
||||
---------------------
|
||||
|
||||
The Storwize family or SVC system must be configured for iSCSI, Fibre
|
||||
Channel, or both.
|
||||
|
||||
If using iSCSI, each Storwize family or SVC node should have at least
|
||||
one iSCSI IP address. The IBM Storwize/SVC driver uses an iSCSI IP
|
||||
address associated with the volume's preferred node (if available) to
|
||||
attach the volume to the instance, otherwise it uses the first available
|
||||
iSCSI IP address of the system. The driver obtains the iSCSI IP address
|
||||
directly from the storage system. You do not need to provide these iSCSI
|
||||
IP addresses directly to the driver.
|
||||
|
||||
.. note::
|
||||
|
||||
If using iSCSI, ensure that the compute nodes have iSCSI network
|
||||
access to the Storwize family or SVC system.
|
||||
|
||||
If using Fibre Channel (FC), each Storwize family or SVC node should
|
||||
have at least one WWPN port configured. The driver uses all available
|
||||
WWPNs to attach the volume to the instance. The driver obtains the
|
||||
WWPNs directly from the storage system. You do not need to provide
|
||||
these WWPNs directly to the driver.
|
||||
|
||||
.. note::
|
||||
|
||||
If using FC, ensure that the compute nodes have FC connectivity to
|
||||
the Storwize family or SVC system.
|
||||
|
||||
iSCSI CHAP authentication
|
||||
-------------------------
|
||||
|
||||
If using iSCSI for data access and the
|
||||
``storwize_svc_iscsi_chap_enabled`` is set to ``True``, the driver will
|
||||
associate randomly-generated CHAP secrets with all hosts on the Storwize
|
||||
family system. The compute nodes use these secrets when creating
|
||||
iSCSI connections.
|
||||
|
||||
.. warning::
|
||||
|
||||
CHAP secrets are added to existing hosts as well as newly-created
|
||||
ones. If the CHAP option is enabled, hosts will not be able to
|
||||
access the storage without the generated secrets.
|
||||
|
||||
.. note::
|
||||
|
||||
Not all OpenStack Compute drivers support CHAP authentication.
|
||||
Please check compatibility before using.
|
||||
|
||||
.. note::
|
||||
|
||||
CHAP secrets are passed from OpenStack Block Storage to Compute in
|
||||
clear text. This communication should be secured to ensure that CHAP
|
||||
secrets are not discovered.
|
||||
|
||||
Configure storage pools
|
||||
-----------------------
|
||||
|
||||
The IBM Storwize/SVC driver can allocate volumes in multiple pools.
|
||||
The pools should be created in advance and be provided to the driver
|
||||
using the ``storwize_svc_volpool_name`` configuration flag in the form
|
||||
of a comma-separated list.
|
||||
For the complete list of configuration flags, see :ref:`config_flags`.
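
A minimal sketch of that flag, with hypothetical pool names:

.. code-block:: ini

   # The pools must already exist on the Storwize/SVC system.
   storwize_svc_volpool_name = openstack_pool1, openstack_pool2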
|
||||
|
||||
Configure user authentication for the driver
|
||||
--------------------------------------------
|
||||
|
||||
The driver requires access to the Storwize family or SVC system
management interface. The driver communicates with the management
interface over SSH. The driver should be provided with the Storwize
family or SVC management IP using the ``san_ip`` flag, and the management
port should be provided by the ``san_ssh_port`` flag. By default, the
port value is configured to be port 22 (SSH). Also, you can set the
secondary management IP using the ``storwize_san_secondary_ip`` flag.

.. note::
|
||||
|
||||
Make sure the compute node running the cinder-volume management
|
||||
driver has SSH network access to the storage system.
|
||||
|
||||
To allow the driver to communicate with the Storwize family or SVC
|
||||
system, you must provide the driver with a user on the storage system.
|
||||
The driver has two authentication methods: password-based authentication
|
||||
and SSH key pair authentication. The user should have an Administrator
|
||||
role. It is suggested to create a new user for the management driver.
|
||||
Please consult with your storage and security administrator regarding
|
||||
the preferred authentication method and how passwords or SSH keys should
|
||||
be stored in a secure manner.
|
||||
|
||||
.. note::
|
||||
|
||||
When creating a new user on the Storwize or SVC system, make sure
|
||||
the user belongs to the Administrator group or to another group that
|
||||
has an Administrator role.
|
||||
|
||||
If using password authentication, assign a password to the user on the
|
||||
Storwize or SVC system. The driver configuration flags for the user and
|
||||
password are ``san_login`` and ``san_password``, respectively.
|
||||
|
||||
If you are using the SSH key pair authentication, create SSH private and
|
||||
public keys using the instructions below or by any other method.
|
||||
Associate the public key with the user by uploading the public key:
|
||||
select the :guilabel:`choose file` option in the Storwize family or SVC
|
||||
management GUI under :guilabel:`SSH public key`. Alternatively, you may
|
||||
associate the SSH public key using the command-line interface; details can
|
||||
be found in the Storwize and SVC documentation. The private key should be
|
||||
provided to the driver using the ``san_private_key`` configuration flag.
|
||||
|
||||
Create an SSH key pair with OpenSSH
-----------------------------------
|
||||
|
||||
You can create an SSH key pair using OpenSSH, by running:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ ssh-keygen -t rsa
|
||||
|
||||
The command prompts for a file to save the key pair. For example, if you
|
||||
select ``key`` as the filename, two files are created: ``key`` and
|
||||
``key.pub``. The ``key`` file holds the private SSH key and ``key.pub``
|
||||
holds the public SSH key.
|
||||
|
||||
The command also prompts for a passphrase, which should be left empty.
|
||||
|
||||
The private key file should be provided to the driver using the
|
||||
``san_private_key`` configuration flag. The public key should be
|
||||
uploaded to the Storwize family or SVC system using the storage
|
||||
management GUI or command-line interface.
|
||||
|
||||
.. note::
|
||||
|
||||
Ensure that Cinder has read permissions on the private key file.
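
A minimal, hedged sketch of one way to do this, assuming the private key is
stored at ``/etc/cinder/storwize_key`` and the services run as the
``cinder`` user (adjust the path and user to your deployment):

.. code-block:: console

   # chown cinder:cinder /etc/cinder/storwize_key
   # chmod 600 /etc/cinder/storwize_key
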
|
||||
|
||||
Configure the Storwize family and SVC driver
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Enable the Storwize family and SVC driver
|
||||
-----------------------------------------
|
||||
|
||||
Set the volume driver to the Storwize family and SVC driver by setting
|
||||
the ``volume_driver`` option in the ``cinder.conf`` file as follows:
|
||||
|
||||
iSCSI:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[svc1234]
|
||||
volume_driver = cinder.volume.drivers.ibm.storwize_svc.storwize_svc_iscsi.StorwizeSVCISCSIDriver
|
||||
san_ip = 1.2.3.4
|
||||
san_login = superuser
|
||||
san_password = passw0rd
|
||||
storwize_svc_volpool_name = cinder_pool1
|
||||
volume_backend_name = svc1234
|
||||
|
||||
FC:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[svc1234]
|
||||
volume_driver = cinder.volume.drivers.ibm.storwize_svc.storwize_svc_fc.StorwizeSVCFCDriver
|
||||
san_ip = 1.2.3.4
|
||||
san_login = superuser
|
||||
san_password = passw0rd
|
||||
storwize_svc_volpool_name = cinder_pool1
|
||||
volume_backend_name = svc1234
|
||||
|
||||
Replication configuration
|
||||
-------------------------
|
||||
|
||||
Add the following to the back-end specification to specify another storage
|
||||
to replicate to:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
replication_device = backend_id:rep_svc,
|
||||
san_ip:1.2.3.5,
|
||||
san_login:superuser,
|
||||
san_password:passw0rd,
|
||||
pool_name:cinder_pool1
|
||||
|
||||
The ``backend_id`` is a unique name for the remote storage. The ``san_ip``,
``san_login``, and ``san_password`` values are the authentication
information for the remote storage. The ``pool_name`` is the pool name
for the replication target volume.
|
||||
|
||||
.. note::

   Only one ``replication_device`` can be configured per back end,
   because only one replication target is currently supported.
|
||||
|
||||
.. _config_flags:
|
||||
|
||||
Storwize family and SVC driver options in cinder.conf
|
||||
-----------------------------------------------------
|
||||
|
||||
The following options specify default values for all volumes. Some can
be overridden using volume types, which are described below.
|
||||
|
||||
.. include:: ../../tables/cinder-storwize.rst
|
||||
|
||||
Note the following:
|
||||
|
||||
* The authentication requires either a password (``san_password``) or
|
||||
SSH private key (``san_private_key``). One must be specified. If
|
||||
both are specified, the driver uses only the SSH private key.
|
||||
|
||||
* The driver creates thin-provisioned volumes by default. The
|
||||
``storwize_svc_vol_rsize`` flag defines the initial physical
|
||||
allocation percentage for thin-provisioned volumes, or if set to
|
||||
``-1``, the driver creates full allocated volumes. More details about
|
||||
the available options are available in the Storwize family and SVC
|
||||
documentation.
|
||||
|
||||
|
||||
Placement with volume types
|
||||
---------------------------
|
||||
|
||||
The IBM Storwize/SVC driver exposes capabilities that can be added to
|
||||
the ``extra specs`` of volume types, and used by the filter
|
||||
scheduler to determine placement of new volumes. Make sure to prefix
|
||||
these keys with ``capabilities:`` to indicate that the scheduler should
|
||||
use them. The following ``extra specs`` are supported:
|
||||
|
||||
- ``capabilities:volume_back-end_name`` - Specify a specific back-end
|
||||
where the volume should be created. The back-end name is a
|
||||
concatenation of the name of the IBM Storwize/SVC storage system as
|
||||
shown in ``lssystem``, an underscore, and the name of the pool (mdisk
|
||||
group). For example:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
capabilities:volume_back-end_name=myV7000_openstackpool
|
||||
|
||||
- ``capabilities:compression_support`` - Specify a back-end according to
|
||||
compression support. A value of ``True`` should be used to request a
|
||||
back-end that supports compression, and a value of ``False`` will
|
||||
request a back-end that does not support compression. If you do not
|
||||
have constraints on compression support, do not set this key. Note
|
||||
that specifying ``True`` does not enable compression; it only
|
||||
requests that the volume be placed on a back-end that supports
|
||||
compression. Example syntax:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
capabilities:compression_support='<is> True'
|
||||
|
||||
- ``capabilities:easytier_support`` - Similar semantics as the
|
||||
``compression_support`` key, but for specifying according to support
|
||||
of the Easy Tier feature. Example syntax:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
capabilities:easytier_support='<is> True'
|
||||
|
||||
- ``capabilities:storage_protocol`` - Specifies the connection protocol
|
||||
used to attach volumes of this type to instances. Legal values are
|
||||
``iSCSI`` and ``FC``. This ``extra specs`` value is used for both placement
|
||||
and setting the protocol used for this volume. In the example syntax,
|
||||
note ``<in>`` is used as opposed to ``<is>`` which is used in the
|
||||
previous examples.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
capabilities:storage_protocol='<in> FC'
|
||||
|
||||
Configure per-volume creation options
|
||||
-------------------------------------
|
||||
|
||||
Volume types can also be used to pass options to the IBM Storwize/SVC
driver, which override the default values set in the configuration
file. Contrary to the previous examples, where the ``capabilities`` scope
was used to pass parameters to the Cinder scheduler, options can be
passed to the IBM Storwize/SVC driver with the ``drivers`` scope.
|
||||
|
||||
The following ``extra specs`` keys are supported by the IBM Storwize/SVC
|
||||
driver:
|
||||
|
||||
- rsize
|
||||
- warning
|
||||
- autoexpand
|
||||
- grainsize
|
||||
- compression
|
||||
- easytier
|
||||
- multipath
|
||||
- iogrp
|
||||
|
||||
These keys have the same semantics as their counterparts in the
|
||||
configuration file. They are set similarly; for example, ``rsize=2`` or
|
||||
``compression=False``.
|
||||
|
||||
Example: Volume types
---------------------

In the following example, we create a volume type to specify a
controller that supports iSCSI and compression, to use iSCSI when
attaching the volume, and to enable compression:

.. code-block:: console

   $ openstack volume type create compressed
   $ openstack volume type set --property capabilities:storage_protocol='<in> iSCSI' \
     --property capabilities:compression_support='<is> True' \
     --property drivers:compression=True compressed

We can then create a 50 GB volume using this type:

.. code-block:: console

   $ openstack volume create "compressed volume" --type compressed --size 50

In the following example, create a volume type that enables
synchronous replication (metro mirror):

.. code-block:: console

   $ openstack volume type create ReplicationType
   $ openstack volume type set --property replication_type="<in> metro" \
     --property replication_enabled='<is> True' --property volume_backend_name=svc234 ReplicationType

Volume types can be used, for example, to provide users with different

- performance levels (such as allocating entirely on an HDD tier,
  using Easy Tier for an HDD-SSD mix, or allocating entirely on an SSD
  tier)

- resiliency levels (such as allocating volumes in pools with
  different RAID levels)

- features (such as enabling/disabling Real-time Compression,
  replication volume creation)

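Building on the bullets above, the following commands sketch a volume type
that requests a back end with Easy Tier support and enables Easy Tier on
its volumes; the type name ``easytier-gold`` is a placeholder:

.. code-block:: console

   $ openstack volume type create easytier-gold
   $ openstack volume type set --property capabilities:easytier_support='<is> True' \
     --property drivers:easytier=True easytier-gold
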
QoS
---

The Storwize driver provides QoS support for storage volumes by
limiting the amount of I/O. QoS is enabled by editing the
``etc/cinder/cinder.conf`` file and setting the
``storwize_svc_allow_tenant_qos`` option to ``True``.

There are three ways to set the Storwize ``IOThrottling`` parameter for
storage volumes:

- Add the ``qos:IOThrottling`` key into a QoS specification and
  associate it with a volume type (see the sketch at the end of this
  section).

- Add the ``qos:IOThrottling`` key into an extra specification with a
  volume type.

- Add the ``qos:IOThrottling`` key to the storage volume metadata.

.. note::

   If you are changing a volume type with QoS to a new volume type
   without QoS, the QoS configuration settings will be removed.

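The following console snippet is a minimal, hedged sketch of the first
option; the QoS specification name ``storwize-limit``, the volume type
name ``storwize-qos``, and the throttling value are placeholders:

.. code-block:: console

   $ openstack volume qos create --consumer back-end --property qos:IOThrottling=500 storwize-limit
   $ openstack volume type create storwize-qos
   $ openstack volume qos associate storwize-limit storwize-qos
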
Operational notes for the Storwize family and SVC driver
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Migrate volumes
|
||||
---------------
|
||||
|
||||
In the context of OpenStack Block Storage's volume migration feature,
|
||||
the IBM Storwize/SVC driver enables the storage's virtualization
|
||||
technology. When migrating a volume from one pool to another, the volume
|
||||
will appear in the destination pool almost immediately, while the
|
||||
storage moves the data in the background.
|
||||
|
||||
.. note::
|
||||
|
||||
To enable this feature, both pools involved in a given volume
|
||||
migration must have the same values for ``extent_size``. If the
|
||||
pools have different values for ``extent_size``, the data will still
|
||||
be moved directly between the pools (not host-side copy), but the
|
||||
operation will be synchronous.
|
||||
|
||||
Extend volumes
|
||||
--------------
|
||||
|
||||
The IBM Storwize/SVC driver allows for extending a volume's size, but
|
||||
only for volumes without snapshots.
|
||||
|
||||
Snapshots and clones
|
||||
--------------------
|
||||
|
||||
Snapshots are implemented using FlashCopy with no background copy
|
||||
(space-efficient). Volume clones (volumes created from existing volumes)
|
||||
are implemented with FlashCopy, but with background copy enabled. This
|
||||
means that volume clones are independent, full copies. While this
|
||||
background copy is taking place, attempting to delete or extend the
|
||||
source volume will result in that operation waiting for the copy to
|
||||
complete.
|
||||
|
||||
Volume retype
|
||||
-------------
|
||||
|
||||
The IBM Storwize/SVC driver enables you to modify volume types. When you
|
||||
modify volume types, you can also change these extra specs properties:
|
||||
|
||||
- rsize
|
||||
|
||||
- warning
|
||||
|
||||
- autoexpand
|
||||
|
||||
- grainsize
|
||||
|
||||
- compression
|
||||
|
||||
- easytier
|
||||
|
||||
- iogrp
|
||||
|
||||
- nofmtdisk
|
||||
|
||||
.. note::
|
||||
|
||||
When you change the ``rsize``, ``grainsize`` or ``compression``
|
||||
properties, volume copies are asynchronously synchronized on the
|
||||
array.
|
||||
|
||||
.. note::
|
||||
|
||||
To change the ``iogrp`` property, IBM Storwize/SVC firmware version
|
||||
6.4.0 or later is required.
|
||||
|
||||
Replication operation
---------------------

A volume is replicated only if it is created with a volume type that has
the extra spec ``replication_enabled`` set to ``<is> True``. Two types of
replication are currently supported: asynchronous (global mirror) and
synchronous (metro mirror). The type is selected with a volume type that
has the extra spec ``replication_type`` set to ``<in> global`` or to
``<in> metro``. If no ``replication_type`` is specified, a global mirror
is created for replication.

.. note::

   It is better to establish the partnership relationship between
   the replication source storage and the replication target
   storage manually on the storage back end before replication
   volume creation.

The ``failover-host`` command is designed for the case where the primary
storage is down.

.. code-block:: console

   $ cinder failover-host cinder@svciscsi --backend_id target_svc_id

If a failover command has been executed and the primary storage has
been restored, it is possible to do a failback by simply specifying
``default`` as the ``backend_id``:

.. code-block:: console

   $ cinder failover-host cinder@svciscsi --backend_id default

.. note::

   Before you perform a failback operation, manually synchronize the data
   from the replication target volume to the primary one on the storage
   back end, and do the failback only after the synchronization is
   complete, since the synchronization may take a long time. If the
   synchronization is not done manually, the Storwize Block Storage
   service driver will perform the synchronization and do the failback
   after the synchronization is finished.

|
||||
========================================
|
||||
INFINIDAT InfiniBox Block Storage driver
|
||||
========================================
|
||||
|
||||
The INFINIDAT Block Storage volume driver provides iSCSI and Fibre Channel
|
||||
support for INFINIDAT InfiniBox storage systems.
|
||||
|
||||
This section explains how to configure the INFINIDAT driver.
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
* Create, delete, attach, and detach volumes.
|
||||
* Create, list, and delete volume snapshots.
|
||||
* Create a volume from a snapshot.
|
||||
* Copy a volume to an image.
|
||||
* Copy an image to a volume.
|
||||
* Clone a volume.
|
||||
* Extend a volume.
|
||||
* Get volume statistics.
|
||||
* Create, modify, delete, and list consistency groups.
|
||||
* Create, modify, delete, and list snapshots of consistency groups.
|
||||
* Create consistency group from consistency group or consistency group
|
||||
snapshot.
|
||||
|
||||
External package installation
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The driver requires the ``infinisdk`` package for communicating with
|
||||
InfiniBox systems. Install the package from PyPI using the following command:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ pip install infinisdk
|
||||
|
||||
Setting up the storage array
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Create a storage pool object on the InfiniBox array in advance.
|
||||
The storage pool will contain volumes managed by OpenStack.
|
||||
Refer to the InfiniBox manuals for details on pool management.
|
||||
|
||||
Driver configuration
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Edit the ``cinder.conf`` file, which is usually located under the following
|
||||
path ``/etc/cinder/cinder.conf``.
|
||||
|
||||
* Add a section for the INFINIDAT driver back end.
|
||||
|
||||
* Under the ``[DEFAULT]`` section, set the ``enabled_backends`` parameter with
|
||||
the name of the new back-end section.
|
||||
|
||||
Configure the driver back-end section with the parameters below.
|
||||
|
||||
* Configure the driver name by setting the following parameter:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.infinidat.InfiniboxVolumeDriver
|
||||
|
||||
* Configure the management IP of the InfiniBox array by adding the following
|
||||
parameter:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
san_ip = InfiniBox management IP
|
||||
|
||||
* Configure user credentials.
|
||||
|
||||
The driver requires an InfiniBox user with administrative privileges.
|
||||
We recommend creating a dedicated OpenStack user account
|
||||
that holds an administrative user role.
|
||||
Refer to the InfiniBox manuals for details on user account management.
|
||||
Configure the user credentials by adding the following parameters:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
san_login = infinibox_username
|
||||
san_password = infinibox_password
|
||||
|
||||
* Configure the name of the InfiniBox pool by adding the following parameter:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
infinidat_pool_name = Pool defined in InfiniBox
|
||||
|
||||
* The back-end name is an identifier for the back end.
|
||||
We recommend using the same name as the name of the section.
|
||||
Configure the back-end name by adding the following parameter:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_backend_name = back-end name
|
||||
|
||||
* Thin provisioning.
|
||||
|
||||
The INFINIDAT driver supports creating thin or thick provisioned volumes.
|
||||
Configure thin or thick provisioning by adding the following parameter:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
san_thin_provision = true/false
|
||||
|
||||
This parameter defaults to ``true``.
|
||||
|
||||
* Configure the connectivity protocol.
|
||||
|
||||
The InfiniBox driver supports connection to the InfiniBox system over both
the Fibre Channel and iSCSI protocols.
Configure the desired protocol by adding the following parameter:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
infinidat_storage_protocol = iscsi/fc
|
||||
|
||||
This parameter defaults to ``fc``.
|
||||
|
||||
* Configure iSCSI netspaces.
|
||||
|
||||
When using the iSCSI protocol to connect to InfiniBox systems, you must
|
||||
configure one or more iSCSI network spaces in the InfiniBox storage array.
|
||||
Refer to the InfiniBox manuals for details on network space management.
|
||||
Configure the names of the iSCSI network spaces to connect to by adding
|
||||
the following parameter:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
infinidat_iscsi_netspaces = iscsi_netspace
|
||||
|
||||
Multiple network spaces can be specified as a comma-separated string.
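
For example, two hypothetical network spaces named ``iscsi_space_a`` and
``iscsi_space_b`` would be configured as follows:

.. code-block:: ini

   infinidat_iscsi_netspaces = iscsi_space_a,iscsi_space_b
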
|
||||
|
||||
This parameter is ignored when using the FC protocol.
|
||||
|
||||
* Configure CHAP.
|
||||
|
||||
InfiniBox supports CHAP authentication when using the iSCSI protocol. To
|
||||
enable CHAP authentication, add the following parameter:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
use_chap_auth = true
|
||||
|
||||
To manually define the username and password, add the following parameters:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
chap_username = username
|
||||
chap_password = password
|
||||
|
||||
If the CHAP username or password are not defined, they will be
|
||||
auto-generated by the driver.
|
||||
|
||||
The CHAP parameters are ignored when using the FC protocol.
|
||||
|
||||
|
||||
Configuration example
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
enabled_backends = infinidat-pool-a
|
||||
|
||||
[infinidat-pool-a]
|
||||
volume_driver = cinder.volume.drivers.infinidat.InfiniboxVolumeDriver
|
||||
volume_backend_name = infinidat-pool-a
|
||||
san_ip = 10.1.2.3
|
||||
san_login = openstackuser
|
||||
san_password = openstackpass
|
||||
san_thin_provision = true
|
||||
infinidat_pool_name = pool-a
|
||||
infinidat_storage_protocol = iscsi
|
||||
infinidat_iscsi_netspaces = default_iscsi_space
|
||||
|
||||
Driver-specific options
|
||||
~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The following table contains the configuration options that are specific
|
||||
to the INFINIDAT driver.
|
||||
|
||||
.. include:: ../../tables/cinder-infinidat.rst
|
|
||||
========================
|
||||
Infortrend volume driver
|
||||
========================
|
||||
|
||||
The `Infortrend <http://www.infortrend.com/global>`__ volume driver is a Block Storage driver
providing iSCSI and Fibre Channel support for Infortrend storage systems.
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The Infortrend volume driver supports the following volume operations:
|
||||
|
||||
* Create, delete, attach, and detach volumes.
|
||||
* Create and delete a snapshot.
|
||||
* Create a volume from a snapshot.
|
||||
* Copy an image to a volume.
|
||||
* Copy a volume to an image.
|
||||
* Clone a volume.
|
||||
* Extend a volume.
|
||||
* Retype a volume.
|
||||
* Manage and unmanage a volume.
|
||||
* Migrate a volume with back-end assistance.
|
||||
* Live migrate an instance with volumes hosted on an Infortrend backend.
|
||||
|
||||
System requirements
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
To use the Infortrend volume driver, the following settings are required:
|
||||
|
||||
Set up Infortrend storage
|
||||
-------------------------
|
||||
|
||||
* Create logical volumes in advance.
|
||||
* Host side setting ``Peripheral device type`` should be
|
||||
``No Device Present (Type=0x7f)``.
|
||||
|
||||
Set up cinder-volume node
|
||||
-------------------------
|
||||
|
||||
* Install Oracle Java 7 or later.
|
||||
|
||||
* Download the Infortrend storage CLI from the
|
||||
`release page <https://github.com/infortrend-openstack/infortrend-cinder-driver/releases>`__,
|
||||
and assign it to the default path ``/opt/bin/Infortrend/``.
|
||||
|
||||
Driver configuration
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
On ``cinder-volume`` nodes, set the following in your
|
||||
``/etc/cinder/cinder.conf``, and use the following options to configure it:
|
||||
|
||||
Driver options
|
||||
--------------
|
||||
|
||||
.. include:: ../../tables/cinder-infortrend.rst
|
||||
|
||||
iSCSI configuration example
|
||||
---------------------------
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
default_volume_type = IFT-ISCSI
|
||||
enabled_backends = IFT-ISCSI
|
||||
|
||||
[IFT-ISCSI]
|
||||
volume_driver = cinder.volume.drivers.infortrend.infortrend_iscsi_cli.InfortrendCLIISCSIDriver
|
||||
volume_backend_name = IFT-ISCSI
|
||||
infortrend_pools_name = POOL-1,POOL-2
|
||||
san_ip = MANAGEMENT_PORT_IP
|
||||
infortrend_slots_a_channels_id = 0,1,2,3
|
||||
infortrend_slots_b_channels_id = 0,1,2,3
|
||||
|
||||
Fibre Channel configuration example
|
||||
-----------------------------------
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
default_volume_type = IFT-FC
|
||||
enabled_backends = IFT-FC
|
||||
|
||||
[IFT-FC]
|
||||
volume_driver = cinder.volume.drivers.infortrend.infortrend_fc_cli.InfortrendCLIFCDriver
|
||||
volume_backend_name = IFT-FC
|
||||
infortrend_pools_name = POOL-1,POOL-2,POOL-3
|
||||
san_ip = MANAGEMENT_PORT_IP
|
||||
infortrend_slots_a_channels_id = 4,5
|
||||
|
||||
Multipath configuration
|
||||
-----------------------
|
||||
|
||||
* Enable multipath for image transfer in ``/etc/cinder/cinder.conf``.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
use_multipath_for_image_xfer = True
|
||||
|
||||
Restart the ``cinder-volume`` service.
|
||||
|
||||
* Enable multipath for volume attach and detach in ``/etc/nova/nova.conf``.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[libvirt]
|
||||
...
|
||||
volume_use_multipath = True
|
||||
...
|
||||
|
||||
Restart the ``nova-compute`` service.
|
||||
|
||||
Extra spec usage
|
||||
----------------
|
||||
|
||||
* ``infortrend:provisioning`` - Defaults to ``full`` provisioning;
  the valid values are ``thin`` and ``full``.

* ``infortrend:tiering`` - Defaults to ``all`` tiering;
  the valid values are subsets of 0, 1, 2, 3.

If multiple pools are configured in ``cinder.conf``,
the value can be specified for each pool, separated by semicolons.
|
||||
|
||||
For example:
|
||||
|
||||
``infortrend:provisioning``: ``POOL-1:thin; POOL-2:full``
|
||||
|
||||
``infortrend:tiering``: ``POOL-1:all; POOL-2:0; POOL-3:0,1,3``
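
As a hedged sketch of attaching these extra specs to a volume type (the
volume type name ``ift-mixed`` is a placeholder):

.. code-block:: console

   $ openstack volume type create ift-mixed
   $ openstack volume type set --property infortrend:provisioning='POOL-1:thin; POOL-2:full' \
     --property infortrend:tiering='POOL-1:all; POOL-2:0' ift-mixed
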
|
||||
|
||||
For more details, see `Infortrend documents <http://www.infortrend.com/ImageLoader/LoadDoc/715>`_.
|
|
||||
========================
|
||||
ITRI DISCO volume driver
|
||||
========================
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The DISCO driver supports the following features:
|
||||
|
||||
* Volume create and delete
|
||||
* Volume attach and detach
|
||||
* Snapshot create and delete
|
||||
* Create volume from snapshot
|
||||
* Get volume stats
|
||||
* Copy image to volume
|
||||
* Copy volume to image
|
||||
* Clone volume
|
||||
* Extend volume
|
||||
* Manage and unmanage volume
|
||||
|
||||
Configuration options
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. include:: ../../tables/cinder-disco.rst
|
|
||||
========================================================
|
||||
Kaminario K2 all-flash array iSCSI and FC volume drivers
|
||||
========================================================
|
||||
|
||||
Kaminario's K2 all-flash array leverages a unique software-defined
|
||||
architecture that delivers highly valued predictable performance, scalability
|
||||
and cost-efficiency.
|
||||
|
||||
Kaminario's K2 all-flash iSCSI and FC arrays can be used in
|
||||
OpenStack Block Storage for providing block storage using
|
||||
``KaminarioISCSIDriver`` class and ``KaminarioFCDriver`` class respectively.
|
||||
|
||||
This documentation explains how to configure and connect the block storage
|
||||
nodes to one or more K2 all-flash arrays.
|
||||
|
||||
Driver requirements
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
- Kaminario's K2 all-flash iSCSI and/or FC array

- K2 REST API version 2.2.0 or later

- K2 version 5.8 or later is supported

- The ``krest`` Python library (version 1.3.1 or later) must be installed on the
  Block Storage node using :command:`sudo pip install krest`
|
||||
|
||||
- The Block Storage Node should also have a data path to the K2 array
|
||||
for the following operations:
|
||||
|
||||
- Create a volume from snapshot
|
||||
- Clone a volume
|
||||
- Copy volume to image
|
||||
- Copy image to volume
|
||||
- Retype 'dedup without replication'<->'nodedup without replication'
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
- Create, delete, attach, and detach volumes.
|
||||
- Create and delete volume snapshots.
|
||||
- Create a volume from a snapshot.
|
||||
- Copy an image to a volume.
|
||||
- Copy a volume to an image.
|
||||
- Clone a volume.
|
||||
- Extend a volume.
|
||||
- Retype a volume.
|
||||
- Manage and unmanage a volume.
|
||||
- Replicate volume with failover and failback support to K2 array.
|
||||
|
||||
Limitations and known issues
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
If your OpenStack deployment is not set up to use multipath, the network
connectivity of the K2 all-flash array will use a single physical port.

This may significantly limit the following benefits provided by K2:

- available bandwidth
- high availability
- non-disruptive upgrades
|
||||
|
||||
The following steps are required to set up multipath access on the
Compute and Block Storage nodes:
|
||||
|
||||
#. Install multipath software on both Compute and Block Storage nodes.
|
||||
|
||||
For example:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# apt-get install sg3-utils multipath-tools
|
||||
|
||||
#. In the ``[libvirt]`` section of the ``nova.conf`` configuration file,
|
||||
specify ``iscsi_use_multipath=True``. This option is valid for both iSCSI
|
||||
and FC drivers.
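
   As a minimal sketch using the option name stated above, the relevant
   ``nova.conf`` fragment would look like this:

   .. code-block:: ini

      [libvirt]
      iscsi_use_multipath = True
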
|
||||
|
||||
Additional resources: Kaminario Host Configuration Guide
|
||||
for Linux (for configuring multipath)
|
||||
|
||||
#. Restart the compute service for the changes to take effect.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# service nova-compute restart
|
||||
|
||||
|
||||
Configure single Kaminario iSCSI/FC back end
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This section details the steps required to configure the Kaminario
Cinder driver for a single FC or iSCSI back end.
|
||||
|
||||
#. In the ``cinder.conf`` configuration file under the ``[DEFAULT]``
|
||||
section, set the ``scheduler_default_filters`` parameter:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
scheduler_default_filters = DriverFilter,CapabilitiesFilter
|
||||
|
||||
See following links for more information:
|
||||
`<https://docs.openstack.org/developer/cinder/scheduler-filters.html>`_
|
||||
`<https://docs.openstack.org/admin-guide/blockstorage-driver-filter-weighing.html>`_
|
||||
|
||||
#. Under the ``[DEFAULT]`` section, set the enabled_backends parameter
|
||||
with the iSCSI or FC back-end group
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
# For iSCSI
|
||||
enabled_backends = kaminario-iscsi-1
|
||||
|
||||
# For FC
|
||||
# enabled_backends = kaminario-fc-1
|
||||
|
||||
#. Add a back-end group section for back-end group specified
|
||||
in the enabled_backends parameter
|
||||
|
||||
#. In the newly created back-end group section, set the
|
||||
following configuration options:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[kaminario-iscsi-1]
|
||||
# Management IP of Kaminario K2 All-Flash iSCSI/FC array
|
||||
san_ip = 10.0.0.10
|
||||
# Management username of Kaminario K2 All-Flash iSCSI/FC array
|
||||
san_login = username
|
||||
# Management password of Kaminario K2 All-Flash iSCSI/FC array
|
||||
san_password = password
|
||||
# Enable Kaminario K2 iSCSI/FC driver
|
||||
volume_driver = cinder.volume.drivers.kaminario.kaminario_iscsi.KaminarioISCSIDriver
|
||||
# volume_driver = cinder.volume.drivers.kaminario.kaminario_fc.KaminarioFCDriver
|
||||
|
||||
# Backend name
|
||||
# volume_backend_name = kaminario_fc_1
|
||||
volume_backend_name = kaminario_iscsi_1
|
||||
|
||||
# K2 driver calculates max_oversubscription_ratio on setting below
|
||||
# option as True. Default value is False
|
||||
# auto_calc_max_oversubscription_ratio = False
|
||||
|
||||
# Set a limit on total number of volumes to be created on K2 array, for example:
|
||||
# filter_function = "capabilities.total_volumes < 250"
|
||||
|
||||
# For replication, replication_device must be set and the replication peer must be configured
|
||||
# on the primary and the secondary K2 arrays
|
||||
# Syntax:
|
||||
# replication_device = backend_id:<s-array-ip>,login:<s-username>,password:<s-password>,rpo:<value>
|
||||
# where:
|
||||
# s-array-ip is the secondary K2 array IP
|
||||
# rpo must be either 60(1 min) or multiple of 300(5 min)
|
||||
# Example:
|
||||
# replication_device = backend_id:10.0.0.50,login:kaminario,password:kaminario,rpo:300
|
||||
|
||||
# Suppress requests library SSL certificate warnings on setting this option as True
|
||||
# Default value is 'False'
|
||||
# suppress_requests_ssl_warnings = False
|
||||
|
||||
#. Restart the Block Storage services for the changes to take effect:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# service cinder-api restart
|
||||
# service cinder-scheduler restart
|
||||
# service cinder-volume restart
|
||||
|
||||
Setting multiple Kaminario iSCSI/FC back ends
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The following steps are required to configure multiple K2 iSCSI/FC backends:
|
||||
|
||||
#. In the :file:`cinder.conf` file under the [DEFAULT] section,
|
||||
set the enabled_backends parameter with the comma-separated
|
||||
iSCSI/FC back-end groups.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
enabled_backends = kaminario-iscsi-1, kaminario-iscsi-2, kaminario-iscsi-3
|
||||
|
||||
#. Add a back-end group section for each back-end group specified
|
||||
in the enabled_backends parameter
|
||||
|
||||
#. For each back-end group section, enter the configuration options as
|
||||
described in the above section
|
||||
``Configure single Kaminario iSCSI/FC back end``
|
||||
|
||||
See `Configure multiple-storage back ends
|
||||
<https://docs.openstack.org/admin-guide/blockstorage-multi-backend.html>`__
|
||||
for additional information.
|
||||
|
||||
#. Restart the cinder volume service for the changes to take effect.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# service cinder-volume restart
|
||||
|
||||
Creating volume types
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Create volume types to support volume creation on
the multiple K2 iSCSI/FC back ends.
Set the following extra specs in the volume types:

- ``volume_backend_name``: Set the value of this spec according to the
  value of ``volume_backend_name`` in the back-end group sections.
  If only this spec is set, then dedup Kaminario cinder volumes will be
  created without replication support.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type create kaminario_iscsi_dedup_noreplication
|
||||
$ openstack volume type set --property volume_backend_name=kaminario_iscsi_1 \
|
||||
kaminario_iscsi_dedup_noreplication
|
||||
|
||||
- kaminario:thin_prov_type : Set this spec in the volume type for creating
|
||||
nodedup Kaminario cinder volumes. If this spec is not set, dedup Kaminario
|
||||
cinder volumes will be created.
|
||||
|
||||
- kaminario:replication : Set this spec in the volume type for creating
|
||||
replication supported Kaminario cinder volumes. If this spec is not set,
|
||||
then Kaminario cinder volumes will be created without replication support.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type create kaminario_iscsi_dedup_replication
|
||||
$ openstack volume type set --property volume_backend_name=kaminario_iscsi_1 \
|
||||
kaminario:replication=enabled kaminario_iscsi_dedup_replication
|
||||
|
||||
$ openstack volume type create kaminario_iscsi_nodedup_replication
|
||||
$ openstack volume type set --property volume_backend_name=kaminario_iscsi_1 \
|
||||
kaminario:replication=enabled kaminario:thin_prov_type=nodedup \
|
||||
kaminario_iscsi_nodedup_replication
|
||||
|
||||
$ openstack volume type create kaminario_iscsi_nodedup_noreplication
|
||||
$ openstack volume type set --property volume_backend_name=kaminario_iscsi_1 \
|
||||
kaminario:thin_prov_type=nodedup kaminario_iscsi_nodedup_noreplication
|
||||
|
||||
Supported retype cases
|
||||
~~~~~~~~~~~~~~~~~~~~~~
|
||||
The following are the supported retypes for Kaminario cinder volumes:
|
||||
|
||||
- Nodedup-noreplication <--> Nodedup-replication
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ cinder retype volume-id new-type
|
||||
|
||||
- Dedup-noreplication <--> Dedup-replication
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ cinder retype volume-id new-type
|
||||
|
||||
- Dedup-noreplication <--> Nodedup-noreplication
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ cinder retype --migration-policy on-demand volume-id new-type
|
||||
|
||||
For non-supported cases, try combinations of the
|
||||
:command:`cinder retype` command.
|
||||
|
||||
Driver options
|
||||
~~~~~~~~~~~~~~
|
||||
|
||||
The following table contains the configuration options that are specific
|
||||
to the Kaminario K2 FC and iSCSI Block Storage drivers.
|
||||
|
||||
.. include:: ../../tables/cinder-kaminario.rst
|
|
||||
======================================
|
||||
Lenovo Fibre Channel and iSCSI drivers
|
||||
======================================
|
||||
|
||||
The ``LenovoFCDriver`` and ``LenovoISCSIDriver`` Cinder drivers allow
|
||||
Lenovo S3200 or S2200 arrays to be used for block storage in OpenStack
|
||||
deployments.
|
||||
|
||||
System requirements
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
To use the Lenovo drivers, the following are required:
|
||||
|
||||
- Lenovo S3200 or S2200 array with:
|
||||
|
||||
- iSCSI or FC host interfaces
|
||||
- G22x firmware or later
|
||||
|
||||
- Network connectivity between the OpenStack host and the array
|
||||
management interfaces
|
||||
|
||||
- HTTPS or HTTP must be enabled on the array
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
- Create, delete, attach, and detach volumes.
|
||||
- Create, list, and delete volume snapshots.
|
||||
- Create a volume from a snapshot.
|
||||
- Copy an image to a volume.
|
||||
- Copy a volume to an image.
|
||||
- Clone a volume.
|
||||
- Extend a volume.
|
||||
- Migrate a volume with back-end assistance.
|
||||
- Retype a volume.
|
||||
- Manage and unmanage a volume.
|
||||
|
||||
Configuring the array
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
#. Verify that the array can be managed using an HTTPS connection. HTTP can
|
||||
also be used if ``lenovo_api_protocol=http`` is placed into the
|
||||
appropriate sections of the ``cinder.conf`` file.
|
||||
|
||||
Confirm that virtual pools A and B are present if you plan to use
|
||||
virtual pools for OpenStack storage.
|
||||
|
||||
#. Edit the ``cinder.conf`` file to define a storage back-end entry for
|
||||
each storage pool on the array that will be managed by OpenStack. Each
|
||||
entry consists of a unique section name, surrounded by square brackets,
|
||||
followed by options specified in ``key=value`` format.
|
||||
|
||||
- The ``lenovo_backend_name`` value specifies the name of the storage
|
||||
pool on the array.
|
||||
|
||||
- The ``volume_backend_name`` option value can be a unique value, if
|
||||
you wish to be able to assign volumes to a specific storage pool on
|
||||
the array, or a name that's shared among multiple storage pools to
|
||||
let the volume scheduler choose where new volumes are allocated.
|
||||
|
||||
- The rest of the options will be repeated for each storage pool in a
|
||||
given array: the appropriate Cinder driver name; IP address or
|
||||
host name of the array management interface; the username and password
|
||||
of an array user account with ``manage`` privileges; and the iSCSI IP
|
||||
addresses for the array if using the iSCSI transport protocol.
|
||||
|
||||
In the examples below, two back ends are defined, one for pool A and one
|
||||
for pool B, and a common ``volume_backend_name`` is used so that a
|
||||
single volume type definition can be used to allocate volumes from both
|
||||
pools.
|
||||
|
||||
**Example: iSCSI example back-end entries**
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[pool-a]
|
||||
lenovo_backend_name = A
|
||||
volume_backend_name = lenovo-array
|
||||
volume_driver = cinder.volume.drivers.lenovo.lenovo_iscsi.LenovoISCSIDriver
|
||||
san_ip = 10.1.2.3
|
||||
san_login = manage
|
||||
san_password = !manage
|
||||
lenovo_iscsi_ips = 10.2.3.4,10.2.3.5
|
||||
|
||||
[pool-b]
|
||||
lenovo_backend_name = B
|
||||
volume_backend_name = lenovo-array
|
||||
volume_driver = cinder.volume.drivers.lenovo.lenovo_iscsi.LenovoISCSIDriver
|
||||
san_ip = 10.1.2.3
|
||||
san_login = manage
|
||||
san_password = !manage
|
||||
lenovo_iscsi_ips = 10.2.3.4,10.2.3.5
|
||||
|
||||
**Example: Fibre Channel example back-end entries**
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[pool-a]
|
||||
lenovo_backend_name = A
|
||||
volume_backend_name = lenovo-array
|
||||
volume_driver = cinder.volume.drivers.lenovo.lenovo_fc.LenovoFCDriver
|
||||
san_ip = 10.1.2.3
|
||||
san_login = manage
|
||||
san_password = !manage
|
||||
|
||||
[pool-b]
|
||||
lenovo_backend_name = B
|
||||
volume_backend_name = lenovo-array
|
||||
volume_driver = cinder.volume.drivers.lenovo.lenovo_fc.LenovoFCDriver
|
||||
san_ip = 10.1.2.3
|
||||
san_login = manage
|
||||
san_password = !manage
|
||||
|
||||
#. If HTTPS is not enabled in the array, include
|
||||
``lenovo_api_protocol = http`` in each of the back-end definitions.
|
||||
|
||||
#. If HTTPS is enabled, you can enable certificate verification with the
|
||||
option ``lenovo_verify_certificate=True``. You may also use the
|
||||
``lenovo_verify_certificate_path`` parameter to specify the path to a
|
||||
CA_BUNDLE file containing CAs other than those in the default list.
|
||||
|
||||
#. Modify the ``[DEFAULT]`` section of the ``cinder.conf`` file to add an
|
||||
``enabled_backends`` parameter specifying the back-end entries you added,
|
||||
and a ``default_volume_type`` parameter specifying the name of a volume
|
||||
type that you will create in the next step.
|
||||
|
||||
**Example: [DEFAULT] section changes**
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
# ...
|
||||
enabled_backends = pool-a,pool-b
|
||||
default_volume_type = lenovo
|
||||
|
||||
#. Create a new volume type for each distinct ``volume_backend_name`` value
|
||||
that you added to the ``cinder.conf`` file. The example below
|
||||
assumes that the same ``volume_backend_name=lenovo-array``
|
||||
option was specified in all of the
|
||||
entries, and specifies that the volume type ``lenovo`` can be used to
|
||||
allocate volumes from any of them.
|
||||
|
||||
**Example: Creating a volume type**
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type create lenovo
|
||||
$ openstack volume type set --property volume_backend_name=lenovo-array lenovo
|
||||
|
||||
#. After modifying the ``cinder.conf`` file,
|
||||
restart the ``cinder-volume`` service.
|
||||
|
||||
Driver-specific options
|
||||
~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The following table contains the configuration options that are specific
|
||||
to the Lenovo drivers.
|
||||
|
||||
.. include:: ../../tables/cinder-lenovo.rst
|
|
||||
===
|
||||
LVM
|
||||
===
|
||||
|
||||
The default volume back end uses local volumes managed by LVM.
|
||||
|
||||
This driver supports different transport protocols to attach volumes,
|
||||
currently iSCSI and iSER.
|
||||
|
||||
Set the following in your ``cinder.conf`` configuration file, and use
|
||||
the following options to configure for iSCSI transport:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
|
||||
iscsi_protocol = iscsi
|
||||
|
||||
Use the following options to configure for the iSER transport:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
|
||||
iscsi_protocol = iser
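
As a minimal, hedged sketch of a complete back-end definition, assuming an
LVM volume group named ``cinder-volumes`` has already been created (the
back-end name ``lvm-1`` is a placeholder):

.. code-block:: ini

   [DEFAULT]
   enabled_backends = lvm-1

   [lvm-1]
   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
   volume_group = cinder-volumes
   iscsi_protocol = iscsi
   volume_backend_name = lvm-1
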
|
||||
|
||||
.. include:: ../../tables/cinder-lvm.rst
|
||||
|
||||
.. caution::

   When extending an existing volume which has a linked snapshot, the related
   logical volume is deactivated. This logical volume is automatically
   reactivated unless ``auto_activation_volume_list`` is defined in the LVM
   configuration file ``lvm.conf``. See the ``lvm.conf`` file for more
   information.

   If auto-activated volumes are restricted, then include the cinder volume
   group into this list:

   .. code-block:: ini

      auto_activation_volume_list = [ "existingVG", "cinder-volumes" ]

   This note does not apply to thinly provisioned volumes
   because they do not need to be deactivated.
|
||||
===========================
|
||||
NEC Storage M series driver
|
||||
===========================
|
||||
|
||||
NEC Storage M series are dual-controller disk arrays which support
|
||||
online maintenance.
|
||||
This driver supports both iSCSI and Fibre Channel.
|
||||
|
||||
System requirements
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
Supported models:
|
||||
|
||||
- NEC Storage M110, M310, M510 and M710 (SSD/HDD hybrid)
|
||||
- NEC Storage M310F and M710F (all flash)
|
||||
|
||||
Requirements:
|
||||
|
||||
- Storage control software (firmware) revision 0950 or later
|
||||
- NEC Storage DynamicDataReplication license
|
||||
- (Optional) NEC Storage IO Load Manager license for QoS
|
||||
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
|
||||
- Create, delete, attach, and detach volumes.
|
||||
- Create, list, and delete volume snapshots.
|
||||
- Create a volume from a snapshot.
|
||||
- Copy an image to a volume.
|
||||
- Clone a volume.
|
||||
- Extend a volume.
|
||||
- Get volume statistics.
|
||||
|
||||
|
||||
Preparation
|
||||
~~~~~~~~~~~
|
||||
|
||||
The following is the minimum preparation required for a disk array.
For details of each command, see the NEC Storage Manager Command Reference
(IS052).
|
||||
|
||||
- Common (iSCSI and Fibre Channel)
|
||||
|
||||
#. Initial setup
|
||||
|
||||
* Set IP addresses for management and BMC with the network configuration
|
||||
tool.
|
||||
* Enter license keys. (iSMcfg licenserelease)
|
||||
#. Create pools
|
||||
|
||||
* Create pools for volumes. (iSMcfg poolbind)
|
||||
* Create pools for snapshots. (iSMcfg poolbind)
|
||||
#. Create system volumes
|
||||
|
||||
* Create a Replication Reserved Volume (RSV) in one of pools.
|
||||
(iSMcfg ldbind)
|
||||
* Create Snapshot Reserve Areas (SRAs) in each snapshot pool.
|
||||
(iSMcfg srabind)
|
||||
#. (Optional) Register SSH public key
|
||||
|
||||
|
||||
- iSCSI only
|
||||
|
||||
#. Set IP addresses of each iSCSI port. (iSMcfg setiscsiport)
#. Create an LD set with multi-target mode enabled. (iSMcfg addldset)
#. Register initiator names of each node. (iSMcfg addldsetinitiator)
|
||||
|
||||
|
||||
- Fibre Channel only
|
||||
|
||||
#. Start access control. (iSMcfg startacc)
#. Create an LD set. (iSMcfg addldset)
#. Register WWPNs of each node. (iSMcfg addldsetpath)
|
||||
|
||||
|
||||
Configuration
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
|
||||
Set the following in your ``cinder.conf``, and use the following options
|
||||
to configure it.
|
||||
|
||||
If you use Fibre Channel:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[Storage1]
|
||||
volume_driver = cinder.volume.drivers.nec.volume.MStorageFCDriver
|
||||
|
||||
.. end
|
||||
|
||||
|
||||
If you use iSCSI:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[Storage1]
|
||||
volume_driver = cinder.volume.drivers.nec.volume.MStorageISCSIDriver
|
||||
|
||||
.. end
|
||||
|
||||
Also, set ``volume_backend_name``.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
volume_backend_name = Storage1
|
||||
|
||||
.. end
|
||||
|
||||
|
||||
This table shows configuration options for NEC Storage M series driver.
|
||||
|
||||
.. include:: ../../tables/cinder-nec_m.rst
|
||||
|
||||
|
||||
|
||||
Required options
|
||||
----------------
|
||||
|
||||
|
||||
- ``nec_ismcli_fip``
|
||||
FIP address of M-Series Storage.
|
||||
|
||||
- ``nec_ismcli_user``
|
||||
User name for M-Series Storage iSMCLI.
|
||||
|
||||
- ``nec_ismcli_password``
|
||||
Password for M-Series Storage iSMCLI.
|
||||
|
||||
- ``nec_ismcli_privkey``
|
||||
RSA secret key file name for iSMCLI (for public key authentication only).
|
||||
Encrypted RSA secret key file cannot be specified.
|
||||
|
||||
- ``nec_diskarray_name``
|
||||
Diskarray name of M-Series Storage.
|
||||
This parameter must be specified to configure multiple groups
|
||||
(multi back end) by using the same storage device (storage
|
||||
device that has the same ``nec_ismcli_fip``). Specify the disk
|
||||
array name targeted by the relevant config-group for this
|
||||
parameter.
|
||||
|
||||
- ``nec_backup_pools``
|
||||
Specify a pool number where snapshots are created.
|
||||
|
||||
|
||||
Timeout configuration
|
||||
---------------------
|
||||
|
||||
|
||||
- ``rpc_response_timeout``
|
||||
Set the timeout value in seconds. If three or more volumes can be created
at the same time, a reference value is 30 seconds multiplied by the
number of volumes created at the same time.
Also, specify the following nova parameters in the ``nova.conf`` file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
block_device_allocate_retries = 120
|
||||
block_device_allocate_retries_interval = 10
|
||||
|
||||
.. end
|
||||
|
||||
|
||||
- ``timeout server (HAProxy configuration)``
|
||||
In addition, you need to edit the following value in the HAProxy
|
||||
configuration file (``/etc/haproxy/haproxy.cfg``) in an environment where
|
||||
HAProxy is used.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
timeout server = 600 #Specify a value greater than rpc_response_timeout.
|
||||
|
||||
.. end
|
||||
|
||||
Run the :command:`service haproxy reload` command after editing the
|
||||
value to reload the HAProxy settings.
|
||||
|
||||
.. note::
|
||||
|
||||
The OpenStack environment set up using Red Hat OpenStack Platform
|
||||
Director may be set to use HAProxy.
|
||||
|
||||
|
||||
Configuration example for /etc/cinder/cinder.conf
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
When using one config-group
|
||||
---------------------------
|
||||
|
||||
- When using ``nec_ismcli_password`` to authenticate iSMCLI
|
||||
(Password authentication):
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
enabled_backends = Storage1
|
||||
|
||||
[Storage1]
|
||||
volume_driver = cinder.volume.drivers.nec.volume.MStorageISCSIDriver
|
||||
volume_backend_name = Storage1
|
||||
nec_ismcli_fip = 192.168.1.10
|
||||
nec_ismcli_user = sysadmin
|
||||
nec_ismcli_password = sys123
|
||||
nec_pools = 0
|
||||
nec_backup_pools = 1
|
||||
|
||||
.. end
|
||||
|
||||
|
||||
- When using ``nec_ismcli_privkey`` to authenticate iSMCLI
|
||||
(Public key authentication):
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
enabled_backends = Storage1
|
||||
|
||||
[Storage1]
|
||||
volume_driver = cinder.volume.drivers.nec.volume.MStorageISCSIDriver
|
||||
volume_backend_name = Storage1
|
||||
nec_ismcli_fip = 192.168.1.10
|
||||
nec_ismcli_user = sysadmin
|
||||
nec_ismcli_privkey = /etc/cinder/id_rsa
|
||||
nec_pools = 0
|
||||
nec_backup_pools = 1
|
||||
|
||||
.. end
|
||||
|
||||
|
||||
When using multi config-group (multi-backend)
|
||||
---------------------------------------------
|
||||
|
||||
- Four config-groups (backends)
|
||||
|
||||
Storage1, Storage2, Storage3, Storage4
|
||||
|
||||
- Two disk arrays
|
||||
|
||||
200000255C3A21CC(192.168.1.10)
|
||||
Example for using config-group, Storage1 and Storage2
|
||||
|
||||
2000000991000316(192.168.1.20)
|
||||
Example for using config-group, Storage3 and Storage4
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
enabled_backends = Storage1,Storage2,Storage3,Storage4
|
||||
|
||||
[Storage1]
|
||||
volume_driver = cinder.volume.drivers.nec.volume.MStorageISCSIDriver
|
||||
volume_backend_name = Gold
|
||||
nec_ismcli_fip = 192.168.1.10
|
||||
nec_ismcli_user = sysadmin
|
||||
nec_ismcli_password = sys123
|
||||
nec_pools = 0
|
||||
nec_backup_pools = 2
|
||||
nec_diskarray_name = 200000255C3A21CC
|
||||
|
||||
[Storage2]
|
||||
volume_driver = cinder.volume.drivers.nec.volume.MStorageISCSIDriver
|
||||
volume_backend_name = Silver
|
||||
nec_ismcli_fip = 192.168.1.10
|
||||
nec_ismcli_user = sysadmin
|
||||
nec_ismcli_password = sys123
|
||||
nec_pools = 1
|
||||
nec_backup_pools = 3
|
||||
nec_diskarray_name = 200000255C3A21CC
|
||||
|
||||
[Storage3]
|
||||
volume_driver = cinder.volume.drivers.nec.volume.MStorageISCSIDriver
|
||||
volume_backend_name = Gold
|
||||
nec_ismcli_fip = 192.168.1.20
|
||||
nec_ismcli_user = sysadmin
|
||||
nec_ismcli_password = sys123
|
||||
nec_pools = 0
|
||||
nec_backup_pools = 2
|
||||
nec_diskarray_name = 2000000991000316
|
||||
|
||||
[Storage4]
|
||||
volume_driver = cinder.volume.drivers.nec.volume.MStorageISCSIDriver
|
||||
volume_backend_name = Silver
|
||||
nec_ismcli_fip = 192.168.1.20
|
||||
nec_ismcli_user = sysadmin
|
||||
nec_ismcli_password = sys123
|
||||
nec_pools = 1
|
||||
nec_backup_pools = 3
|
||||
nec_diskarray_name = 2000000991000316
|
||||
|
||||
.. end
|
|
||||
=====================
|
||||
NetApp unified driver
|
||||
=====================
|
||||
|
||||
The NetApp unified driver is a Block Storage driver that supports
|
||||
multiple storage families and protocols. A storage family corresponds to
|
||||
storage systems built on different NetApp technologies such as clustered
|
||||
Data ONTAP, Data ONTAP operating in 7-Mode, and E-Series. The storage
|
||||
protocol refers to the protocol used to initiate data storage and access
|
||||
operations on those storage systems like iSCSI and NFS. The NetApp
|
||||
unified driver can be configured to provision and manage OpenStack
|
||||
volumes on a given storage family using a specified storage protocol.
|
||||
Also, the NetApp unified driver supports oversubscription or
overprovisioning when thin-provisioned Block Storage volumes are in use
on an E-Series back end. The OpenStack volumes can then be used for
|
||||
accessing and storing data using the storage protocol on the storage
|
||||
family system. The NetApp unified driver is an extensible interface
|
||||
that can support new storage families and protocols.
|
||||
|
||||
.. important::
|
||||
|
||||
The NetApp unified driver in cinder currently provides integration for
|
||||
two major generations of the ONTAP operating system: the current
|
||||
clustered ONTAP and the legacy 7-mode. NetApp’s full support for
|
||||
7-mode ended in August of 2015 and the current limited support period
|
||||
will end in February of 2017.
|
||||
|
||||
The 7-mode components of the cinder NetApp unified driver have now been
|
||||
marked deprecated and will be removed in the Queens release. This will
|
||||
apply to all three protocols currently supported in this driver: iSCSI,
|
||||
FC and NFS.
|
||||
|
||||
.. note::
|
||||
|
||||
With the Juno release of OpenStack, Block Storage has
|
||||
introduced the concept of storage pools, in which a single
|
||||
Block Storage back end may present one or more logical
|
||||
storage resource pools from which Block Storage will
|
||||
select a storage location when provisioning volumes.
|
||||
|
||||
In releases prior to Juno, the NetApp unified driver contained some
|
||||
scheduling logic that determined which NetApp storage container
|
||||
(namely, a FlexVol volume for Data ONTAP, or a dynamic disk pool for
|
||||
E-Series) that a new Block Storage volume would be placed into.
|
||||
|
||||
With the introduction of pools, all scheduling logic is performed
|
||||
completely within the Block Storage scheduler, as each
|
||||
NetApp storage container is directly exposed to the Block
|
||||
Storage scheduler as a storage pool. Previously, the NetApp
|
||||
unified driver presented an aggregated view to the scheduler and
|
||||
made a final placement decision as to which NetApp storage container
|
||||
the Block Storage volume would be provisioned into.
|
||||
|
||||
NetApp clustered Data ONTAP storage family
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The NetApp clustered Data ONTAP storage family represents a
|
||||
configuration group which provides Compute instances access to
|
||||
clustered Data ONTAP storage systems. At present it can be configured in
|
||||
Block Storage to work with iSCSI and NFS storage protocols.
|
||||
|
||||
NetApp iSCSI configuration for clustered Data ONTAP
|
||||
---------------------------------------------------
|
||||
|
||||
The NetApp iSCSI configuration for clustered Data ONTAP is an interface
|
||||
from OpenStack to clustered Data ONTAP storage systems. It provisions
|
||||
and manages the SAN block storage entity, which is a NetApp LUN that
|
||||
can be accessed using the iSCSI protocol.
|
||||
|
||||
The iSCSI configuration for clustered Data ONTAP is a direct interface
|
||||
from Block Storage to the clustered Data ONTAP instance and as
|
||||
such does not require additional management software to achieve the
|
||||
desired functionality. It uses NetApp APIs to interact with the
|
||||
clustered Data ONTAP instance.
|
||||
|
||||
**Configuration options**
|
||||
|
||||
Configure the volume driver, storage family, and storage protocol to the
|
||||
NetApp unified driver, clustered Data ONTAP, and iSCSI respectively by
|
||||
setting the ``volume_driver``, ``netapp_storage_family`` and
|
||||
``netapp_storage_protocol`` options in the ``cinder.conf`` file as follows:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
|
||||
netapp_storage_family = ontap_cluster
|
||||
netapp_storage_protocol = iscsi
|
||||
netapp_vserver = openstack-vserver
|
||||
netapp_server_hostname = myhostname
|
||||
netapp_server_port = port
|
||||
netapp_login = username
|
||||
netapp_password = password
|
||||
|
||||
.. note::
|
||||
|
||||
To use the iSCSI protocol, you must override the default value of
|
||||
``netapp_storage_protocol`` with ``iscsi``.
|
||||
|
||||
.. include:: ../../tables/cinder-netapp_cdot_iscsi.rst
|
||||
|
||||
.. note::
|
||||
|
||||
If you specify an account in the ``netapp_login`` that only has
|
||||
virtual storage server (Vserver) administration privileges (rather
|
||||
than cluster-wide administration privileges), some advanced features
|
||||
of the NetApp unified driver will not work and you may see warnings
|
||||
in the Block Storage logs.
|
||||
|
||||
.. note::
|
||||
|
||||
The driver supports iSCSI CHAP uni-directional authentication.
|
||||
To enable it, set the ``use_chap_auth`` option to ``True``.
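For example, a minimal sketch of the relevant ``cinder.conf`` lines, assuming a
hypothetical back-end stanza named ``[cdot-iscsi]`` (the other driver options
shown earlier are omitted):

.. code-block:: ini

   [cdot-iscsi]
   # Enable uni-directional CHAP authentication for iSCSI attachments.
   use_chap_auth = True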
|
||||
|
||||
.. tip::
|
||||
|
||||
For more information on these options and other deployment and
|
||||
operational scenarios, visit the `NetApp OpenStack Deployment and
|
||||
Operations
|
||||
Guide <http://netapp.github.io/openstack-deploy-ops-guide/>`__.
|
||||
|
||||
NetApp NFS configuration for clustered Data ONTAP
|
||||
-------------------------------------------------
|
||||
|
||||
The NetApp NFS configuration for clustered Data ONTAP is an interface from
|
||||
OpenStack to a clustered Data ONTAP system for provisioning and managing
|
||||
OpenStack volumes on NFS exports provided by the clustered Data ONTAP system
|
||||
that are accessed using the NFS protocol.
|
||||
|
||||
The NFS configuration for clustered Data ONTAP is a direct interface from
|
||||
Block Storage to the clustered Data ONTAP instance and as such does
|
||||
not require any additional management software to achieve the desired
|
||||
functionality. It uses NetApp APIs to interact with the clustered Data ONTAP
|
||||
instance.
|
||||
|
||||
**Configuration options**
|
||||
|
||||
Configure the volume driver, storage family, and storage protocol to NetApp
|
||||
unified driver, clustered Data ONTAP, and NFS respectively by setting the
|
||||
``volume_driver``, ``netapp_storage_family``, and ``netapp_storage_protocol``
|
||||
options in the ``cinder.conf`` file as follows:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
|
||||
netapp_storage_family = ontap_cluster
|
||||
netapp_storage_protocol = nfs
|
||||
netapp_vserver = openstack-vserver
|
||||
netapp_server_hostname = myhostname
|
||||
netapp_server_port = port
|
||||
netapp_login = username
|
||||
netapp_password = password
|
||||
nfs_shares_config = /etc/cinder/nfs_shares
|
||||
|
||||
.. include:: ../../tables/cinder-netapp_cdot_nfs.rst
|
||||
|
||||
.. note::
|
||||
|
||||
Additional NetApp NFS configuration options are shared with the
|
||||
generic NFS driver. These options can be found here:
|
||||
:ref:`cinder-storage_nfs`.
|
||||
|
||||
.. note::
|
||||
|
||||
If you specify an account in the ``netapp_login`` that only has
|
||||
virtual storage server (Vserver) administration privileges (rather
|
||||
than cluster-wide administration privileges), some advanced features
|
||||
of the NetApp unified driver will not work and you may see warnings
|
||||
in the Block Storage logs.
|
||||
|
||||
NetApp NFS Copy Offload client
|
||||
------------------------------
|
||||
|
||||
A feature was added in the Icehouse release of the NetApp unified driver that
|
||||
enables Image service images to be efficiently copied to a destination Block
|
||||
Storage volume. When the Block Storage and Image service are configured to use
|
||||
the NetApp NFS Copy Offload client, a controller-side copy will be attempted
|
||||
before reverting to downloading the image from the Image service. This improves
|
||||
image provisioning times while reducing the consumption of bandwidth and CPU
|
||||
cycles on the host(s) running the Image and Block Storage services. This is due
|
||||
to the copy operation being performed completely within the storage cluster.
|
||||
|
||||
The NetApp NFS Copy Offload client can be used in either of the following
|
||||
scenarios:
|
||||
|
||||
- The Image service is configured to store images in an NFS share that is
|
||||
exported from a NetApp FlexVol volume *and* the destination for the new Block
|
||||
Storage volume will be on an NFS share exported from a different FlexVol
|
||||
volume than the one used by the Image service. Both FlexVols must be located
|
||||
within the same cluster.
|
||||
|
||||
- The source image from the Image service has already been cached in an NFS
|
||||
image cache within a Block Storage back end. The cached image resides on a
|
||||
different FlexVol volume than the destination for the new Block Storage
|
||||
volume. Both FlexVols must be located within the same cluster.
|
||||
|
||||
To use this feature, you must configure the Image service, as follows:
|
||||
|
||||
- Set the ``default_store`` configuration option to ``file``.
|
||||
|
||||
- Set the ``filesystem_store_datadir`` configuration option to the path
|
||||
to the Image service NFS export.
|
||||
|
||||
- Set the ``show_image_direct_url`` configuration option to ``True``.
|
||||
|
||||
- Set the ``show_multiple_locations`` configuration option to ``True``.
|
||||
|
||||
- Set the ``filesystem_store_metadata_file`` configuration option to a metadata
|
||||
file. The metadata file should contain a JSON object that contains the
|
||||
correct information about the NFS export used by the Image service.
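A minimal sketch of the Image service settings listed above, assuming they are
placed in the usual Image service configuration file (commonly
``glance-api.conf``); the datadir and metadata file paths are hypothetical
examples, and section placement may vary by release:

.. code-block:: ini

   [DEFAULT]
   # Expose image locations so the copy offload client can locate the source.
   show_image_direct_url = True
   show_multiple_locations = True

   [glance_store]
   default_store = file
   # NFS export from a NetApp FlexVol volume, mounted at this path.
   filesystem_store_datadir = /var/lib/glance/images
   # JSON metadata describing the NFS export backing the datadir above.
   filesystem_store_metadata_file = /etc/glance/filesystem_store_metadata.json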
|
||||
|
||||
To use this feature, you must configure the Block Storage service, as follows:
|
||||
|
||||
- Set the ``netapp_copyoffload_tool_path`` configuration option to the path to
|
||||
the NetApp Copy Offload binary.
|
||||
|
||||
- Set the ``glance_api_version`` configuration option to ``2``.
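A corresponding minimal sketch for the Block Storage back-end stanza; the tool
path is a hypothetical example:

.. code-block:: ini

   # Location of the downloaded NetApp copy offload binary.
   netapp_copyoffload_tool_path = /usr/local/bin/na_copyoffload_64
   # Query image locations through the v2 Image API.
   glance_api_version = 2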
|
||||
|
||||
.. important::
|
||||
|
||||
This feature requires that:
|
||||
|
||||
- The storage system must have Data ONTAP v8.2 or greater installed.
|
||||
|
||||
- The vStorage feature must be enabled on each storage virtual machine
|
||||
(SVM, also known as a Vserver) that is permitted to interact with the
|
||||
copy offload client.
|
||||
|
||||
- To configure the copy offload workflow, enable NFS v4.0 or greater and
|
||||
export it from the SVM.
|
||||
|
||||
.. tip::
|
||||
|
||||
To download the NetApp copy offload binary used with the
``netapp_copyoffload_tool_path`` configuration option, visit
the Utility Toolchest page at the `NetApp Support portal
<http://mysupport.netapp.com/NOW/download/tools/ntap_openstack_nfs/>`__
(login is required).
|
||||
|
||||
.. tip::
|
||||
|
||||
For more information on these options and other deployment and operational
|
||||
scenarios, visit the `NetApp OpenStack Deployment and Operations Guide
|
||||
<http://netapp.github.io/openstack-deploy-ops-guide/>`__.
|
||||
|
||||
NetApp-supported extra specs for clustered Data ONTAP
|
||||
-----------------------------------------------------
|
||||
|
||||
Extra specs enable vendors to specify extra filter criteria.
|
||||
The Block Storage scheduler uses the specs when the scheduler determines
|
||||
which volume node should fulfill a volume provisioning request.
|
||||
When you use the NetApp unified driver with a clustered Data ONTAP
|
||||
storage system, you can leverage extra specs with Block Storage
|
||||
volume types to ensure that Block Storage volumes are created
|
||||
on storage back ends that have certain properties.
|
||||
An example of this is when you configure QoS, mirroring,
|
||||
or compression for a storage back end.
|
||||
|
||||
Extra specs are associated with Block Storage volume types.
|
||||
When users request volumes of a particular volume type, the volumes
|
||||
are created on storage back ends that meet the list of requirements.
|
||||
For example, a back end must have enough available space and report the
requested extra specs. Use the specs in the following table to configure volumes.
|
||||
Define Block Storage volume types by using the :command:`openstack volume
|
||||
type set` command.
|
||||
|
||||
.. include:: ../../tables/manual/cinder-netapp_cdot_extraspecs.rst
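For example, assuming ``netapp_compression`` is one of the extra specs listed
in the table above, a volume type restricted to compression-enabled back ends
could be defined as follows (the type name is a placeholder):

.. code-block:: console

   $ openstack volume type create netapp-compressed
   $ openstack volume type set --property netapp_compression="true" netapp-compressed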
|
||||
|
||||
|
||||
NetApp Data ONTAP operating in 7-Mode storage family
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The NetApp Data ONTAP operating in 7-Mode storage family represents a
|
||||
configuration group which provides Compute instances access to 7-Mode
|
||||
storage systems. At present it can be configured in Block Storage to
|
||||
work with iSCSI and NFS storage protocols.
|
||||
|
||||
NetApp iSCSI configuration for Data ONTAP operating in 7-Mode
|
||||
-------------------------------------------------------------
|
||||
|
||||
The NetApp iSCSI configuration for Data ONTAP operating in 7-Mode is an
|
||||
interface from OpenStack to Data ONTAP operating in 7-Mode storage systems for
|
||||
provisioning and managing the SAN block storage entity, that is, a LUN which
|
||||
can be accessed using the iSCSI protocol.
|
||||
|
||||
The iSCSI configuration for Data ONTAP operating in 7-Mode is a direct
|
||||
interface from OpenStack to Data ONTAP operating in 7-Mode storage system and
|
||||
it does not require additional management software to achieve the desired
|
||||
functionality. It uses NetApp ONTAPI to interact with the Data ONTAP operating
|
||||
in 7-Mode storage system.
|
||||
|
||||
**Configuration options**
|
||||
|
||||
Configure the volume driver, storage family and storage protocol to the NetApp
|
||||
unified driver, Data ONTAP operating in 7-Mode, and iSCSI respectively by
|
||||
setting the ``volume_driver``, ``netapp_storage_family`` and
|
||||
``netapp_storage_protocol`` options in the ``cinder.conf`` file as follows:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
|
||||
netapp_storage_family = ontap_7mode
|
||||
netapp_storage_protocol = iscsi
|
||||
netapp_server_hostname = myhostname
|
||||
netapp_server_port = 80
|
||||
netapp_login = username
|
||||
netapp_password = password
|
||||
|
||||
.. note::
|
||||
|
||||
To use the iSCSI protocol, you must override the default value of
|
||||
``netapp_storage_protocol`` with ``iscsi``.
|
||||
|
||||
.. include:: ../../tables/cinder-netapp_7mode_iscsi.rst
|
||||
|
||||
.. note::
|
||||
|
||||
The driver supports iSCSI CHAP uni-directional authentication.
|
||||
To enable it, set the ``use_chap_auth`` option to ``True``.
|
||||
|
||||
.. tip::
|
||||
|
||||
For more information on these options and other deployment and
|
||||
operational scenarios, visit the `NetApp OpenStack Deployment and
|
||||
Operations
|
||||
Guide <http://netapp.github.io/openstack-deploy-ops-guide/>`__.
|
||||
|
||||
NetApp NFS configuration for Data ONTAP operating in 7-Mode
|
||||
-----------------------------------------------------------
|
||||
|
||||
The NetApp NFS configuration for Data ONTAP operating in 7-Mode is an interface
|
||||
from OpenStack to Data ONTAP operating in 7-Mode storage system for
|
||||
provisioning and managing OpenStack volumes on NFS exports provided by the Data
|
||||
ONTAP operating in 7-Mode storage system which can then be accessed using NFS
|
||||
protocol.
|
||||
|
||||
The NFS configuration for Data ONTAP operating in 7-Mode is a direct interface
|
||||
from Block Storage to the Data ONTAP operating in 7-Mode instance and
|
||||
as such does not require any additional management software to achieve the
|
||||
desired functionality. It uses NetApp ONTAPI to interact with the Data ONTAP
|
||||
operating in 7-Mode storage system.
|
||||
|
||||
|
||||
.. important::
|
||||
Support for 7-mode configuration has been deprecated in the Ocata release
|
||||
and will be removed in the Queens release of OpenStack.
|
||||
|
||||
**Configuration options**
|
||||
|
||||
Configure the volume driver, storage family, and storage protocol to the NetApp
|
||||
unified driver, Data ONTAP operating in 7-Mode, and NFS respectively by setting
|
||||
the ``volume_driver``, ``netapp_storage_family`` and
|
||||
``netapp_storage_protocol`` options in the ``cinder.conf`` file as follows:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
|
||||
netapp_storage_family = ontap_7mode
|
||||
netapp_storage_protocol = nfs
|
||||
netapp_server_hostname = myhostname
|
||||
netapp_server_port = 80
|
||||
netapp_login = username
|
||||
netapp_password = password
|
||||
nfs_shares_config = /etc/cinder/nfs_shares
|
||||
|
||||
.. include:: ../../tables/cinder-netapp_7mode_nfs.rst
|
||||
|
||||
.. note::
|
||||
|
||||
Additional NetApp NFS configuration options are shared with the
|
||||
generic NFS driver. For a description of these, see
|
||||
:ref:`cinder-storage_nfs`.
|
||||
|
||||
.. tip::
|
||||
|
||||
For more information on these options and other deployment and
|
||||
operational scenarios, visit the `NetApp OpenStack Deployment and
|
||||
Operations
|
||||
Guide <http://netapp.github.io/openstack-deploy-ops-guide/>`__.
|
||||
|
||||
NetApp E-Series storage family
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The NetApp E-Series storage family represents a configuration group which
|
||||
provides OpenStack compute instances access to E-Series storage systems. At
|
||||
present it can be configured in Block Storage to work with the iSCSI
|
||||
storage protocol.
|
||||
|
||||
NetApp iSCSI configuration for E-Series
|
||||
---------------------------------------
|
||||
|
||||
The NetApp iSCSI configuration for E-Series is an interface from OpenStack to
|
||||
E-Series storage systems. It provisions and manages the SAN block storage
|
||||
entity, which is a NetApp LUN which can be accessed using the iSCSI protocol.
|
||||
|
||||
The iSCSI configuration for E-Series is an interface from Block
|
||||
Storage to the E-Series proxy instance and as such requires the deployment of
|
||||
the proxy instance in order to achieve the desired functionality. The driver
|
||||
uses REST APIs to interact with the E-Series proxy instance, which in turn
|
||||
interacts directly with the E-Series controllers.
|
||||
|
||||
The use of multipath and DM-MP is required when using the Block
|
||||
Storage driver for E-Series. In order for Block Storage and OpenStack
|
||||
Compute to take advantage of multiple paths, the following configuration
|
||||
options must be correctly configured:
|
||||
|
||||
- The ``use_multipath_for_image_xfer`` option should be set to ``True`` in the
|
||||
``cinder.conf`` file within the driver-specific stanza (for example,
|
||||
``[myDriver]``).
|
||||
|
||||
- The ``iscsi_use_multipath`` option should be set to ``True`` in the
|
||||
``nova.conf`` file within the ``[libvirt]`` stanza.
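A minimal sketch of the corresponding ``nova.conf`` setting:

.. code-block:: ini

   [libvirt]
   # Attach E-Series volumes over all available iSCSI paths.
   iscsi_use_multipath = True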
|
||||
|
||||
**Configuration options**
|
||||
|
||||
Configure the volume driver, storage family, and storage protocol to the
|
||||
NetApp unified driver, E-Series, and iSCSI respectively by setting the
|
||||
``volume_driver``, ``netapp_storage_family`` and
|
||||
``netapp_storage_protocol`` options in the ``cinder.conf`` file as follows:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
|
||||
netapp_storage_family = eseries
|
||||
netapp_storage_protocol = iscsi
|
||||
netapp_server_hostname = myhostname
|
||||
netapp_server_port = 80
|
||||
netapp_login = username
|
||||
netapp_password = password
|
||||
netapp_controller_ips = 1.2.3.4,5.6.7.8
|
||||
netapp_sa_password = arrayPassword
|
||||
netapp_storage_pools = pool1,pool2
|
||||
use_multipath_for_image_xfer = True
|
||||
|
||||
.. note::
|
||||
|
||||
To use the E-Series driver, you must override the default value of
|
||||
``netapp_storage_family`` with ``eseries``.
|
||||
|
||||
To use the iSCSI protocol, you must override the default value of
|
||||
``netapp_storage_protocol`` with ``iscsi``.
|
||||
|
||||
.. include:: ../../tables/cinder-netapp_eseries_iscsi.rst
|
||||
|
||||
.. tip::
|
||||
|
||||
For more information on these options and other deployment and
|
||||
operational scenarios, visit the `NetApp OpenStack Deployment and
|
||||
Operations
|
||||
Guide <http://netapp.github.io/openstack-deploy-ops-guide/>`__.
|
||||
|
||||
NetApp-supported extra specs for E-Series
|
||||
-----------------------------------------
|
||||
|
||||
Extra specs enable vendors to specify extra filter criteria.
|
||||
The Block Storage scheduler uses the specs when the scheduler determines
|
||||
which volume node should fulfill a volume provisioning request.
|
||||
When you use the NetApp unified driver with an E-Series storage system,
|
||||
you can leverage extra specs with Block Storage volume types to ensure
|
||||
that Block Storage volumes are created on storage back ends that have
|
||||
certain properties. An example of this is when you configure thin
|
||||
provisioning for a storage back end.
|
||||
|
||||
Extra specs are associated with Block Storage volume types.
|
||||
When users request volumes of a particular volume type, the volumes are
|
||||
created on storage back ends that meet the list of requirements.
|
||||
For example, a back end must have enough available space and report the
requested extra specs. Use the specs in the following table to configure volumes.
|
||||
Define Block Storage volume types by using the :command:`openstack volume
|
||||
type set` command.
|
||||
|
||||
.. list-table:: Description of extra specs options for NetApp Unified Driver with E-Series
|
||||
:header-rows: 1
|
||||
|
||||
* - Extra spec
|
||||
- Type
|
||||
- Description
|
||||
* - ``netapp_thin_provisioned``
|
||||
- Boolean
|
||||
- Limit the candidate volume list to only the ones that support thin
|
||||
provisioning on the storage controller.
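For example, to define a volume type that is limited to thin-provisioning-capable
back ends (the type name is a placeholder):

.. code-block:: console

   $ openstack volume type create eseries-thin
   $ openstack volume type set --property netapp_thin_provisioned="true" eseries-thin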
|
||||
|
||||
Upgrading prior NetApp drivers to the NetApp unified driver
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
NetApp introduced a new unified block storage driver in Havana for configuring
|
||||
different storage families and storage protocols. This requires defining an
|
||||
upgrade path for NetApp drivers which existed in releases prior to Havana. This
|
||||
section covers the upgrade configuration for NetApp drivers to the new unified
|
||||
configuration and a list of deprecated NetApp drivers.
|
||||
|
||||
Upgraded NetApp drivers
|
||||
-----------------------
|
||||
|
||||
This section describes how to update Block Storage configuration from
|
||||
a pre-Havana release to the unified driver format.
|
||||
|
||||
- NetApp iSCSI direct driver for Clustered Data ONTAP in Grizzly (or earlier):
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppDirectCmodeISCSIDriver
|
||||
|
||||
NetApp unified driver configuration:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
|
||||
netapp_storage_family = ontap_cluster
|
||||
netapp_storage_protocol = iscsi
|
||||
|
||||
- NetApp NFS direct driver for Clustered Data ONTAP in Grizzly (or
|
||||
earlier):
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.netapp.nfs.NetAppDirectCmodeNfsDriver
|
||||
|
||||
NetApp unified driver configuration:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
|
||||
netapp_storage_family = ontap_cluster
|
||||
netapp_storage_protocol = nfs
|
||||
|
||||
- NetApp iSCSI direct driver for Data ONTAP operating in 7-Mode storage
|
||||
controller in Grizzly (or earlier):
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppDirect7modeISCSIDriver
|
||||
|
||||
NetApp unified driver configuration:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
|
||||
netapp_storage_family = ontap_7mode
|
||||
netapp_storage_protocol = iscsi
|
||||
|
||||
- NetApp NFS direct driver for Data ONTAP operating in 7-Mode storage
|
||||
controller in Grizzly (or earlier):
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.netapp.nfs.NetAppDirect7modeNfsDriver
|
||||
|
||||
NetApp unified driver configuration:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
|
||||
netapp_storage_family = ontap_7mode
|
||||
netapp_storage_protocol = nfs
|
||||
|
||||
Deprecated NetApp drivers
|
||||
-------------------------
|
||||
|
||||
This section lists the NetApp drivers in earlier releases that are
|
||||
deprecated in Havana.
|
||||
|
||||
- NetApp iSCSI driver for clustered Data ONTAP:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppCmodeISCSIDriver
|
||||
|
||||
- NetApp NFS driver for clustered Data ONTAP:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.netapp.nfs.NetAppCmodeNfsDriver
|
||||
|
||||
- NetApp iSCSI driver for Data ONTAP operating in 7-Mode storage
|
||||
controller:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppISCSIDriver
|
||||
|
||||
- NetApp NFS driver for Data ONTAP operating in 7-Mode storage
|
||||
controller:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.netapp.nfs.NetAppNFSDriver
|
||||
|
||||
.. note::
|
||||
|
||||
For support information on deprecated NetApp drivers in the Havana
|
||||
release, visit the `NetApp OpenStack Deployment and Operations
|
||||
Guide <http://netapp.github.io/openstack-deploy-ops-guide/>`__.
|
@ -0,0 +1,159 @@
|
||||
===============================
|
||||
NexentaEdge NBD & iSCSI drivers
|
||||
===============================
|
||||
|
||||
NexentaEdge is designed from the ground up to deliver high-performance Block
|
||||
and Object storage services and limitless scalability to next generation
|
||||
OpenStack clouds, petabyte scale active archives and Big Data applications.
|
||||
NexentaEdge runs on shared nothing clusters of industry standard Linux
|
||||
servers, and builds on Nexenta IP and patent pending Cloud Copy On Write (CCOW)
|
||||
technology to break new ground in terms of reliability, functionality and cost
|
||||
efficiency.
|
||||
|
||||
For user documentation, see the
|
||||
`Nexenta Documentation Center <https://nexenta.com/products/documentation>`_.
|
||||
|
||||
|
||||
iSCSI driver
|
||||
~~~~~~~~~~~~
|
||||
|
||||
The NexentaEdge cluster must be installed and configured according to the
|
||||
relevant Nexenta documentation. A cluster, tenant, and bucket must be pre-created,
|
||||
as well as an iSCSI service on the NexentaEdge gateway node.
|
||||
|
||||
The NexentaEdge iSCSI driver is selected using the normal procedures for one
|
||||
or multiple back-end volume drivers.
|
||||
|
||||
You must configure these items for each NexentaEdge cluster that the iSCSI
|
||||
volume driver controls:
|
||||
|
||||
#. Make the following changes on the storage node ``/etc/cinder/cinder.conf``
|
||||
file.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
# Enable Nexenta iSCSI driver
|
||||
volume_driver = cinder.volume.drivers.nexenta.nexentaedge.iscsi.NexentaEdgeISCSIDriver
|
||||
|
||||
# Specify the ip address for Rest API (string value)
|
||||
nexenta_rest_address = MANAGEMENT-NODE-IP
|
||||
|
||||
# Port for Rest API (integer value)
|
||||
nexenta_rest_port=8080
|
||||
|
||||
# Protocol used for Rest calls (string value, default=http)
|
||||
nexenta_rest_protocol = http
|
||||
|
||||
# Username for NexentaEdge Rest (string value)
|
||||
nexenta_user=USERNAME
|
||||
|
||||
# Password for NexentaEdge Rest (string value)
|
||||
nexenta_password=PASSWORD
|
||||
|
||||
# Path to bucket containing iSCSI LUNs (string value)
|
||||
nexenta_lun_container = CLUSTER/TENANT/BUCKET
|
||||
|
||||
# Name of pre-created iSCSI service (string value)
|
||||
nexenta_iscsi_service = SERVICE-NAME
|
||||
|
||||
# IP address of the gateway node attached to iSCSI service above or
|
||||
# virtual IP address if an iSCSI Storage Service Group is configured in
|
||||
# HA mode (string value)
|
||||
nexenta_client_address = GATEWAY-NODE-IP
|
||||
|
||||
|
||||
#. Save the changes to the ``/etc/cinder/cinder.conf`` file and
|
||||
restart the ``cinder-volume`` service.
|
||||
|
||||
Supported operations
|
||||
--------------------
|
||||
|
||||
* Create, delete, attach, and detach volumes.
|
||||
|
||||
* Create, list, and delete volume snapshots.
|
||||
|
||||
* Create a volume from a snapshot.
|
||||
|
||||
* Copy an image to a volume.
|
||||
|
||||
* Copy a volume to an image.
|
||||
|
||||
* Clone a volume.
|
||||
|
||||
* Extend a volume.
|
||||
|
||||
|
||||
NBD driver
|
||||
~~~~~~~~~~
|
||||
|
||||
As an alternative to using iSCSI, Amazon S3, or OpenStack Swift protocols,
|
||||
NexentaEdge can provide access to cluster storage via a Network Block Device
|
||||
(NBD) interface.
|
||||
|
||||
The NexentaEdge cluster must be installed and configured according to the
|
||||
relevant Nexenta documentation. A cluster, tenant, and bucket must be pre-created.
The driver requires the NexentaEdge service to run on the hypervisor (Nova) node.
The node must sit on the Replicast network and run only the NexentaEdge service;
it does not require physical disks.
|
||||
|
||||
You must configure these items for each NexentaEdge cluster that the NBD
|
||||
volume driver controls:
|
||||
|
||||
#. Make the following changes on storage node ``/etc/cinder/cinder.conf``
|
||||
file.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
# Enable Nexenta NBD driver
|
||||
volume_driver = cinder.volume.drivers.nexenta.nexentaedge.nbd.NexentaEdgeNBDDriver
|
||||
|
||||
# Specify the ip address for Rest API (string value)
|
||||
nexenta_rest_address = MANAGEMENT-NODE-IP
|
||||
|
||||
# Port for Rest API (integer value)
|
||||
nexenta_rest_port = 8080
|
||||
|
||||
# Protocol used for Rest calls (string value, default=http)
|
||||
nexenta_rest_protocol = http
|
||||
|
||||
# Username for NexentaEdge Rest (string value)
|
||||
nexenta_rest_user = USERNAME
|
||||
|
||||
# Password for NexentaEdge Rest (string value)
|
||||
nexenta_rest_password = PASSWORD
|
||||
|
||||
# Path to bucket containing iSCSI LUNs (string value)
|
||||
nexenta_lun_container = CLUSTER/TENANT/BUCKET
|
||||
|
||||
# Path to directory to store symbolic links to block devices
|
||||
# (string value, default=/dev/disk/by-path)
|
||||
nexenta_nbd_symlinks_dir = /PATH/TO/SYMBOLIC/LINKS
|
||||
|
||||
|
||||
#. Save the changes to the ``/etc/cinder/cinder.conf`` file and
|
||||
restart the ``cinder-volume`` service.
|
||||
|
||||
Supported operations
|
||||
--------------------
|
||||
|
||||
* Create, delete, attach, and detach volumes.
|
||||
|
||||
* Create, list, and delete volume snapshots.
|
||||
|
||||
* Create a volume from a snapshot.
|
||||
|
||||
* Copy an image to a volume.
|
||||
|
||||
* Copy a volume to an image.
|
||||
|
||||
* Clone a volume.
|
||||
|
||||
* Extend a volume.
|
||||
|
||||
|
||||
Driver options
|
||||
~~~~~~~~~~~~~~
|
||||
|
||||
Nexenta Driver supports these options:
|
||||
|
||||
.. include:: ../../tables/cinder-nexenta_edge.rst
|
@ -0,0 +1,141 @@
|
||||
=====================================
|
||||
NexentaStor 4.x NFS and iSCSI drivers
|
||||
=====================================
|
||||
|
||||
NexentaStor is an Open Source-driven Software-Defined Storage (OpenSDS)
platform delivering unified file (NFS and SMB) and block (FC and iSCSI)
storage services. NexentaStor runs on industry standard hardware, scales from
tens of terabytes to petabyte configurations, and includes all data management
functionality by default.
|
||||
|
||||
For NexentaStor 4.x user documentation, visit
|
||||
https://nexenta.com/products/downloads/nexentastor.
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
* Create, delete, attach, and detach volumes.
|
||||
|
||||
* Create, list, and delete volume snapshots.
|
||||
|
||||
* Create a volume from a snapshot.
|
||||
|
||||
* Copy an image to a volume.
|
||||
|
||||
* Copy a volume to an image.
|
||||
|
||||
* Clone a volume.
|
||||
|
||||
* Extend a volume.
|
||||
|
||||
* Migrate a volume.
|
||||
|
||||
* Change volume type.
|
||||
|
||||
Nexenta iSCSI driver
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The Nexenta iSCSI driver allows you to use a NexentaStor appliance to store
|
||||
Compute volumes. Every Compute volume is represented by a single zvol in a
|
||||
predefined Nexenta namespace. The Nexenta iSCSI volume driver should work with
|
||||
all versions of NexentaStor.
|
||||
|
||||
The NexentaStor appliance must be installed and configured according to the
|
||||
relevant Nexenta documentation. A volume and an enclosing namespace must be
|
||||
created for all iSCSI volumes to be accessed through the volume driver. This
|
||||
should be done as specified in the release-specific NexentaStor documentation.
|
||||
|
||||
The NexentaStor Appliance iSCSI driver is selected using the normal procedures
|
||||
for one or multiple backend volume drivers.
|
||||
|
||||
You must configure these items for each NexentaStor appliance that the iSCSI
|
||||
volume driver controls:
|
||||
|
||||
#. Make the following changes on the volume node ``/etc/cinder/cinder.conf``
|
||||
file.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
# Enable Nexenta iSCSI driver
|
||||
volume_driver=cinder.volume.drivers.nexenta.iscsi.NexentaISCSIDriver
|
||||
|
||||
# IP address of NexentaStor host (string value)
|
||||
nexenta_host=HOST-IP
|
||||
|
||||
# Username for NexentaStor REST (string value)
|
||||
nexenta_user=USERNAME
|
||||
|
||||
# Port for Rest API (integer value)
|
||||
nexenta_rest_port=8457
|
||||
|
||||
# Password for NexentaStor REST (string value)
|
||||
nexenta_password=PASSWORD
|
||||
|
||||
# Volume on NexentaStor appliance (string value)
|
||||
nexenta_volume=volume_name
|
||||
|
||||
|
||||
.. note::
|
||||
|
||||
``nexenta_volume`` represents a zpool, which is called a volume on the NexentaStor appliance. It must be pre-created before enabling the driver.
|
||||
|
||||
|
||||
#. Save the changes to the ``/etc/cinder/cinder.conf`` file and
|
||||
restart the ``cinder-volume`` service.
|
||||
|
||||
|
||||
|
||||
Nexenta NFS driver
|
||||
~~~~~~~~~~~~~~~~~~
|
||||
The Nexenta NFS driver allows you to use a NexentaStor appliance to store
|
||||
Compute volumes via NFS. Every Compute volume is represented by a single
|
||||
NFS file within a shared directory.
|
||||
|
||||
While the NFS protocols standardize file access for users, they do not
|
||||
standardize administrative actions such as taking snapshots or replicating
|
||||
file systems. The OpenStack Volume Drivers bring a common interface to these
|
||||
operations. The Nexenta NFS driver implements these standard actions using
|
||||
the ZFS management plane that is already deployed on NexentaStor appliances.
|
||||
|
||||
The Nexenta NFS volume driver should work with all versions of NexentaStor.
|
||||
The NexentaStor appliance must be installed and configured according to the
|
||||
relevant Nexenta documentation. A single-parent file system must be created
|
||||
for all virtual disk directories supported for OpenStack. This directory must
|
||||
be created and exported on each NexentaStor appliance. This should be done as
|
||||
specified in the release-specific NexentaStor documentation.
|
||||
|
||||
You must configure these items for each NexentaStor appliance that the NFS
|
||||
volume driver controls:
|
||||
|
||||
#. Make the following changes on the volume node ``/etc/cinder/cinder.conf``
|
||||
file.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
# Enable Nexenta NFS driver
|
||||
volume_driver=cinder.volume.drivers.nexenta.nfs.NexentaNfsDriver
|
||||
|
||||
# Path to shares config file
|
||||
nexenta_shares_config=/home/ubuntu/shares.cfg
|
||||
|
||||
.. note::
|
||||
|
||||
Add your list of Nexenta NFS servers to the file you specified with the
|
||||
``nexenta_shares_config`` option. For example, this is how this file should look:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
192.168.1.200:/volumes/VOLUME_NAME/NFS_SHARE http://USER:PASSWORD@192.168.1.200:8457
|
||||
192.168.1.201:/volumes/VOLUME_NAME/NFS_SHARE http://USER:PASSWORD@192.168.1.201:8457
|
||||
192.168.1.202:/volumes/VOLUME_NAME/NFS_SHARE http://USER:PASSWORD@192.168.1.202:8457
|
||||
|
||||
Each line in this file represents an NFS share. The first part of the line is
the NFS share URL, and the second part is the connection URL to the NexentaStor
appliance.
|
||||
|
||||
Driver options
|
||||
~~~~~~~~~~~~~~
|
||||
|
||||
Nexenta Driver supports these options:
|
||||
|
||||
.. include:: ../../tables/cinder-nexenta.rst
|
@ -0,0 +1,153 @@
|
||||
=====================================
|
||||
NexentaStor 5.x NFS and iSCSI drivers
|
||||
=====================================
|
||||
|
||||
NexentaStor is an Open Source-driven Software-Defined Storage (OpenSDS)
|
||||
platform delivering unified file (NFS and SMB) and block (FC and iSCSI)
|
||||
storage services. NexentaStor runs on industry standard hardware, scales from
|
||||
tens of terabytes to petabyte configurations, and includes all data management
|
||||
functionality by default.
|
||||
|
||||
For user documentation, see the
|
||||
`Nexenta Documentation Center <https://nexenta.com/products/documentation>`__.
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
* Create, delete, attach, and detach volumes.
|
||||
|
||||
* Create, list, and delete volume snapshots.
|
||||
|
||||
* Create a volume from a snapshot.
|
||||
|
||||
* Copy an image to a volume.
|
||||
|
||||
* Copy a volume to an image.
|
||||
|
||||
* Clone a volume.
|
||||
|
||||
* Extend a volume.
|
||||
|
||||
* Migrate a volume.
|
||||
|
||||
* Change volume type.
|
||||
|
||||
iSCSI driver
|
||||
~~~~~~~~~~~~
|
||||
|
||||
The NexentaStor appliance must be installed and configured according to the
|
||||
relevant Nexenta documentation. A pool and an enclosing namespace must be
|
||||
created for all iSCSI volumes to be accessed through the volume driver. This
|
||||
should be done as specified in the release-specific NexentaStor documentation.
|
||||
|
||||
The NexentaStor Appliance iSCSI driver is selected using the normal procedures
|
||||
for one or multiple back-end volume drivers.
|
||||
|
||||
|
||||
You must configure these items for each NexentaStor appliance that the iSCSI
|
||||
volume driver controls:
|
||||
|
||||
#. Make the following changes on the volume node ``/etc/cinder/cinder.conf``
|
||||
file.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
# Enable Nexenta iSCSI driver
|
||||
volume_driver=cinder.volume.drivers.nexenta.ns5.iscsi.NexentaISCSIDriver
|
||||
|
||||
# IP address of NexentaStor host (string value)
|
||||
nexenta_host=HOST-IP
|
||||
|
||||
# Port for Rest API (integer value)
|
||||
nexenta_rest_port=8080
|
||||
|
||||
# Username for NexentaStor Rest (string value)
|
||||
nexenta_user=USERNAME
|
||||
|
||||
# Password for NexentaStor Rest (string value)
|
||||
nexenta_password=PASSWORD
|
||||
|
||||
# Pool on NexentaStor appliance (string value)
|
||||
nexenta_volume=volume_name
|
||||
|
||||
# Name of a parent Volume group where cinder created zvols will reside (string value)
|
||||
nexenta_volume_group = iscsi
|
||||
|
||||
.. note::
|
||||
|
||||
``nexenta_volume`` represents a zpool, which is called a pool on the NexentaStor 5.x appliance.
It must be pre-created before enabling the driver.

The volume group does not need to be pre-created; the driver creates it if it does not exist.
|
||||
|
||||
#. Save the changes to the ``/etc/cinder/cinder.conf`` file and
|
||||
restart the ``cinder-volume`` service.
|
||||
|
||||
NFS driver
|
||||
~~~~~~~~~~
|
||||
The Nexenta NFS driver allows you to use a NexentaStor appliance to store
|
||||
Compute volumes via NFS. Every Compute volume is represented by a single
|
||||
NFS file within a shared directory.
|
||||
|
||||
While the NFS protocols standardize file access for users, they do not
|
||||
standardize administrative actions such as taking snapshots or replicating
|
||||
file systems. The OpenStack Volume Drivers bring a common interface to these
|
||||
operations. The Nexenta NFS driver implements these standard actions using the
|
||||
ZFS management plane that already is deployed on NexentaStor appliances.
|
||||
|
||||
The NexentaStor appliance must be installed and configured according to the
|
||||
relevant Nexenta documentation. A single-parent file system must be created
|
||||
for all virtual disk directories supported for OpenStack.
|
||||
Create and export the directory on each NexentaStor appliance.
|
||||
|
||||
You must configure these items for each NexentaStor appliance that the NFS
|
||||
volume driver controls:
|
||||
|
||||
#. Make the following changes on the volume node ``/etc/cinder/cinder.conf``
|
||||
file.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
# Enable Nexenta NFS driver
|
||||
volume_driver=cinder.volume.drivers.nexenta.ns5.nfs.NexentaNfsDriver
|
||||
|
||||
# IP address or Hostname of NexentaStor host (string value)
|
||||
nas_host=HOST-IP
|
||||
|
||||
# Port for Rest API (integer value)
|
||||
nexenta_rest_port=8080
|
||||
|
||||
# Path to parent filesystem (string value)
|
||||
nas_share_path=POOL/FILESYSTEM
|
||||
|
||||
# Specify NFS version
|
||||
nas_mount_options=vers=4
|
||||
|
||||
#. Create a file system on the appliance and share it via NFS. For example:
|
||||
|
||||
.. code-block:: json
|
||||
|
||||
"securityContexts": [
|
||||
{"readWriteList": [{"allow": true, "etype": "fqnip", "entity": "1.1.1.1"}],
|
||||
"root": [{"allow": true, "etype": "fqnip", "entity": "1.1.1.1"}],
|
||||
"securityModes": ["sys"]}]
|
||||
|
||||
#. Create an ACL for the file system. For example:
|
||||
|
||||
.. code-block:: json
|
||||
|
||||
{"type": "allow",
|
||||
"principal": "everyone@",
|
||||
"permissions": ["list_directory","read_data","add_file","write_data",
|
||||
"add_subdirectory","append_data","read_xattr","write_xattr","execute",
|
||||
"delete_child","read_attributes","write_attributes","delete","read_acl",
|
||||
"write_acl","write_owner","synchronize"],
|
||||
"flags": ["file_inherit","dir_inherit"]}
|
||||
|
||||
|
||||
Driver options
|
||||
~~~~~~~~~~~~~~
|
||||
|
||||
Nexenta Driver supports these options:
|
||||
|
||||
.. include:: ../../tables/cinder-nexenta5.rst
|
@ -0,0 +1,157 @@
|
||||
==========
|
||||
NFS driver
|
||||
==========
|
||||
|
||||
The Network File System (NFS) is a distributed file system protocol
|
||||
originally developed by Sun Microsystems in 1984. An NFS server
|
||||
``exports`` one or more of its file systems, known as ``shares``.
|
||||
An NFS client can mount these exported shares on its own file system.
|
||||
You can perform file actions on this mounted remote file system as
|
||||
if the file system were local.
|
||||
|
||||
How the NFS driver works
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The NFS driver, and other drivers based on it, work quite differently
|
||||
than a traditional block storage driver.
|
||||
|
||||
The NFS driver does not actually allow an instance to access a storage
|
||||
device at the block level. Instead, files are created on an NFS share
|
||||
and mapped to instances, which emulates a block device.
|
||||
This works in a similar way to QEMU, which stores instances in the
|
||||
``/var/lib/nova/instances`` directory.
|
||||
|
||||
Enable the NFS driver and related options
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
To use Cinder with the NFS driver, first set the ``volume_driver``
|
||||
in the ``cinder.conf`` configuration file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver=cinder.volume.drivers.nfs.NfsDriver
|
||||
|
||||
The following table contains the options supported by the NFS driver.
|
||||
|
||||
.. include:: ../../tables/cinder-storage_nfs.rst
|
||||
|
||||
.. note::
|
||||
|
||||
As of the Icehouse release, the NFS driver (and other drivers based
|
||||
off it) will attempt to mount shares using version 4.1 of the NFS
|
||||
protocol (including pNFS). If the mount attempt is unsuccessful due
|
||||
to a lack of client or server support, a subsequent mount attempt
|
||||
that requests the default behavior of the :command:`mount.nfs` command
|
||||
will be performed. On most distributions, the default behavior is to
|
||||
attempt mounting first with NFS v4.0, then silently fall back to NFS
|
||||
v3.0 if necessary. If the ``nfs_mount_options`` configuration option
|
||||
contains a request for a specific version of NFS to be used, or if
|
||||
specific options are specified in the shares configuration file
|
||||
specified by the ``nfs_shares_config`` configuration option, the
|
||||
mount will be attempted as requested with no subsequent attempts.
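For example, to skip the negotiation described above and request a specific NFS
version, you might set (a sketch; the version shown is only an example):

.. code-block:: ini

   # Mount shares with NFS v3 only; no other versions will be attempted.
   nfs_mount_options = vers=3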
|
||||
|
||||
How to use the NFS driver
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Creating an NFS server is outside the scope of this document.
|
||||
|
||||
Configure with one NFS server
|
||||
-----------------------------
|
||||
|
||||
This example assumes access to the following NFS server and mount point:
|
||||
|
||||
* 192.168.1.200:/storage
|
||||
|
||||
This example demonstrates the usage of this driver with one NFS server.
|
||||
|
||||
Set the ``nas_host`` option to the IP address or host name of your NFS
|
||||
server, and the ``nas_share_path`` option to the NFS export path:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
nas_host = 192.168.1.200
|
||||
nas_share_path = /storage
|
||||
|
||||
Configure with multiple NFS servers
|
||||
-----------------------------------
|
||||
|
||||
.. note::
|
||||
|
||||
You can use multiple NFS servers with the `cinder multi back ends
<https://wiki.openstack.org/wiki/Cinder-multi-backend>`_ feature.
Configure the :ref:`enabled_backends <cinder-storage>` option with
multiple values, and use the ``nas_host`` and ``nas_share_path`` options
for each back end as described above, as shown in the sketch after this note.
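A minimal sketch of such a layout in ``cinder.conf``, assuming two hypothetical
back-end names and the example servers used below:

.. code-block:: ini

   [DEFAULT]
   enabled_backends = nfs1,nfs2

   [nfs1]
   volume_driver = cinder.volume.drivers.nfs.NfsDriver
   nas_host = 192.168.1.200
   nas_share_path = /storage

   [nfs2]
   volume_driver = cinder.volume.drivers.nfs.NfsDriver
   nas_host = 192.168.1.201
   nas_share_path = /storage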
|
||||
|
||||
The example below shows another way to configure this driver with multiple
NFS servers. Multiple servers are not required; one is usually enough.
|
||||
|
||||
This example assumes access to the following NFS servers and mount points:
|
||||
|
||||
* 192.168.1.200:/storage
|
||||
* 192.168.1.201:/storage
|
||||
* 192.168.1.202:/storage
|
||||
|
||||
#. Add your list of NFS servers to the file you specified with the
|
||||
``nfs_shares_config`` option. For example, if the value of this option
|
||||
was set to ``/etc/cinder/shares.txt`` file, then:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# cat /etc/cinder/shares.txt
|
||||
192.168.1.200:/storage
|
||||
192.168.1.201:/storage
|
||||
192.168.1.202:/storage
|
||||
|
||||
Comments are allowed in this file. They begin with a ``#``.
|
||||
|
||||
#. Configure the ``nfs_mount_point_base`` option. This is a directory
|
||||
where ``cinder-volume`` mounts all NFS shares stored in the ``shares.txt``
|
||||
file. For this example, ``/var/lib/cinder/nfs`` is used. You can,
|
||||
of course, use the default value of ``$state_path/mnt``.
|
||||
|
||||
#. Start the ``cinder-volume`` service. ``/var/lib/cinder/nfs`` should
|
||||
now contain a directory for each NFS share specified in the ``shares.txt``
|
||||
file. The name of each directory is a hashed name:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# ls /var/lib/cinder/nfs/
|
||||
...
|
||||
46c5db75dc3a3a50a10bfd1a456a9f3f
|
||||
...
|
||||
|
||||
#. You can now create volumes as you normally would:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume create --size 5 MYVOLUME
|
||||
# ls /var/lib/cinder/nfs/46c5db75dc3a3a50a10bfd1a456a9f3f
|
||||
volume-a8862558-e6d6-4648-b5df-bb84f31c8935
|
||||
|
||||
This volume can also be attached and deleted just like other volumes.
|
||||
However, snapshotting is **not** supported.
|
||||
|
||||
NFS driver notes
|
||||
~~~~~~~~~~~~~~~~
|
||||
|
||||
* ``cinder-volume`` manages the mounting of the NFS shares as well as
|
||||
volume creation on the shares. Keep this in mind when planning your
|
||||
OpenStack architecture. If you have one master NFS server, it might
|
||||
make sense to only have one ``cinder-volume`` service to handle all
|
||||
requests to that NFS server. However, if that single server is unable
|
||||
to handle all requests, more than one ``cinder-volume`` service is
|
||||
needed as well as potentially more than one NFS server.
|
||||
|
||||
* Because data is stored in a file and not actually on a block storage
|
||||
device, you might not see the same IO performance as you would with
|
||||
a traditional block storage driver. Please test accordingly.
|
||||
|
||||
* Despite possible IO performance loss, having volume data stored in
|
||||
a file might be beneficial. For example, backing up volumes can be
|
||||
as easy as copying the volume files.
|
||||
|
||||
.. note::
|
||||
|
||||
Regular IO flushing and syncing still apply.
|
@ -0,0 +1,134 @@
|
||||
============================
|
||||
Nimble Storage volume driver
|
||||
============================
|
||||
|
||||
Nimble Storage fully integrates with the OpenStack platform through
|
||||
the Nimble Cinder driver, allowing a host to configure and manage Nimble
|
||||
Storage array features through Block Storage interfaces.
|
||||
|
||||
Support for the iSCSI storage protocol is available with the NimbleISCSIDriver
volume driver class, and Fibre Channel support with the NimbleFCDriver class.
|
||||
|
||||
Support for the Liberty release and above is available from Nimble OS
|
||||
2.3.8 or later.
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
* Create, delete, clone, attach, and detach volumes
|
||||
* Create and delete volume snapshots
|
||||
* Create a volume from a snapshot
|
||||
* Copy an image to a volume
|
||||
* Copy a volume to an image
|
||||
* Extend a volume
|
||||
* Get volume statistics
|
||||
* Manage and unmanage a volume
|
||||
* Enable encryption and default performance policy for a volume-type
|
||||
extra-specs
|
||||
* Force backup of an in-use volume.
|
||||
|
||||
Nimble Storage driver configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Update the file ``/etc/cinder/cinder.conf`` with the given configuration.
|
||||
|
||||
In the case of a basic (single back-end) configuration, add the parameters
within the ``[DEFAULT]`` section as follows.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
san_ip = NIMBLE_MGMT_IP
|
||||
san_login = NIMBLE_USER
|
||||
san_password = NIMBLE_PASSWORD
|
||||
use_multipath_for_image_xfer = True
|
||||
volume_driver = NIMBLE_VOLUME_DRIVER
|
||||
|
||||
In the case of a multiple back-end configuration, for example, a configuration
that supports multiple Nimble Storage arrays or a single Nimble Storage
array together with arrays from other vendors, use the following parameters.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
enabled_backends = Nimble-Cinder
|
||||
|
||||
[Nimble-Cinder]
|
||||
san_ip = NIMBLE_MGMT_IP
|
||||
san_login = NIMBLE_USER
|
||||
san_password = NIMBLE_PASSWORD
|
||||
use_multipath_for_image_xfer = True
|
||||
volume_driver = NIMBLE_VOLUME_DRIVER
|
||||
volume_backend_name = NIMBLE_BACKEND_NAME
|
||||
|
||||
In the case of a multiple back-end configuration, a Nimble Storage volume type
is created and associated with a back-end name as follows.
|
||||
|
||||
.. note::
|
||||
|
||||
Single back-end configuration users do not need to create the volume type.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type create NIMBLE_VOLUME_TYPE
|
||||
$ openstack volume type set --property volume_backend_name=NIMBLE_BACKEND_NAME NIMBLE_VOLUME_TYPE
|
||||
|
||||
This section explains the variables used above:
|
||||
|
||||
NIMBLE_MGMT_IP
|
||||
Management IP address of Nimble Storage array/group.
|
||||
|
||||
NIMBLE_USER
|
||||
Nimble Storage account login with minimum ``power user`` (admin) privilege
|
||||
if RBAC is used.
|
||||
|
||||
NIMBLE_PASSWORD
|
||||
Password of the admin account for the Nimble array.
|
||||
|
||||
NIMBLE_VOLUME_DRIVER
|
||||
Use either cinder.volume.drivers.nimble.NimbleISCSIDriver for iSCSI or
|
||||
cinder.volume.drivers.nimble.NimbleFCDriver for Fibre Channel.
|
||||
|
||||
NIMBLE_BACKEND_NAME
|
||||
A volume back-end name which is specified in the ``cinder.conf`` file.
|
||||
This is also used while assigning a back-end name to the Nimble volume type.
|
||||
|
||||
NIMBLE_VOLUME_TYPE
|
||||
The Nimble volume-type which is created from the CLI and associated with
|
||||
``NIMBLE_BACKEND_NAME``.
|
||||
|
||||
.. note::
|
||||
|
||||
Restart the ``cinder-api``, ``cinder-scheduler``, and ``cinder-volume``
|
||||
services after updating the ``cinder.conf`` file.
|
||||
|
||||
Nimble driver extra spec options
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The Nimble volume driver also supports the following extra spec options:
|
||||
|
||||
'nimble:encryption'='yes'
|
||||
Used to enable encryption for a volume-type.
|
||||
|
||||
'nimble:perfpol-name'=PERF_POL_NAME
|
||||
PERF_POL_NAME is the name of a performance policy which exists on the
|
||||
Nimble array and should be enabled for every volume in a volume type.
|
||||
|
||||
'nimble:multi-initiator'='true'
|
||||
Used to enable multi-initiator access for a volume-type.
|
||||
|
||||
These extra-specs can be enabled by using the following command:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type set --property KEY=VALUE VOLUME_TYPE
|
||||
|
||||
``VOLUME_TYPE`` is the Nimble volume type and ``KEY`` and ``VALUE`` are
|
||||
the options mentioned above.
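For example, to enable encryption and a performance policy on the Nimble volume
type defined earlier:

.. code-block:: console

   $ openstack volume type set --property nimble:encryption='yes' NIMBLE_VOLUME_TYPE
   $ openstack volume type set --property nimble:perfpol-name=PERF_POL_NAME NIMBLE_VOLUME_TYPE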
|
||||
|
||||
Configuration options
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The Nimble storage driver supports these configuration options:
|
||||
|
||||
.. include:: ../../tables/cinder-nimble.rst
|
@ -0,0 +1,104 @@
|
||||
===========================================
|
||||
ProphetStor Fibre Channel and iSCSI drivers
|
||||
===========================================
|
||||
|
||||
ProphetStor Fibre Channel and iSCSI drivers add support for
|
||||
ProphetStor Flexvisor through the Block Storage service.
|
||||
ProphetStor Flexvisor enables commodity x86 hardware as software-defined
|
||||
storage leveraging well-proven ZFS for disk management to provide
|
||||
enterprise grade storage services such as snapshots, data protection
|
||||
with different RAID levels, replication, and deduplication.
|
||||
|
||||
The ``DPLFCDriver`` and ``DPLISCSIDriver`` drivers run volume operations
|
||||
by communicating with the ProphetStor storage system over HTTPS.
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
* Create, delete, attach, and detach volumes.
|
||||
|
||||
* Create, list, and delete volume snapshots.
|
||||
|
||||
* Create a volume from a snapshot.
|
||||
|
||||
* Copy an image to a volume.
|
||||
|
||||
* Copy a volume to an image.
|
||||
|
||||
* Clone a volume.
|
||||
|
||||
* Extend a volume.
|
||||
|
||||
Enable the Fibre Channel or iSCSI drivers
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The ``DPLFCDriver`` and ``DPLISCSIDriver`` are installed with the OpenStack
|
||||
software.
|
||||
|
||||
#. Query the storage pool id to configure ``dpl_pool`` in the ``cinder.conf``
|
||||
file.
|
||||
|
||||
a. Log on to the storage system with administrator access.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ ssh root@STORAGE_IP_ADDRESS
|
||||
|
||||
b. View the current usable pool id.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ flvcli show pool list
|
||||
- d5bd40b58ea84e9da09dcf25a01fdc07 : default_pool_dc07
|
||||
|
||||
c. Use ``d5bd40b58ea84e9da09dcf25a01fdc07`` to configure the ``dpl_pool`` option
in the ``/etc/cinder/cinder.conf`` file.
|
||||
|
||||
.. note::
|
||||
|
||||
Other management commands can be referenced with the help command
|
||||
:command:`flvcli -h`.
|
||||
|
||||
#. Make the following changes on the volume node ``/etc/cinder/cinder.conf``
|
||||
file.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
# IP address of SAN controller (string value)
|
||||
san_ip=STORAGE IP ADDRESS
|
||||
|
||||
# Username for SAN controller (string value)
|
||||
san_login=USERNAME
|
||||
|
||||
# Password for SAN controller (string value)
|
||||
san_password=PASSWORD
|
||||
|
||||
# Use thin provisioning for SAN volumes? (boolean value)
|
||||
san_thin_provision=true
|
||||
|
||||
# The port that the iSCSI daemon is listening on. (integer value)
|
||||
iscsi_port=3260
|
||||
|
||||
# DPL pool uuid in which DPL volumes are stored. (string value)
|
||||
dpl_pool=d5bd40b58ea84e9da09dcf25a01fdc07
|
||||
|
||||
# DPL port number. (integer value)
|
||||
dpl_port=8357
|
||||
|
||||
# Uncomment one of the next two options to enable Fibre Channel or iSCSI
|
||||
# FIBRE CHANNEL (uncomment the next line to enable the FC driver)
|
||||
#volume_driver=cinder.volume.drivers.prophetstor.dpl_fc.DPLFCDriver
|
||||
# iSCSI (uncomment the next line to enable the iSCSI driver)
|
||||
#volume_driver=cinder.volume.drivers.prophetstor.dpl_iscsi.DPLISCSIDriver
|
||||
|
||||
#. Save the changes to the ``/etc/cinder/cinder.conf`` file and
|
||||
restart the ``cinder-volume`` service.
|
||||
|
||||
The ProphetStor Fibre Channel or iSCSI drivers are now enabled on your
|
||||
OpenStack system. If you experience problems, review the Block Storage
|
||||
service log files for errors.
|
||||
|
||||
The following table contains the options supported by the ProphetStor
|
||||
storage driver.
|
||||
|
||||
.. include:: ../../tables/cinder-prophetstor_dpl.rst
|
@ -0,0 +1,319 @@
|
||||
===================================================
|
||||
Pure Storage iSCSI and Fibre Channel volume drivers
|
||||
===================================================
|
||||
|
||||
The Pure Storage FlashArray volume drivers for OpenStack Block Storage
|
||||
interact with configured Pure Storage arrays and support various
|
||||
operations.
|
||||
|
||||
Support for iSCSI storage protocol is available with the PureISCSIDriver
|
||||
Volume Driver class, and Fibre Channel with PureFCDriver.
|
||||
|
||||
All drivers are compatible with Purity FlashArrays that support the REST
|
||||
API version 1.2, 1.3, or 1.4 (Purity 4.0.0 and newer).
|
||||
|
||||
Limitations and known issues
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
If you do not set up the nodes hosting instances to use multipathing,
|
||||
all network connectivity will use a single physical port on the array.
|
||||
In addition to significantly limiting the available bandwidth, this
|
||||
means you do not have the high-availability and non-disruptive upgrade
|
||||
benefits provided by FlashArray. Multipathing must be used to take advantage
|
||||
of these benefits.
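
As a minimal sketch, multipathing is usually enabled by running ``multipathd``
on the compute nodes and by setting the following options; the nova option
name shown here is an assumption of a typical libvirt-based deployment, not a
complete multipath setup:

.. code-block:: ini

   # cinder.conf, in the Pure Storage back-end section
   use_multipath_for_image_xfer = True

   # nova.conf on the compute nodes
   [libvirt]
   volume_use_multipath = True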
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
* Create, delete, attach, detach, retype, clone, and extend volumes.
|
||||
|
||||
* Create a volume from snapshot.
|
||||
|
||||
* Create, list, and delete volume snapshots.
|
||||
|
||||
* Create, list, update, and delete consistency groups.
|
||||
|
||||
* Create, list, and delete consistency group snapshots.
|
||||
|
||||
* Manage and unmanage a volume.
|
||||
|
||||
* Manage and unmanage a snapshot.
|
||||
|
||||
* Get volume statistics.
|
||||
|
||||
* Create a thin provisioned volume.
|
||||
|
||||
* Replicate volumes to remote Pure Storage array(s).
|
||||
|
||||
Configure OpenStack and Purity
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
You need to configure both your Purity array and your OpenStack cluster.
|
||||
|
||||
.. note::
|
||||
|
||||
These instructions assume that the ``cinder-api`` and ``cinder-scheduler``
|
||||
services are installed and configured in your OpenStack cluster.
|
||||
|
||||
Configure the OpenStack Block Storage service
|
||||
---------------------------------------------
|
||||
|
||||
In these steps, you will edit the ``cinder.conf`` file to configure the
|
||||
OpenStack Block Storage service to enable multipathing and to use the
|
||||
Pure Storage FlashArray as back-end storage.
|
||||
|
||||
#. Install Pure Storage PyPI module.
|
||||
A requirement for the Pure Storage driver is the installation of the
|
||||
Pure Storage Python SDK version 1.4.0 or later from PyPI.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ pip install purestorage
|
||||
|
||||
#. Retrieve an API token from Purity.
|
||||
The OpenStack Block Storage service configuration requires an API token
|
||||
from Purity. Actions performed by the volume driver use this token for
|
||||
authorization. Also, Purity logs the volume driver's actions as being
|
||||
performed by the user who owns this API token.
|
||||
|
||||
If you created a Purity user account that is dedicated to managing your
|
||||
OpenStack Block Storage volumes, copy the API token from that user
|
||||
account.
|
||||
|
||||
Use the appropriate create or list command below to display and copy the
|
||||
Purity API token:
|
||||
|
||||
* To create a new API token:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ pureadmin create --api-token USER
|
||||
|
||||
The following is an example output:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ pureadmin create --api-token pureuser
|
||||
Name API Token Created
|
||||
pureuser 902fdca3-7e3f-d2e4-d6a6-24c2285fe1d9 2014-08-04 14:50:30
|
||||
|
||||
* To list an existing API token:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ pureadmin list --api-token --expose USER
|
||||
|
||||
The following is an example output:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ pureadmin list --api-token --expose pureuser
|
||||
Name API Token Created
|
||||
pureuser 902fdca3-7e3f-d2e4-d6a6-24c2285fe1d9 2014-08-04 14:50:30
|
||||
|
||||
#. Copy the API token retrieved (``902fdca3-7e3f-d2e4-d6a6-24c2285fe1d9`` from
|
||||
the examples above) to use in the next step.
|
||||
|
||||
#. Edit the OpenStack Block Storage service configuration file.
|
||||
The following sample ``/etc/cinder/cinder.conf`` configuration lists the
|
||||
relevant settings for a typical Block Storage service using a single
|
||||
Pure Storage array:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
enabled_backends = puredriver-1
|
||||
default_volume_type = puredriver-1
|
||||
|
||||
[puredriver-1]
|
||||
volume_backend_name = puredriver-1
|
||||
volume_driver = PURE_VOLUME_DRIVER
|
||||
san_ip = IP_PURE_MGMT
|
||||
pure_api_token = PURE_API_TOKEN
|
||||
use_multipath_for_image_xfer = True
|
||||
|
||||
Replace the following variables accordingly:
|
||||
|
||||
PURE_VOLUME_DRIVER
|
||||
Use either ``cinder.volume.drivers.pure.PureISCSIDriver`` for iSCSI or
|
||||
``cinder.volume.drivers.pure.PureFCDriver`` for Fibre Channel
|
||||
connectivity.
|
||||
|
||||
IP_PURE_MGMT
|
||||
The IP address of the Pure Storage array's management interface or a
|
||||
domain name that resolves to that IP address.
|
||||
|
||||
PURE_API_TOKEN
|
||||
The Purity Authorization token that the volume driver uses to
|
||||
perform volume management on the Pure Storage array.
|
||||
|
||||
.. note::
|
||||
|
||||
The volume driver automatically creates Purity host objects for
|
||||
initiators as needed. If CHAP authentication is enabled via the
|
||||
``use_chap_auth`` setting, you must ensure there are no manually
|
||||
created host objects with IQN's that will be used by the OpenStack
|
||||
Block Storage service. The driver will only modify credentials on hosts that
|
||||
it manages.
|
||||
|
||||
.. note::
|
||||
|
||||
If using the PureFCDriver it is recommended to use the OpenStack
|
||||
Block Storage Fibre Channel Zone Manager.
|
||||
|
||||
Volume auto-eradication
|
||||
~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
To enable auto-eradication of deleted volumes, snapshots, and consistency
|
||||
groups on deletion, modify the following option in the ``cinder.conf`` file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
pure_eradicate_on_delete = true
|
||||
|
||||
By default, auto-eradication is disabled and all deleted volumes, snapshots,
|
||||
and consistency groups are retained on the Pure Storage array in a recoverable
|
||||
state for 24 hours from time of deletion.
|
||||
|
||||
SSL certification
|
||||
~~~~~~~~~~~~~~~~~
|
||||
|
||||
To enable SSL certificate validation, modify the following option in the
|
||||
``cinder.conf`` file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
driver_ssl_cert_verify = true
|
||||
|
||||
By default, SSL certificate validation is disabled.
|
||||
|
||||
To specify a non-default path to ``CA_Bundle`` file or directory with
|
||||
certificates of trusted CAs:
|
||||
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
driver_ssl_cert_path = Certificate path
|
||||
|
||||
.. note::
|
||||
|
||||
This requires the use of Pure Storage Python SDK > 1.4.0.
|
||||
|
||||
Replication configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Add the following to the back-end specification to specify another Flash
|
||||
Array to replicate to:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[puredriver-1]
|
||||
replication_device = backend_id:PURE2_NAME,san_ip:IP_PURE2_MGMT,api_token:PURE2_API_TOKEN
|
||||
|
||||
Where ``PURE2_NAME`` is the name of the remote Pure Storage system,
|
||||
``IP_PURE2_MGMT`` is the management IP address of the remote array,
|
||||
and ``PURE2_API_TOKEN`` is the Purity Authorization token
|
||||
of the remote array.
|
||||
|
||||
Note that more than one ``replication_device`` line can be added to allow for
|
||||
multi-target device replication.
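
For example, a back end replicating to two target arrays could carry two
entries (the array names, management addresses, and tokens below are
placeholders):

.. code-block:: ini

   [puredriver-1]
   replication_device = backend_id:PURE2_NAME,san_ip:IP_PURE2_MGMT,api_token:PURE2_API_TOKEN
   replication_device = backend_id:PURE3_NAME,san_ip:IP_PURE3_MGMT,api_token:PURE3_API_TOKEN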
|
||||
|
||||
A volume is only replicated if the volume is of a volume-type that has
|
||||
the extra spec ``replication_enabled`` set to ``<is> True``.
|
||||
|
||||
To create a volume type that specifies replication to remote back ends:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type create ReplicationType
|
||||
$ openstack volume type set --property replication_enabled='<is> True' ReplicationType
|
||||
|
||||
The following table contains the optional configuration parameters available
|
||||
for replication configuration with the Pure Storage array.
|
||||
|
||||
.. list-table::
   :header-rows: 1

   * - Option
     - Description
     - Default
   * - ``pure_replica_interval_default``
     - Snapshot replication interval in seconds.
     - ``900``
   * - ``pure_replica_retention_short_term_default``
     - Retain all snapshots on target for this time (in seconds).
     - ``14400``
   * - ``pure_replica_retention_long_term_per_day_default``
     - Retain how many snapshots for each day.
     - ``3``
   * - ``pure_replica_retention_long_term_default``
     - Retain snapshots per day on target for this time (in days).
     - ``7``
|
||||
|
||||
|
||||
.. note::
|
||||
|
||||
``replication-failover`` is only supported from the primary array to any of the
|
||||
multiple secondary arrays, but subsequent ``replication-failover`` is only
|
||||
supported back to the original primary array.
|
||||
|
||||
Automatic thin-provisioning/oversubscription ratio
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
To enable this feature where we calculate the array oversubscription ratio as
|
||||
(total provisioned/actual used), add the following option in the
|
||||
``cinder.conf`` file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[puredriver-1]
|
||||
pure_automatic_max_oversubscription_ratio = True
|
||||
|
||||
By default, this is disabled and we honor the hard-coded configuration option
|
||||
``max_over_subscription_ratio``.
|
||||
|
||||
.. note::
|
||||
|
||||
Arrays with very good data reduction rates (compression/data deduplication/thin provisioning)
|
||||
can get *very* large oversubscription rates applied.
|
||||
|
||||
Scheduling metrics
|
||||
~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The volume driver reports a large number of metrics that can be useful for
implementing finer control over volume placement in multi-back-end
environments using the driver filter and weigher methods.
|
||||
|
||||
Metrics reported include, but are not limited to:
|
||||
|
||||
.. code-block:: text
|
||||
|
||||
total_capacity_gb
|
||||
free_capacity_gb
|
||||
provisioned_capacity
|
||||
total_volumes
|
||||
total_snapshots
|
||||
total_hosts
|
||||
total_pgroups
|
||||
writes_per_sec
|
||||
reads_per_sec
|
||||
input_per_sec
|
||||
output_per_sec
|
||||
usec_per_read_op
|
||||
   usec_per_write_op
|
||||
queue_depth
|
||||
|
||||
.. note::
|
||||
|
||||
All total metrics include non-OpenStack managed objects on the array.
|
||||
|
||||
In conjunction with QoS extra-specs, you can create very complex algorithms to
|
||||
manage volume placement. More detailed documentation on this is available in
|
||||
other external documentation.
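
For example, a minimal sketch of combining these metrics with the driver
filter and goodness weigher could look like the following; the threshold and
scoring expressions are purely illustrative assumptions:

.. code-block:: ini

   [DEFAULT]
   scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,DriverFilter
   scheduler_default_weighers = GoodnessWeigher

   [puredriver-1]
   # Skip this back end once it already holds 500 volumes.
   filter_function = "capabilities.total_volumes < 500"
   # Score the back end by its proportion of free capacity (0-100).
   goodness_function = "100 * capabilities.free_capacity_gb / capabilities.total_capacity_gb"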
|
@ -0,0 +1,61 @@
|
||||
==============
|
||||
Quobyte driver
|
||||
==============
|
||||
|
||||
The `Quobyte <http://www.quobyte.com/>`__ volume driver enables storing Block
|
||||
Storage service volumes on a Quobyte storage back end. Block Storage service
|
||||
back ends are mapped to Quobyte volumes and individual Block Storage service
|
||||
volumes are stored as files on a Quobyte volume. Selection of the appropriate
|
||||
Quobyte volume is done by the aforementioned back end configuration that
|
||||
specifies the Quobyte volume explicitly.
|
||||
|
||||
.. note::
|
||||
|
||||
Note the dual use of the term ``volume`` in the context of Block Storage
|
||||
service volumes and in the context of Quobyte volumes.
|
||||
|
||||
For more information see `the Quobyte support webpage
|
||||
<http://support.quobyte.com/>`__.
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The Quobyte volume driver supports the following volume operations:
|
||||
|
||||
- Create, delete, attach, and detach volumes
|
||||
|
||||
- Secure NAS operation (starting with the Mitaka release, secure NAS operation
  is optional but remains the default)
|
||||
|
||||
- Create and delete a snapshot
|
||||
|
||||
- Create a volume from a snapshot
|
||||
|
||||
- Extend a volume
|
||||
|
||||
- Clone a volume
|
||||
|
||||
- Copy a volume to image
|
||||
|
||||
- Generic volume migration (no back end optimization)
|
||||
|
||||
.. note::
|
||||
|
||||
When running VM instances off Quobyte volumes, ensure that the `Quobyte
|
||||
Compute service driver <https://wiki.openstack.org/wiki/Nova/Quobyte>`__
|
||||
has been configured in your OpenStack cloud.
|
||||
|
||||
Configuration
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
To activate the Quobyte volume driver, configure the corresponding
|
||||
``volume_driver`` parameter:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.quobyte.QuobyteDriver
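
In addition to the driver class, the back end has to be pointed at the Quobyte
volume that will hold the Block Storage volumes. A minimal back-end sketch,
assuming a Quobyte registry at ``quobyte.example.com`` and a Quobyte volume
named ``cinder-volumes`` (both names are placeholders):

.. code-block:: ini

   [quobyte-1]
   volume_driver = cinder.volume.drivers.quobyte.QuobyteDriver
   quobyte_volume_url = quobyte://quobyte.example.com/cinder-volumes
   volume_backend_name = quobyte-1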
|
||||
|
||||
The following table contains the configuration options supported by the
|
||||
Quobyte driver:
|
||||
|
||||
.. include:: ../../tables/cinder-quobyte.rst
|
@ -0,0 +1,68 @@
|
||||
===================
|
||||
Scality SOFS driver
|
||||
===================
|
||||
|
||||
The Scality SOFS volume driver interacts with configured sfused mounts.
|
||||
|
||||
The Scality SOFS driver manages volumes as sparse files stored on a
|
||||
Scality Ring through sfused. Ring connection settings and sfused options
|
||||
are defined in the ``cinder.conf`` file and the configuration file
|
||||
pointed to by the ``scality_sofs_config`` option, typically
|
||||
``/etc/sfused.conf``.
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The Scality SOFS volume driver provides the following Block Storage
|
||||
volume operations:
|
||||
|
||||
- Create, delete, attach (map), and detach (unmap) volumes.
|
||||
|
||||
- Create, list, and delete volume snapshots.
|
||||
|
||||
- Create a volume from a snapshot.
|
||||
|
||||
- Copy an image to a volume.
|
||||
|
||||
- Copy a volume to an image.
|
||||
|
||||
- Clone a volume.
|
||||
|
||||
- Extend a volume.
|
||||
|
||||
- Backup a volume.
|
||||
|
||||
- Restore backup to new or existing volume.
|
||||
|
||||
Configuration
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
Use the following instructions to update the ``cinder.conf``
|
||||
configuration file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
enabled_backends = scality-1
|
||||
|
||||
[scality-1]
|
||||
volume_driver = cinder.volume.drivers.scality.ScalityDriver
|
||||
volume_backend_name = scality-1
|
||||
|
||||
scality_sofs_config = /etc/sfused.conf
|
||||
scality_sofs_mount_point = /cinder
|
||||
scality_sofs_volume_dir = cinder/volumes
|
||||
|
||||
Compute configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Use the following instructions to update the ``nova.conf`` configuration
|
||||
file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[libvirt]
|
||||
scality_sofs_mount_point = /cinder
|
||||
scality_sofs_config = /etc/sfused.conf
|
||||
|
||||
.. include:: ../../tables/cinder-scality.rst
|
@ -0,0 +1,48 @@
|
||||
===============
|
||||
Sheepdog driver
|
||||
===============
|
||||
|
||||
Sheepdog is an open-source distributed storage system that provides a
|
||||
virtual storage pool utilizing the internal disks of commodity servers.
|
||||
|
||||
Sheepdog scales to several hundred nodes, and has powerful virtual disk
|
||||
management features like snapshotting, cloning, rollback, and thin
|
||||
provisioning.
|
||||
|
||||
More information can be found on `Sheepdog
|
||||
Project <http://sheepdog.github.io/sheepdog/>`__.
|
||||
|
||||
This driver enables the use of Sheepdog through QEMU/KVM.
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Sheepdog driver supports these operations:
|
||||
|
||||
- Create, delete, attach, and detach volumes.
|
||||
|
||||
- Create, list, and delete volume snapshots.
|
||||
|
||||
- Create a volume from a snapshot.
|
||||
|
||||
- Copy an image to a volume.
|
||||
|
||||
- Copy a volume to an image.
|
||||
|
||||
- Clone a volume.
|
||||
|
||||
- Extend a volume.
|
||||
|
||||
Configuration
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
Set the following option in the ``cinder.conf`` file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.sheepdog.SheepdogDriver
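
If the sheep gateway does not listen on the default local address and port,
the driver can be pointed at it explicitly. A minimal sketch, assuming a
gateway reachable at ``192.168.10.5`` (address and port are placeholders; see
the options table below for the authoritative list):

.. code-block:: ini

   volume_driver = cinder.volume.drivers.sheepdog.SheepdogDriver
   sheepdog_store_address = 192.168.10.5
   sheepdog_store_port = 7000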
|
||||
|
||||
The following table contains the configuration options supported by the
|
||||
Sheepdog driver:
|
||||
|
||||
.. include:: ../../tables/cinder-sheepdog.rst
|
@ -0,0 +1,17 @@
|
||||
==============
|
||||
SambaFS driver
|
||||
==============
|
||||
|
||||
There is a volume back-end for Samba filesystems. Set the following in
|
||||
your ``cinder.conf`` file, and use the following options to configure it.
|
||||
|
||||
.. note::
|
||||
|
||||
The SambaFS driver requires ``qemu-img`` version 1.7 or higher on Linux
|
||||
nodes, and ``qemu-img`` version 1.6 or higher on Windows nodes.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.smbfs.SmbfsDriver
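
The driver also needs a list of SMB shares to use. As a sketch (the share
address and credentials below are placeholders), the shares file referenced by
``smbfs_shares_config`` typically lists one share per line together with its
mount options:

.. code-block:: ini

   # /etc/cinder/cinder.conf
   volume_driver = cinder.volume.drivers.smbfs.SmbfsDriver
   smbfs_shares_config = /etc/cinder/smbfs_shares

   # /etc/cinder/smbfs_shares
   //192.168.1.50/cinder_smb -o username=cinderuser,password=secret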
|
||||
|
||||
.. include:: ../../tables/cinder-smbfs.rst
|
@ -0,0 +1,104 @@
|
||||
=========
|
||||
SolidFire
|
||||
=========
|
||||
|
||||
The SolidFire Cluster is a high performance all SSD iSCSI storage device that
|
||||
provides massive scale out capability and extreme fault tolerance. A key
|
||||
feature of the SolidFire cluster is the ability to set and modify, during
operation, specific QoS levels on a volume-by-volume basis. The SolidFire
|
||||
cluster offers this along with de-duplication, compression, and an architecture
|
||||
that takes full advantage of SSDs.
|
||||
|
||||
To configure the use of a SolidFire cluster with Block Storage, modify your
|
||||
``cinder.conf`` file as follows:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
|
||||
san_ip = 172.17.1.182 # the address of your MVIP
|
||||
san_login = sfadmin # your cluster admin login
|
||||
san_password = sfpassword # your cluster admin password
|
||||
sf_account_prefix = '' # prefix for tenant account creation on solidfire cluster
|
||||
|
||||
.. warning::
|
||||
|
||||
Older versions of the SolidFire driver (prior to Icehouse) created a unique
|
||||
account prefixed with ``$cinder-volume-service-hostname-$tenant-id`` on the
|
||||
SolidFire cluster for each tenant. Unfortunately, this account formation
|
||||
resulted in issues for High Availability (HA) installations and
|
||||
installations where the ``cinder-volume`` service can move to a new node.
|
||||
The current default implementation does not experience this issue as no
|
||||
prefix is used. For installations created on a prior release, the OLD
|
||||
default behavior can be configured by using the keyword ``hostname`` in
|
||||
sf_account_prefix.
|
||||
|
||||
.. note::
|
||||
|
||||
The SolidFire driver creates names for volumes on the back end using the
|
||||
format UUID-<cinder-id>. This works well, but there is a possibility of a
|
||||
UUID collision for customers running multiple clouds against the same
|
||||
cluster. In Mitaka the ability was added to eliminate the possibility of
|
||||
collisions by introducing the **sf_volume_prefix** configuration variable.
|
||||
On the SolidFire cluster each volume will be labeled with the prefix,
|
||||
providing the ability to configure unique volume names for each cloud.
|
||||
The default prefix is 'UUID-'.
|
||||
|
||||
Changing the setting on an existing deployment will result in the existing
|
||||
volumes being inaccessible. To introduce this change to an existing
|
||||
deployment it is recommended to add the Cluster as if it were a second
|
||||
backend and disable new deployments to the current back end.
|
||||
|
||||
.. include:: ../../tables/cinder-solidfire.rst
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
* Create, delete, attach, and detach volumes.
|
||||
* Create, list, and delete volume snapshots.
|
||||
* Create a volume from a snapshot.
|
||||
* Copy an image to a volume.
|
||||
* Copy a volume to an image.
|
||||
* Clone a volume.
|
||||
* Extend a volume.
|
||||
* Retype a volume.
|
||||
* Manage and unmanage a volume.
|
||||
* Consistency group snapshots.
|
||||
|
||||
QoS support for the SolidFire drivers includes the ability to set the
|
||||
following capabilities in the OpenStack Block Storage API
|
||||
``cinder.api.contrib.qos_specs_manage`` qos specs extension module:
|
||||
|
||||
* **minIOPS** - The minimum number of IOPS guaranteed for this volume.
|
||||
Default = 100.
|
||||
|
||||
* **maxIOPS** - The maximum number of IOPS allowed for this volume.
|
||||
Default = 15,000.
|
||||
|
||||
* **burstIOPS** - The maximum number of IOPS allowed over a short period of
|
||||
time. Default = 15,000.
|
||||
|
||||
* **scaledIOPS** - The presence of this key is a flag indicating that the
|
||||
above IOPS should be scaled by the following scale values. It is recommended
|
||||
to set the value of scaledIOPS to True, but any value will work. The
|
||||
absence of this key implies false.
|
||||
|
||||
* **scaleMin** - The amount to scale the minIOPS by for every 1GB of
|
||||
additional volume size. The value must be an integer.
|
||||
|
||||
* **scaleMax** - The amount to scale the maxIOPS by for every 1GB of additional
|
||||
volume size. The value must be an integer.
|
||||
|
||||
* **scaleBurst** - The amount to scale the burstIOPS by for every 1GB of
|
||||
additional volume size. The value must be an integer.
|
||||
|
||||
The QoS keys above no longer need to be scoped, but they must be created and
|
||||
associated to a volume type. For information about how to set the key-value
|
||||
pairs and associate them with a volume type, see the `volume qos
|
||||
<https://docs.openstack.org/developer/python-openstackclient/command-objects/volume-qos.html>`_
|
||||
section in the OpenStackClient command list.
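
As a sketch, a QoS specification carrying these keys can be created and
associated with a volume type as follows (the names ``sf-gold`` and
``solidfire-gold`` are illustrative):

.. code-block:: console

   $ openstack volume qos create --property minIOPS=200 --property maxIOPS=5000 --property burstIOPS=8000 sf-gold
   $ openstack volume type create solidfire-gold
   $ openstack volume qos associate sf-gold solidfire-gold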
|
||||
|
||||
.. note::
|
||||
|
||||
When using scaledIOPS, the scale values must be chosen such that the
|
||||
constraint minIOPS <= maxIOPS <= burstIOPS is always true. The driver will
|
||||
enforce this constraint.
|
124
doc/source/config-reference/block-storage/drivers/synology-dsm-driver.rst
Executable file
@ -0,0 +1,124 @@
|
||||
==========================
|
||||
Synology DSM volume driver
|
||||
==========================
|
||||
|
||||
The ``SynoISCSIDriver`` volume driver allows Synology NAS to be used for Block
|
||||
Storage (cinder) in OpenStack deployments. Information on OpenStack Block
|
||||
Storage volumes is available in the DSM Storage Manager.
|
||||
|
||||
System requirements
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The Synology driver has the following requirements:
|
||||
|
||||
* DSM version 6.0.2 or later.
|
||||
|
||||
* Your Synology NAS model must support advanced file LUN, iSCSI Target, and
|
||||
snapshot features. Refer to the `Support List for applied models
|
||||
<https://www.synology.com/en-global/dsm/6.0/iSCSI_virtualization#OpenStack>`_.
|
||||
|
||||
.. note::
|
||||
|
||||
The DSM driver is available in the OpenStack Newton release.
|
||||
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
* Create, delete, clone, attach, and detach volumes.
|
||||
|
||||
* Create and delete volume snapshots.
|
||||
|
||||
* Create a volume from a snapshot.
|
||||
|
||||
* Copy an image to a volume.
|
||||
|
||||
* Copy a volume to an image.
|
||||
|
||||
* Extend a volume.
|
||||
|
||||
* Get volume statistics.
|
||||
|
||||
Driver configuration
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Edit the ``/etc/cinder/cinder.conf`` file on your volume driver host.
|
||||
|
||||
Synology driver uses a volume in Synology NAS as the back end of Block Storage.
|
||||
Every time you create a new Block Storage volume, the system will create an
|
||||
advanced file LUN in your Synology volume to be used for this new Block Storage
|
||||
volume.
|
||||
|
||||
The following example shows how to use different Synology NAS servers as the
|
||||
back end. If you want to use all volumes on your Synology NAS, add another
|
||||
section with the volume number to differentiate between volumes within the same
|
||||
Synology NAS.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[default]
|
||||
   enabled_backends = ds1515pV1, ds1515pV2, rs3017xsV1, others
|
||||
|
||||
[ds1515pV1]
|
||||
# configuration for volume 1 in DS1515+
|
||||
|
||||
[ds1515pV2]
|
||||
# configuration for volume 2 in DS1515+
|
||||
|
||||
[rs3017xsV1]
|
||||
# configuration for volume 1 in RS3017xs
|
||||
|
||||
Each section indicates the volume number and the way in which the connection is
|
||||
established. Below is an example of a basic configuration:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[Your_Section_Name]
|
||||
|
||||
# Required settings
|
||||
volume_driver = cinder.volume.drivers.synology.synology_iscsi.SynoISCSIDriver
|
||||
   iscsi_protocol = iscsi
|
||||
iscsi_ip_address = DS_IP
|
||||
synology_admin_port = DS_PORT
|
||||
synology_username = DS_USER
|
||||
synology_password = DS_PW
|
||||
synology_pool_name = DS_VOLUME
|
||||
|
||||
# Optional settings
|
||||
volume_backend_name = VOLUME_BACKEND_NAME
|
||||
iscsi_secondary_ip_addresses = IP_ADDRESSES
|
||||
driver_use_ssl = True
|
||||
use_chap_auth = True
|
||||
chap_username = CHAP_USER_NAME
|
||||
chap_password = CHAP_PASSWORD
|
||||
|
||||
``DS_PORT``
|
||||
This is the port for DSM management. The default value for DSM is 5000
|
||||
(HTTP) and 5001 (HTTPS). To use HTTPS connections, you must set
|
||||
``driver_use_ssl = True``.
|
||||
|
||||
``DS_IP``
|
||||
This is the IP address of your Synology NAS.
|
||||
|
||||
``DS_USER``
|
||||
This is the account of any DSM administrator.
|
||||
|
||||
``DS_PW``
|
||||
This is the password for ``DS_USER``.
|
||||
|
||||
``DS_VOLUME``
|
||||
This is the volume you want to use as the storage pool for the Block
|
||||
Storage service. The format is ``volume[0-9]+``, and the number is the same
|
||||
as the volume number in DSM.
|
||||
|
||||
.. note::
|
||||
|
||||
If you set ``driver_use_ssl`` as ``True``, ``synology_admin_port`` must be
|
||||
an HTTPS port.
|
||||
|
||||
Configuration options
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The Synology DSM driver supports the following configuration options:
|
||||
|
||||
.. include:: ../../tables/cinder-synology.rst
|
@ -0,0 +1,81 @@
|
||||
======
|
||||
Tintri
|
||||
======
|
||||
|
||||
Tintri VMstore is a smart storage that sees, learns, and adapts for cloud and
|
||||
virtualization. The Tintri Block Storage driver interacts with configured
|
||||
VMstore running Tintri OS 4.0 and above. It supports various operations using
|
||||
Tintri REST APIs and NFS protocol.
|
||||
|
||||
To configure the use of a Tintri VMstore with Block Storage, perform the
|
||||
following actions:
|
||||
|
||||
#. Edit the ``etc/cinder/cinder.conf`` file and set the
|
||||
``cinder.volume.drivers.tintri`` options:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver=cinder.volume.drivers.tintri.TintriDriver
|
||||
      # Mount options passed to the nfs client. See the nfs man page
      # for details. (string value)
|
||||
nfs_mount_options = vers=3,lookupcache=pos
|
||||
|
||||
#
|
||||
# Options defined in cinder.volume.drivers.tintri
|
||||
#
|
||||
|
||||
# The hostname (or IP address) for the storage system (string
|
||||
# value)
|
||||
tintri_server_hostname = {Tintri VMstore Management IP}
|
||||
|
||||
# User name for the storage system (string value)
|
||||
tintri_server_username = {username}
|
||||
|
||||
# Password for the storage system (string value)
|
||||
tintri_server_password = {password}
|
||||
|
||||
# API version for the storage system (string value)
|
||||
# tintri_api_version = v310
|
||||
|
||||
# Following options needed for NFS configuration
|
||||
# File with the list of available nfs shares (string value)
|
||||
# nfs_shares_config = /etc/cinder/nfs_shares
|
||||
|
||||
# Tintri driver will clean up unused image snapshots. With the following
|
||||
# option, users can configure how long unused image snapshots are
|
||||
# retained. Default retention policy is 30 days
|
||||
# tintri_image_cache_expiry_days = 30
|
||||
|
||||
# Path to NFS shares file storing images.
|
||||
# Users can store Glance images in the NFS share of the same VMstore
|
||||
# mentioned in the following file. These images need to have additional
|
||||
# metadata ``provider_location`` configured in Glance, which should point
|
||||
# to the NFS share path of the image.
|
||||
# This option will enable Tintri driver to directly clone from Glance
|
||||
# image stored on same VMstore (rather than downloading image
|
||||
# from Glance)
|
||||
# tintri_image_shares_config = <Path to image NFS share>
|
||||
#
|
||||
# For example:
|
||||
# Glance image metadata
|
||||
# provider_location =>
|
||||
# nfs://<data_ip>/tintri/glance/84829294-c48b-4e16-a878-8b2581efd505
|
||||
|
||||
#. Edit the ``/etc/nova/nova.conf`` file and set the ``nfs_mount_options``:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[libvirt]
|
||||
nfs_mount_options = vers=3
|
||||
|
||||
#. Edit the ``/etc/cinder/nfs_shares`` file and add the Tintri VMstore mount
|
||||
points associated with the configured VMstore management IP in the
|
||||
``cinder.conf`` file:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
{vmstore_data_ip}:/tintri/{submount1}
|
||||
{vmstore_data_ip}:/tintri/{submount2}
|
||||
|
||||
|
||||
.. include:: ../../tables/cinder-tintri.rst
|
@ -0,0 +1,107 @@
|
||||
===========================================
|
||||
Violin Memory 7000 Series FSP volume driver
|
||||
===========================================
|
||||
|
||||
The OpenStack V7000 driver package from Violin Memory adds Block Storage
|
||||
service support for Violin 7300 Flash Storage Platforms (FSPs) and 7700 FSP
|
||||
controllers.
|
||||
|
||||
The driver package release can be used with any OpenStack Liberty deployment
|
||||
for all 7300 FSPs and 7700 FSP controllers running Concerto 7.5.3 and later
|
||||
using Fibre Channel HBAs.
|
||||
|
||||
System requirements
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
To use the Violin driver, the following are required:
|
||||
|
||||
- Violin 7300/7700 series FSP with:
|
||||
|
||||
- Concerto OS version 7.5.3 or later
|
||||
|
||||
- Fibre channel host interfaces
|
||||
|
||||
- The Violin block storage driver: This driver implements the block storage API
|
||||
calls. The driver is included with the OpenStack Liberty release.
|
||||
|
||||
- The vmemclient library: This is the Violin Array Communications library to
|
||||
the Flash Storage Platform through a REST-like interface. The client can be
|
||||
installed using the python 'pip' installer tool. Further information on
|
||||
vmemclient can be found on `PyPI
|
||||
<https://pypi.python.org/pypi/vmemclient/>`__.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
pip install vmemclient
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
- Create, delete, attach, and detach volumes.
|
||||
|
||||
- Create, list, and delete volume snapshots.
|
||||
|
||||
- Create a volume from a snapshot.
|
||||
|
||||
- Copy an image to a volume.
|
||||
|
||||
- Copy a volume to an image.
|
||||
|
||||
- Clone a volume.
|
||||
|
||||
- Extend a volume.
|
||||
|
||||
.. note::
|
||||
|
||||
Listed operations are supported for thick, thin, and dedup luns,
|
||||
with the exception of cloning. Cloning operations are supported only
|
||||
on thick luns.
|
||||
|
||||
Driver configuration
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Once the array is configured as per the installation guide, it is simply a
|
||||
matter of editing the cinder configuration file to add or modify the
|
||||
parameters. The driver currently supports only Fibre Channel configuration.
|
||||
|
||||
Fibre channel configuration
|
||||
---------------------------
|
||||
|
||||
Set the following in your ``cinder.conf`` configuration file, replacing the
|
||||
variables using the guide in the following section:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.violin.v7000_fcp.V7000FCPDriver
|
||||
volume_backend_name = vmem_violinfsp
|
||||
extra_capabilities = VMEM_CAPABILITIES
|
||||
san_ip = VMEM_MGMT_IP
|
||||
san_login = VMEM_USER_NAME
|
||||
san_password = VMEM_PASSWORD
|
||||
use_multipath_for_image_xfer = true
|
||||
|
||||
Configuration parameters
|
||||
------------------------
|
||||
|
||||
Description of configuration value placeholders:
|
||||
|
||||
VMEM_CAPABILITIES
|
||||
  User-defined capabilities: a JSON-formatted string specifying key-value
  pairs (string value). The capabilities specifically supported are ``dedup``
  and ``thin``. Listing one or both of them in the ``cinder.conf`` file
  indicates that this back end should be selected for creating LUNs whose
  associated volume type has ``dedup`` or ``thin`` specified in its
  extra_specs. For example, if the FSP is configured to support dedup LUNs,
  set the associated driver capabilities to: {"dedup":"True","thin":"True"}.
|
||||
|
||||
VMEM_MGMT_IP
|
||||
External IP address or host name of the Violin 7300 Memory Gateway. This
|
||||
can be an IP address or host name.
|
||||
|
||||
VMEM_USER_NAME
|
||||
Log-in user name for the Violin 7300 Memory Gateway or 7700 FSP controller.
|
||||
This user must have administrative rights on the array or controller.
|
||||
|
||||
VMEM_PASSWORD
|
||||
Log-in user's password.
|
@ -0,0 +1,347 @@
|
||||
.. _block_storage_vmdk_driver:
|
||||
|
||||
==================
|
||||
VMware VMDK driver
|
||||
==================
|
||||
|
||||
Use the VMware VMDK driver to enable management of the OpenStack Block Storage
|
||||
volumes on vCenter-managed data stores. Volumes are backed by VMDK files on
|
||||
data stores that use any VMware-compatible storage technology such as NFS,
|
||||
iSCSI, Fibre Channel, and vSAN.
|
||||
|
||||
.. note::
|
||||
|
||||
The VMware VMDK driver requires vCenter version 5.1 at minimum.
|
||||
|
||||
Functional context
|
||||
~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The VMware VMDK driver connects to vCenter, through which it can dynamically
|
||||
access all the data stores visible from the ESX hosts in the managed cluster.
|
||||
|
||||
When you create a volume, the VMDK driver creates a VMDK file on demand. The
|
||||
VMDK file creation completes only when the volume is subsequently attached to
|
||||
an instance. The reason for this requirement is that data stores visible to the
|
||||
instance determine where to place the volume. Before the service creates the
|
||||
VMDK file, attach a volume to the target instance.
|
||||
|
||||
The running vSphere VM is automatically reconfigured to attach the VMDK file as
|
||||
an extra disk. Once attached, you can log in to the running vSphere VM to
|
||||
rescan and discover this extra disk.
|
||||
|
||||
With the update to ESX version 6.0, the VMDK driver now supports NFS version
|
||||
4.1.
|
||||
|
||||
Configuration
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
The recommended volume driver for OpenStack Block Storage is the VMware vCenter
|
||||
VMDK driver. When you configure the driver, you must match it with the
|
||||
appropriate OpenStack Compute driver from VMware and both drivers must point to
|
||||
the same server.
|
||||
|
||||
In the ``nova.conf`` file, use this option to define the Compute driver:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
compute_driver = vmwareapi.VMwareVCDriver
|
||||
|
||||
In the ``cinder.conf`` file, use this option to define the volume
|
||||
driver:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
|
||||
|
||||
The following table lists various options that the drivers support for the
|
||||
OpenStack Block Storage configuration (``cinder.conf``):
|
||||
|
||||
.. include:: ../../tables/cinder-vmware.rst
|
||||
|
||||
VMDK disk type
|
||||
~~~~~~~~~~~~~~
|
||||
|
||||
The VMware VMDK drivers support the creation of VMDK disk file types ``thin``,
|
||||
``lazyZeroedThick`` (sometimes called thick or flat), or ``eagerZeroedThick``.
|
||||
|
||||
A thin virtual disk is allocated and zeroed on demand as the space is used.
|
||||
Unused space on a Thin disk is available to other users.
|
||||
|
||||
A lazy zeroed thick virtual disk will have all space allocated at disk
|
||||
creation. This reserves the entire disk space, so it is not available to other
|
||||
users at any time.
|
||||
|
||||
An eager zeroed thick virtual disk is similar to a lazy zeroed thick disk, in
|
||||
that the entire disk is allocated at creation. However, in this type, any
|
||||
previous data will be wiped clean on the disk before the write. This can mean
|
||||
that the disk will take longer to create, but can also prevent issues with
|
||||
stale data on physical media.
|
||||
|
||||
Use the ``vmware:vmdk_type`` extra spec key with the appropriate value to
|
||||
specify the VMDK disk file type. This table shows the mapping between the extra
|
||||
spec entry and the VMDK disk file type:
|
||||
|
||||
.. list-table:: Extra spec entry to VMDK disk file type mapping
|
||||
:header-rows: 1
|
||||
|
||||
* - Disk file type
|
||||
- Extra spec key
|
||||
- Extra spec value
|
||||
* - thin
|
||||
- ``vmware:vmdk_type``
|
||||
- ``thin``
|
||||
* - lazyZeroedThick
|
||||
- ``vmware:vmdk_type``
|
||||
- ``thick``
|
||||
* - eagerZeroedThick
|
||||
- ``vmware:vmdk_type``
|
||||
- ``eagerZeroedThick``
|
||||
|
||||
If you do not specify a ``vmdk_type`` extra spec entry, the disk file type will
|
||||
default to ``thin``.
|
||||
|
||||
The following example shows how to create a ``lazyZeroedThick`` VMDK volume by
|
||||
using the appropriate ``vmdk_type``:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type create THICK_VOLUME
|
||||
$ openstack volume type set --property vmware:vmdk_type=thick THICK_VOLUME
|
||||
$ openstack volume create --size 1 --type THICK_VOLUME VOLUME1
|
||||
|
||||
Clone type
|
||||
~~~~~~~~~~
|
||||
|
||||
With the VMware VMDK drivers, you can create a volume from another
|
||||
source volume or a snapshot point. The VMware vCenter VMDK driver
|
||||
supports the ``full`` and ``linked/fast`` clone types. Use the
|
||||
``vmware:clone_type`` extra spec key to specify the clone type. The
|
||||
following table captures the mapping for clone types:
|
||||
|
||||
.. list-table:: Extra spec entry to clone type mapping
|
||||
:header-rows: 1
|
||||
|
||||
* - Clone type
|
||||
- Extra spec key
|
||||
- Extra spec value
|
||||
* - full
|
||||
- ``vmware:clone_type``
|
||||
- ``full``
|
||||
* - linked/fast
|
||||
- ``vmware:clone_type``
|
||||
- ``linked``
|
||||
|
||||
If you do not specify the clone type, the default is ``full``.
|
||||
|
||||
The following example shows linked cloning from a source volume, which is
|
||||
created from an image:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type create FAST_CLONE
|
||||
$ openstack volume type set --property vmware:clone_type=linked FAST_CLONE
|
||||
$ openstack volume create --size 1 --type FAST_CLONE --image MYIMAGE SOURCE_VOL
|
||||
$ openstack volume create --size 1 --source SOURCE_VOL DEST_VOL
|
||||
|
||||
Use vCenter storage policies to specify back-end data stores
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This section describes how to configure back-end data stores using storage
|
||||
policies. In vCenter 5.5 and greater, you can create one or more storage
|
||||
policies and expose them as a Block Storage volume-type to a vmdk volume. The
|
||||
storage policies are exposed to the vmdk driver through the extra spec property
|
||||
with the ``vmware:storage_profile`` key.
|
||||
|
||||
For example, assume a storage policy in vCenter named ``gold_policy`` and a
|
||||
Block Storage volume type named ``vol1`` with the extra spec key
|
||||
``vmware:storage_profile`` set to the value ``gold_policy``. Any Block Storage
|
||||
volume creation that uses the ``vol1`` volume type places the volume only in
|
||||
data stores that match the ``gold_policy`` storage policy.
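
A minimal sketch of creating such a volume type (the names ``vol1``,
``gold_policy``, and ``gold_volume`` follow the example above and are
illustrative):

.. code-block:: console

   $ openstack volume type create vol1
   $ openstack volume type set --property vmware:storage_profile=gold_policy vol1
   $ openstack volume create --size 1 --type vol1 gold_volume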
|
||||
|
||||
The Block Storage back-end configuration for vSphere data stores is
|
||||
automatically determined based on the vCenter configuration. If you configure a
|
||||
connection to connect to vCenter version 5.5 or later in the ``cinder.conf``
|
||||
file, the use of storage policies to configure back-end data stores is
|
||||
automatically supported.
|
||||
|
||||
.. note::
|
||||
|
||||
You must configure any data stores that you configure for the Block
|
||||
Storage service for the Compute service.
|
||||
|
||||
**To configure back-end data stores by using storage policies**
|
||||
|
||||
#. In vCenter, tag the data stores to be used for the back end.
|
||||
|
||||
OpenStack also supports policies that are created by using vendor-specific
|
||||
capabilities; for example vSAN-specific storage policies.
|
||||
|
||||
.. note::
|
||||
|
||||
The tag value serves as the policy. For details, see :ref:`vmware-spbm`.
|
||||
|
||||
#. Set the extra spec key ``vmware:storage_profile`` in the desired Block
|
||||
Storage volume types to the policy name that you created in the previous
|
||||
step.
|
||||
|
||||
#. Optionally, for the ``vmware_host_version`` parameter, enter the version
|
||||
number of your vSphere platform. For example, ``5.5``.
|
||||
|
||||
This setting overrides the default location for the corresponding WSDL file.
|
||||
Among other scenarios, you can use this setting to prevent WSDL error
|
||||
messages during the development phase or to work with a newer version of
|
||||
vCenter.
|
||||
|
||||
#. Complete the other vCenter configuration parameters as appropriate.
|
||||
|
||||
.. note::
|
||||
|
||||
   For any volume that is created without an associated policy (that is,
   without an associated volume type that specifies the
   ``vmware:storage_profile`` extra spec), there is no policy-based placement
   for that volume.
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The VMware vCenter VMDK driver supports these operations:
|
||||
|
||||
- Create, delete, attach, and detach volumes.
|
||||
|
||||
.. note::
|
||||
|
||||
When a volume is attached to an instance, a reconfigure operation is
|
||||
performed on the instance to add the volume's VMDK to it. The user must
|
||||
manually rescan and mount the device from within the guest operating
|
||||
system.
|
||||
|
||||
- Create, list, and delete volume snapshots.
|
||||
|
||||
.. note::
|
||||
|
||||
Allowed only if volume is not attached to an instance.
|
||||
|
||||
- Create a volume from a snapshot.
|
||||
|
||||
.. note::
|
||||
|
||||
The vmdk UUID in vCenter will not be set to the volume UUID if the
|
||||
vCenter version is 6.0 or above and the extra spec key ``vmware:clone_type``
|
||||
in the destination volume type is set to ``linked``.
|
||||
|
||||
- Copy an image to a volume.
|
||||
|
||||
.. note::
|
||||
|
||||
Only images in ``vmdk`` disk format with ``bare`` container format are
|
||||
supported. The ``vmware_disktype`` property of the image can be
|
||||
``preallocated``, ``sparse``, ``streamOptimized`` or ``thin``.
|
||||
|
||||
- Copy a volume to an image.
|
||||
|
||||
.. note::
|
||||
|
||||
- Allowed only if the volume is not attached to an instance.
|
||||
- This operation creates a ``streamOptimized`` disk image.
|
||||
|
||||
- Clone a volume.
|
||||
|
||||
.. note::
|
||||
|
||||
- Supported only if the source volume is not attached to an instance.
|
||||
- The vmdk UUID in vCenter will not be set to the volume UUID if the
|
||||
vCenter version is 6.0 or above and the extra spec key ``vmware:clone_type``
|
||||
in the destination volume type is set to ``linked``.
|
||||
|
||||
- Backup a volume.
|
||||
|
||||
.. note::
|
||||
|
||||
This operation creates a backup of the volume in ``streamOptimized``
|
||||
disk format.
|
||||
|
||||
- Restore backup to new or existing volume.
|
||||
|
||||
.. note::
|
||||
|
||||
Supported only if the existing volume doesn't contain snapshots.
|
||||
|
||||
- Change the type of a volume.
|
||||
|
||||
.. note::
|
||||
|
||||
This operation is supported only if the volume state is ``available``.
|
||||
|
||||
- Extend a volume.
|
||||
|
||||
|
||||
.. _vmware-spbm:
|
||||
|
||||
Storage policy-based configuration in vCenter
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
You can configure Storage Policy-Based Management (SPBM) profiles for vCenter
|
||||
data stores supporting the Compute, Image service, and Block Storage components
|
||||
of an OpenStack implementation.
|
||||
|
||||
In a vSphere OpenStack deployment, SPBM enables you to delegate several data
|
||||
stores for storage, which reduces the risk of running out of storage space. The
|
||||
policy logic selects the data store based on accessibility and available
|
||||
storage space.
|
||||
|
||||
Prerequisites
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
- Determine the data stores to be used by the SPBM policy.
|
||||
|
||||
- Determine the tag that identifies the data stores in the OpenStack component
|
||||
configuration.
|
||||
|
||||
- Create separate policies or sets of data stores for separate
|
||||
OpenStack components.
|
||||
|
||||
Create storage policies in vCenter
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
#. In vCenter, create the tag that identifies the data stores:
|
||||
|
||||
#. From the :guilabel:`Home` screen, click :guilabel:`Tags`.
|
||||
|
||||
#. Specify a name for the tag.
|
||||
|
||||
#. Specify a tag category. For example, ``spbm-cinder``.
|
||||
|
||||
#. Apply the tag to the data stores to be used by the SPBM policy.
|
||||
|
||||
.. note::
|
||||
|
||||
For details about creating tags in vSphere, see the `vSphere
|
||||
documentation
|
||||
<http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.vsphere.vcenterhost.doc/GUID-379F40D3-8CD6-449E-89CB-79C4E2683221.html>`__.
|
||||
|
||||
#. In vCenter, create a tag-based storage policy that uses one or more tags to
|
||||
identify a set of data stores.
|
||||
|
||||
.. note::
|
||||
|
||||
For details about creating storage policies in vSphere, see the `vSphere
|
||||
documentation
|
||||
<http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.vsphere.storage.doc/GUID-89091D59-D844-46B2-94C2-35A3961D23E7.html>`__.
|
||||
|
||||
Data store selection
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
If storage policy is enabled, the driver initially selects all the data stores
|
||||
that match the associated storage policy.
|
||||
|
||||
If two or more data stores match the storage policy, the driver chooses a data
|
||||
store that is connected to the maximum number of hosts.
|
||||
|
||||
In case of ties, the driver chooses the data store with the lowest space
utilization, where space utilization is defined by the
``(1 - freespace/totalspace)`` metric.
|
||||
|
||||
These actions reduce the number of volume migrations while attaching the volume
|
||||
to instances.
|
||||
|
||||
The volume must be migrated if the ESX host for the instance cannot access the
|
||||
data store that contains the volume.
|
@ -0,0 +1,14 @@
|
||||
========================
|
||||
Virtuozzo Storage driver
|
||||
========================
|
||||
|
||||
The Virtuozzo Storage driver is a fault-tolerant distributed storage
|
||||
system that is optimized for virtualization workloads.
|
||||
Set the following in your ``cinder.conf`` file, and use the following
|
||||
options to configure it.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver
|
||||
|
||||
.. include:: ../../tables/cinder-vzstorage.rst
|
@ -0,0 +1,122 @@
|
||||
===========================
|
||||
Windows iSCSI volume driver
|
||||
===========================
|
||||
|
||||
Windows Server 2012 and Windows Storage Server 2012 offer an integrated iSCSI
|
||||
Target service that can be used with OpenStack Block Storage in your stack.
|
||||
Being entirely a software solution, consider it in particular for mid-sized
|
||||
networks where the costs of a SAN might be excessive.
|
||||
|
||||
The Windows Block Storage driver works with OpenStack Compute on any
|
||||
hypervisor. It includes snapshotting support and the ``boot from volume``
|
||||
feature.
|
||||
|
||||
This driver creates volumes backed by fixed-type VHD images on Windows Server
|
||||
2012 and dynamic-type VHDX on Windows Server 2012 R2, stored locally on a
|
||||
user-specified path. The system uses those images as iSCSI disks and exports
|
||||
them through iSCSI targets. Each volume has its own iSCSI target.
|
||||
|
||||
This driver has been tested with Windows Server 2012 and Windows Server 2012 R2
|
||||
using the Server and Storage Server distributions.
|
||||
|
||||
Install the ``cinder-volume`` service as well as the required Python components
|
||||
directly onto the Windows node.
|
||||
|
||||
You may install and configure ``cinder-volume`` and its dependencies manually
|
||||
using the following guide or you may use the ``Cinder Volume Installer``,
|
||||
presented below.
|
||||
|
||||
Installing using the OpenStack cinder volume installer
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
In case you want to avoid all the manual setup, you can use Cloudbase
|
||||
Solutions' installer. You can find it at
|
||||
https://www.cloudbase.it/downloads/CinderVolumeSetup_Beta.msi. It installs an
|
||||
independent Python environment, in order to avoid conflicts with existing
applications, and dynamically generates a ``cinder.conf`` file based on the
parameters that you provide.
|
||||
|
||||
``cinder-volume`` will be configured to run as a Windows Service, which can
|
||||
be restarted using:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
PS C:\> net stop cinder-volume ; net start cinder-volume
|
||||
|
||||
The installer can also be used in unattended mode. More details about how to
|
||||
use the installer and its features can be found at https://www.cloudbase.it.
|
||||
|
||||
Windows Server configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The required service in order to run ``cinder-volume`` on Windows is
|
||||
``wintarget``. This will require the iSCSI Target Server Windows feature
|
||||
to be installed. You can install it by running the following command:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
   PS C:\> Add-WindowsFeature FS-iSCSITarget-Server
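
Once the feature is installed, make sure the ``wintarget`` service is running
before starting ``cinder-volume``, for example:

.. code-block:: console

   PS C:\> Start-Service wintarget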
|
||||
|
||||
.. note::
|
||||
|
||||
The Windows Server installation requires at least 16 GB of disk space. The
|
||||
volumes hosted by this node need the extra space.
|
||||
|
||||
For ``cinder-volume`` to work properly, you must configure NTP as explained
|
||||
in :ref:`configure-ntp-windows`.
|
||||
|
||||
Next, install the requirements as described in :ref:`windows-requirements`.
|
||||
|
||||
Getting the code
|
||||
~~~~~~~~~~~~~~~~
|
||||
|
||||
Git can be used to download the necessary source code. The installer to run Git
|
||||
on Windows can be downloaded here:
|
||||
|
||||
https://git-for-windows.github.io/
|
||||
|
||||
Once installed, run the following to clone the OpenStack Block Storage code:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
PS C:\> git.exe clone https://git.openstack.org/openstack/cinder
|
||||
|
||||
Configure cinder-volume
|
||||
~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The ``cinder.conf`` file may be placed in ``C:\etc\cinder``. Below is a
|
||||
configuration sample for using the Windows iSCSI Driver:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
auth_strategy = keystone
|
||||
volume_name_template = volume-%s
|
||||
volume_driver = cinder.volume.drivers.windows.WindowsDriver
|
||||
glance_api_servers = IP_ADDRESS:9292
|
||||
rabbit_host = IP_ADDRESS
|
||||
rabbit_port = 5672
|
||||
sql_connection = mysql+pymysql://root:Passw0rd@IP_ADDRESS/cinder
|
||||
windows_iscsi_lun_path = C:\iSCSIVirtualDisks
|
||||
rabbit_password = Passw0rd
|
||||
logdir = C:\OpenStack\Log\
|
||||
image_conversion_dir = C:\ImageConversionDir
|
||||
debug = True
|
||||
|
||||
The following table contains a reference to the only driver-specific
|
||||
option that will be used by the Block Storage Windows driver:
|
||||
|
||||
.. include:: ../../tables/cinder-windows.rst
|
||||
|
||||
Run cinder-volume
|
||||
-----------------
|
||||
|
||||
After configuring ``cinder-volume`` using the ``cinder.conf`` file, you may
|
||||
use the following commands to install and run the service (note that you
|
||||
must replace the variables with the proper paths):
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
PS C:\> python $CinderClonePath\setup.py install
|
||||
   PS C:\> cmd /c C:\python27\python.exe c:\python27\Scripts\cinder-volume --config-file $CinderConfPath
|
@ -0,0 +1,122 @@
|
||||
==================
|
||||
X-IO volume driver
|
||||
==================
|
||||
|
||||
The X-IO volume driver for OpenStack Block Storage enables ISE products to be
|
||||
managed by OpenStack Block Storage nodes. This driver can be configured to work
|
||||
with iSCSI and Fibre Channel storage protocols. The X-IO volume driver allows
|
||||
the cloud operator to take advantage of ISE features like quality of
|
||||
service (QoS) and Continuous Adaptive Data Placement (CADP). It also supports
|
||||
creating thin volumes and specifying volume media affinity.
|
||||
|
||||
Requirements
|
||||
~~~~~~~~~~~~
|
||||
|
||||
ISE FW 2.8.0 or ISE FW 3.1.0 is required for OpenStack Block Storage
|
||||
support. The X-IO volume driver will not work with older ISE FW.
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
- Create, delete, attach, detach, retype, clone, and extend volumes.
|
||||
- Create a volume from snapshot.
|
||||
- Create, list, and delete volume snapshots.
|
||||
- Manage and unmanage a volume.
|
||||
- Get volume statistics.
|
||||
- Create a thin provisioned volume.
|
||||
- Create volumes with QoS specifications.
|
||||
|
||||
Configure X-IO Volume driver
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
To configure the use of an ISE product with OpenStack Block Storage, modify
|
||||
your ``cinder.conf`` file as follows. Be careful to use the one that matches
|
||||
the storage protocol in use:
|
||||
|
||||
Fibre Channel
|
||||
-------------
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.xio.XIOISEFCDriver
|
||||
san_ip = 1.2.3.4 # the address of your ISE REST management interface
|
||||
san_login = administrator # your ISE management admin login
|
||||
san_password = password # your ISE management admin password
|
||||
|
||||
iSCSI
|
||||
-----
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.xio.XIOISEISCSIDriver
|
||||
san_ip = 1.2.3.4 # the address of your ISE REST management interface
|
||||
san_login = administrator # your ISE management admin login
|
||||
san_password = password # your ISE management admin password
|
||||
iscsi_ip_address = ionet_ip # ip address to one ISE port connected to the IONET
|
||||
|
||||
Optional configuration parameters
|
||||
---------------------------------
|
||||
|
||||
.. include:: ../../tables/cinder-xio.rst
|
||||
|
||||
Multipath
|
||||
---------
|
||||
|
||||
The X-IO ISE supports a multipath configuration, but multipath must be enabled
|
||||
on the compute node (see *ISE Storage Blade Best Practices Guide*).
|
||||
For more information, see `X-IO Document Library
|
||||
<http://xiostorage.com/document_library/>`__.
|
||||
|
||||
Volume types
|
||||
------------
|
||||
|
||||
OpenStack Block Storage uses volume types to help the administrator specify
|
||||
attributes for volumes. These attributes are called extra-specs. The X-IO
|
||||
volume driver supports the following extra-specs.
|
||||
|
||||
.. list-table:: Extra specs
|
||||
:header-rows: 1
|
||||
|
||||
* - Extra-specs name
|
||||
- Valid values
|
||||
- Description
|
||||
* - ``Feature:Raid``
|
||||
- 1, 5
|
||||
- RAID level for volume.
|
||||
* - ``Feature:Pool``
|
||||
- 1 - n (n being number of pools on ISE)
|
||||
- Pool to create volume in.
|
||||
* - ``Affinity:Type``
|
||||
- cadp, flash, hdd
|
||||
- Volume media affinity type.
|
||||
* - ``Alloc:Type``
|
||||
- 0 (thick), 1 (thin)
|
||||
- Allocation type for volume. Thick or thin.
|
||||
* - ``QoS:minIOPS``
|
||||
- n (value less than maxIOPS)
|
||||
- Minimum IOPS setting for volume.
|
||||
* - ``QoS:maxIOPS``
|
||||
- n (value bigger than minIOPS)
|
||||
- Maximum IOPS setting for volume.
|
||||
* - ``QoS:burstIOPS``
|
||||
- n (value bigger than minIOPS)
|
||||
- Burst IOPS setting for volume.
|
||||
|
||||
Examples
|
||||
--------
|
||||
|
||||
Create a volume type called xio1-flash for volumes that should reside on SSD
storage:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type create xio1-flash
|
||||
$ openstack volume type set --property Affinity:Type=flash xio1-flash
|
||||
|
||||
Create a volume type called xio1 and set QoS min and max:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type create xio1
|
||||
$ openstack volume type set --property QoS:minIOPS=20 xio1
|
||||
$ openstack volume type set --property QoS:maxIOPS=5000 xio1
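The extra specs in the table above can also be combined. As a further
illustration, the following sketch creates a volume type for thin provisioned
volumes placed in a specific ISE pool; the type name ``xio1-thin`` and the
pool number are only examples:

.. code-block:: console

   $ openstack volume type create xio1-thin
   $ openstack volume type set --property Alloc:Type=1 xio1-thin
   $ openstack volume type set --property Feature:Pool=1 xio1-thin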
|
@ -0,0 +1,80 @@
|
||||
=================================
|
||||
Zadara Storage VPSA volume driver
|
||||
=================================
|
||||
|
||||
Zadara Storage Virtual Private Storage Array (VPSA) is the first
software-defined, Enterprise Storage-as-a-Service. It is an elastic and
private block and file storage system which provides enterprise-grade data
protection and data management storage services.
|
||||
|
||||
The ``ZadaraVPSAISCSIDriver`` volume driver allows the Zadara Storage VPSA
to be used as a volume back end in OpenStack deployments.
|
||||
|
||||
System requirements
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
To use the Zadara Storage VPSA volume driver, you require the following:
|
||||
|
||||
- Zadara Storage VPSA version 15.07 and above
|
||||
|
||||
- iSCSI or iSER host interfaces
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
- Create, delete, attach, and detach volumes
|
||||
- Create, list, and delete volume snapshots
|
||||
- Create a volume from a snapshot
|
||||
- Copy an image to a volume
|
||||
- Copy a volume to an image
|
||||
- Clone a volume
|
||||
- Extend a volume
|
||||
- Migrate a volume with back end assistance
|
||||
|
||||
Configuration
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
#. Create one or more VPSA pools, or make sure you have existing pools that will
be used for volume services. Each VPSA pool is identified by its ID
(pool-xxxxxxxx). For further details, see the
`VPSA's user guide <http://tinyurl.com/hxo3tt5>`_.
|
||||
|
||||
#. Adjust the ``cinder.conf`` configuration file to define the volume driver
name along with a storage back end entry for each VPSA pool that will be
managed by the Block Storage service.
Each back end entry requires a unique section name, surrounded by square
brackets, followed by options in ``key=value`` format.
|
||||
|
||||
.. note::
|
||||
|
||||
Restart cinder-volume service after modifying ``cinder.conf``.
|
||||
|
||||
|
||||
Sample minimum back end configuration
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
enabled_backends = vpsa
|
||||
|
||||
[vpsa]
|
||||
zadara_vpsa_host = 172.31.250.10
|
||||
zadara_vpsa_port = 80
|
||||
zadara_user = vpsauser
|
||||
zadara_password = mysecretpassword
|
||||
zadara_use_iser = false
|
||||
zadara_vpsa_poolname = pool-00000001
|
||||
volume_driver = cinder.volume.drivers.zadara.ZadaraVPSAISCSIDriver
|
||||
volume_backend_name = vpsa
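Assuming the ``vpsa`` back end defined above is enabled, a volume type can be
mapped to it with the standard client commands; the type name ``zadara`` used
here is only an example:

.. code-block:: console

   $ openstack volume type create zadara
   $ openstack volume type set --property volume_backend_name=vpsa zadara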
|
||||
|
||||
Driver-specific options
|
||||
~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This section contains the configuration options that are specific
|
||||
to the Zadara Storage VPSA driver.
|
||||
|
||||
.. include:: ../../tables/cinder-zadara.rst
|
||||
|
||||
.. note::
|
||||
|
||||
By design, all volumes created within the VPSA are thin provisioned.
|
@ -0,0 +1,265 @@
|
||||
=========================================
|
||||
Oracle ZFS Storage Appliance iSCSI driver
|
||||
=========================================
|
||||
|
||||
Oracle ZFS Storage Appliances (ZFSSAs) provide advanced software to
|
||||
protect data, speed tuning and troubleshooting, and deliver high
|
||||
performance and high availability. Through the Oracle ZFSSA iSCSI
|
||||
Driver, OpenStack Block Storage can use an Oracle ZFSSA as a block
|
||||
storage resource. The driver enables you to create iSCSI volumes that an
|
||||
OpenStack Block Storage server can allocate to any virtual machine
|
||||
running on a compute host.
|
||||
|
||||
Requirements
|
||||
~~~~~~~~~~~~
|
||||
|
||||
The Oracle ZFSSA iSCSI Driver, version ``1.0.0`` and later, supports
|
||||
ZFSSA software release ``2013.1.2.0`` and later.
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
- Create, delete, attach, detach, manage, and unmanage volumes.
|
||||
- Create and delete snapshots.
|
||||
- Create volume from snapshot.
|
||||
- Extend a volume.
|
||||
- Attach and detach volumes.
|
||||
- Get volume stats.
|
||||
- Clone volumes.
|
||||
- Migrate a volume.
|
||||
- Local cache of a bootable volume.
|
||||
|
||||
Configuration
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
#. Enable RESTful service on the ZFSSA Storage Appliance.
|
||||
|
||||
#. Create a new user on the appliance with the following authorizations:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
scope=stmf - allow_configure=true
|
||||
scope=nas - allow_clone=true, allow_createProject=true, allow_createShare=true, allow_changeSpaceProps=true, allow_changeGeneralProps=true, allow_destroy=true, allow_rollback=true, allow_takeSnap=true
|
||||
scope=schema - allow_modify=true
|
||||
|
||||
You can create a role with authorizations as follows:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
zfssa:> configuration roles
|
||||
zfssa:configuration roles> role OpenStackRole
|
||||
zfssa:configuration roles OpenStackRole (uncommitted)> set description="OpenStack Cinder Driver"
|
||||
zfssa:configuration roles OpenStackRole (uncommitted)> commit
|
||||
zfssa:configuration roles> select OpenStackRole
|
||||
zfssa:configuration roles OpenStackRole> authorizations create
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set scope=stmf
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_configure=true
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> commit
|
||||
zfssa:configuration roles OpenStackRole> authorizations create
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set scope=nas
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_clone=true
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_createProject=true
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_createShare=true
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_changeSpaceProps=true
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_changeGeneralProps=true
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_destroy=true
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_rollback=true
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_takeSnap=true
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> commit
|
||||
|
||||
You can create a user with a specific role as follows:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
zfssa:> configuration users
|
||||
zfssa:configuration users> user cinder
|
||||
zfssa:configuration users cinder (uncommitted)> set fullname="OpenStack Cinder Driver"
|
||||
zfssa:configuration users cinder (uncommitted)> set initial_password=12345
|
||||
zfssa:configuration users cinder (uncommitted)> commit
|
||||
zfssa:configuration users> select cinder set roles=OpenStackRole
|
||||
|
||||
.. note::
|
||||
|
||||
You can also run this `workflow
|
||||
<https://openstackci.oracle.com/openstack_docs/zfssa_cinder_workflow.akwf>`__
|
||||
to automate the above tasks.
|
||||
Refer to `Oracle documentation
|
||||
<https://docs.oracle.com/cd/E37831_01/html/E52872/godgw.html>`__
|
||||
on how to download, view, and execute a workflow.
|
||||
|
||||
#. Ensure that the ZFSSA iSCSI service is online. If the ZFSSA iSCSI service is
|
||||
not online, enable the service by using the BUI, CLI or REST API in the
|
||||
appliance.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
zfssa:> configuration services iscsi
|
||||
zfssa:configuration services iscsi> enable
|
||||
zfssa:configuration services iscsi> show
|
||||
Properties:
|
||||
<status>= online
|
||||
...
|
||||
|
||||
Define the following required properties in the ``cinder.conf`` file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.zfssa.zfssaiscsi.ZFSSAISCSIDriver
|
||||
san_ip = myhost
|
||||
san_login = username
|
||||
san_password = password
|
||||
zfssa_pool = mypool
|
||||
zfssa_project = myproject
|
||||
zfssa_initiator_group = default
|
||||
zfssa_target_portal = w.x.y.z:3260
|
||||
zfssa_target_interfaces = e1000g0
|
||||
|
||||
Optionally, you can define additional properties.
|
||||
|
||||
Target interfaces can be seen as follows in the CLI:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
zfssa:> configuration net interfaces
|
||||
zfssa:configuration net interfaces> show
|
||||
Interfaces:
|
||||
INTERFACE STATE CLASS LINKS ADDRS LABEL
|
||||
e1000g0 up ip e1000g0 1.10.20.30/24 Untitled Interface
|
||||
...
|
||||
|
||||
.. note::
|
||||
|
||||
Do not use management interfaces for ``zfssa_target_interfaces``.
|
||||
|
||||
#. Configure the cluster:
|
||||
|
||||
If a cluster is used as the cinder storage resource, the following
|
||||
verifications are required on your Oracle ZFS Storage Appliance:
|
||||
|
||||
- Verify that both the pool and the network interface are of type
|
||||
singleton and are not locked to the current controller. This
|
||||
approach ensures that the pool and the interface used for data
|
||||
always belong to the active controller, regardless of the current
|
||||
state of the cluster.
|
||||
|
||||
- Verify that the management IP, data IP and storage pool belong to
|
||||
the same head.
|
||||
|
||||
.. note::
|
||||
|
||||
Most configuration settings, including service properties, users, roles,
|
||||
and iSCSI initiator definitions are replicated on both heads
|
||||
automatically. If the driver modifies any of these settings, they will be
|
||||
modified automatically on both heads.
|
||||
|
||||
.. note::
|
||||
|
||||
A short service interruption occurs during failback or takeover,
|
||||
but once the process is complete, the ``cinder-volume`` service should be able
|
||||
to access the pool through the data IP.
|
||||
|
||||
ZFSSA assisted volume migration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The ZFSSA iSCSI driver supports storage assisted volume migration
starting in the Liberty release. This feature uses the remote replication
feature on the ZFSSA. Volumes can be migrated between two back ends
configured on the same ZFSSA, or between two separate ZFSSAs.
|
||||
|
||||
The following conditions must be met in order to use ZFSSA assisted
|
||||
volume migration:
|
||||
|
||||
- Both the source and target backends are configured to ZFSSAs.
|
||||
|
||||
- Remote replication service on the source and target appliance is enabled.
|
||||
|
||||
- The ZFSSA to which the target backend is configured should be configured as a
|
||||
target in the remote replication service of the ZFSSA configured to the
|
||||
source backend. The remote replication target needs to be configured even
|
||||
when the source and the destination for volume migration are the same ZFSSA.
|
||||
Define ``zfssa_replication_ip`` in the ``cinder.conf`` file of the source
|
||||
backend as the IP address used to register the target ZFSSA in the remote
|
||||
replication service of the source ZFSSA.
|
||||
|
||||
- The name of the iSCSI target group(``zfssa_target_group``) on the source and
|
||||
the destination ZFSSA is the same.
|
||||
|
||||
- The volume is not attached and is in available state.
|
||||
|
||||
If any of the above conditions are not met, the driver will proceed with
|
||||
generic volume migration.
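As a minimal sketch, the source back end section in ``cinder.conf`` only needs
the additional ``zfssa_replication_ip`` entry on top of its usual ZFSSA iSCSI
driver options; the section name and address below are illustrative:

.. code-block:: ini

   [zfssa-iscsi-src]
   # ... usual ZFSSA iSCSI driver options for this back end ...
   # IP address used to register the target ZFSSA in the remote
   # replication service of the source ZFSSA
   zfssa_replication_ip = 10.10.20.30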
|
||||
|
||||
The ZFSSA user on the source and target appliances will need to have
|
||||
additional role authorizations for assisted volume migration to work. In
|
||||
scope nas, set ``allow_rrtarget`` and ``allow_rrsource`` to ``true``.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set scope=nas
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_rrtarget=true
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_rrsource=true
|
||||
|
||||
ZFSSA local cache
|
||||
~~~~~~~~~~~~~~~~~
|
||||
|
||||
The local cache feature enables ZFSSA drivers to serve bootable volumes
significantly faster. With the feature, the first bootable volume
|
||||
created from an image is cached, so that subsequent volumes can be created
|
||||
directly from the cache, instead of having image data transferred over the
|
||||
network multiple times.
|
||||
|
||||
The following conditions must be met in order to use ZFSSA local cache feature:
|
||||
|
||||
- A storage pool needs to be configured.
|
||||
|
||||
- REST and iSCSI services need to be turned on.
|
||||
|
||||
- On an OpenStack controller, ``cinder.conf`` needs to contain necessary
|
||||
properties used to configure and set up the ZFSSA iSCSI driver, including the
|
||||
following new properties (see the sketch after this list):
|
||||
|
||||
- ``zfssa_enable_local_cache``: (True/False) To enable/disable the feature.
|
||||
|
||||
- ``zfssa_cache_project``: The ZFSSA project name where cache volumes are
|
||||
stored.
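A minimal sketch of the corresponding ``cinder.conf`` entries is shown below;
the project name is illustrative:

.. code-block:: ini

   zfssa_enable_local_cache = True
   # ZFSSA project that holds the cache volumes
   zfssa_cache_project = os-cinder-cache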
|
||||
|
||||
Every cache volume has two additional properties stored as ZFSSA custom
schema. It is important that the schema is not altered outside of Block
Storage when the driver is in use:
|
||||
|
||||
- ``image_id``: stores the image id as in Image service.
|
||||
|
||||
- ``updated_at``: stores the most current timestamp when the image is updated
|
||||
in Image service.
|
||||
|
||||
Supported extra specs
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Extra specs provide the OpenStack storage admin the flexibility to create
|
||||
volumes with different characteristics from the ones specified in the
|
||||
``cinder.conf`` file. The admin will specify the volume properties as keys
|
||||
at volume type creation. When a user requests a volume of this volume type,
|
||||
the volume will be created with the properties specified as extra specs.
|
||||
|
||||
The following extra specs scoped keys are supported by the driver:
|
||||
|
||||
- ``zfssa:volblocksize``
|
||||
|
||||
- ``zfssa:sparse``
|
||||
|
||||
- ``zfssa:compression``
|
||||
|
||||
- ``zfssa:logbias``
|
||||
|
||||
Volume types can be created using the :command:`openstack volume type create`
|
||||
command.
|
||||
Extra spec keys can be added using :command:`openstack volume type set`
|
||||
command.
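For example, the following sketch creates a volume type whose volumes are
created with ZFS compression enabled; the type name and the ``lzjb`` value are
illustrative and should be adjusted to the settings supported by your
appliance:

.. code-block:: console

   $ openstack volume type create zfssa-compressed
   $ openstack volume type set --property zfssa:compression=lzjb zfssa-compressed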
|
||||
|
||||
Driver options
|
||||
~~~~~~~~~~~~~~
|
||||
|
||||
The Oracle ZFSSA iSCSI Driver supports these options:
|
||||
|
||||
.. include:: ../../tables/cinder-zfssa-iscsi.rst
|
@ -0,0 +1,297 @@
|
||||
=======================================
|
||||
Oracle ZFS Storage Appliance NFS driver
|
||||
=======================================
|
||||
|
||||
The Oracle ZFS Storage Appliance (ZFSSA) NFS driver enables the ZFSSA to
|
||||
be used seamlessly as a block storage resource. The driver enables you
|
||||
to create volumes on a ZFS share that is NFS mounted.
|
||||
|
||||
Requirements
|
||||
~~~~~~~~~~~~
|
||||
|
||||
Oracle ZFS Storage Appliance Software version ``2013.1.2.0`` or later.
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
- Create, delete, attach, detach, manage, and unmanage volumes.
|
||||
|
||||
- Create and delete snapshots.
|
||||
|
||||
- Create a volume from a snapshot.
|
||||
|
||||
- Extend a volume.
|
||||
|
||||
- Copy an image to a volume.
|
||||
|
||||
- Copy a volume to an image.
|
||||
|
||||
- Clone a volume.
|
||||
|
||||
- Volume migration.
|
||||
|
||||
- Local cache of a bootable volume
|
||||
|
||||
Appliance configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Appliance configuration using the command-line interface (CLI) is
|
||||
described below. To access the CLI, ensure SSH remote access is enabled,
|
||||
which is the default. You can also perform configuration using the
|
||||
browser user interface (BUI) or the RESTful API. Please refer to the
|
||||
`Oracle ZFS Storage Appliance
|
||||
documentation <http://www.oracle.com/technetwork/documentation/oracle-unified-ss-193371.html>`__
|
||||
for details on how to configure the Oracle ZFS Storage Appliance using
|
||||
the BUI, CLI, and RESTful API.
|
||||
|
||||
#. Log in to the Oracle ZFS Storage Appliance CLI and enable the REST
|
||||
service. REST service needs to stay online for this driver to function.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
zfssa:>configuration services rest enable
|
||||
|
||||
#. Create a new storage pool on the appliance if you do not want to use an
existing one. This storage pool is named ``mypool`` for the sake of this
documentation.
|
||||
|
||||
#. Create a new project and share in the storage pool (``mypool``) if you do
|
||||
not want to use existing ones. This driver will create a project and share
|
||||
by the names specified in the ``cinder.conf`` file, if a project and share
|
||||
by that name does not already exist in the storage pool (``mypool``).
|
||||
The project and share are named ``NFSProject`` and ``nfs_share`` in the
entries of the sample ``cinder.conf`` file below.
|
||||
|
||||
#. To perform driver operations, create a role with the following
|
||||
authorizations:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
scope=svc - allow_administer=true, allow_restart=true, allow_configure=true
|
||||
scope=nas - pool=pool_name, project=project_name, share=share_name, allow_clone=true, allow_createProject=true, allow_createShare=true, allow_changeSpaceProps=true, allow_changeGeneralProps=true, allow_destroy=true, allow_rollback=true, allow_takeSnap=true, allow_changeAccessProps=true, allow_changeProtocolProps=true
|
||||
|
||||
The following examples show how to create a role with authorizations.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
zfssa:> configuration roles
|
||||
zfssa:configuration roles> role OpenStackRole
|
||||
zfssa:configuration roles OpenStackRole (uncommitted)> set description="OpenStack NFS Cinder Driver"
|
||||
zfssa:configuration roles OpenStackRole (uncommitted)> commit
|
||||
zfssa:configuration roles> select OpenStackRole
|
||||
zfssa:configuration roles OpenStackRole> authorizations create
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set scope=svc
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_administer=true
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_restart=true
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_configure=true
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> commit
|
||||
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
zfssa:> configuration roles OpenStackRole authorizations> set scope=nas
|
||||
|
||||
The following properties need to be set when the scope of this role needs to
|
||||
be limited to a pool (``mypool``), a project (``NFSProject``) and a share
|
||||
(``nfs_share``) created in the steps above. This prevents the user
assigned to this role from modifying other pools, projects, and
shares.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set pool=mypool
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set project=NFSProject
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set share=nfs_share
|
||||
|
||||
#. The following properties need to be set only when a share and project have
not been created in the steps above and you want to allow the driver to
create them for you.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_createProject=true
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_createShare=true
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_clone=true
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_changeSpaceProps=true
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_destroy=true
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_rollback=true
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_takeSnap=true
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_changeAccessProps=true
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_changeProtocolProps=true
|
||||
zfssa:configuration roles OpenStackRole auth (uncommitted)> commit
|
||||
|
||||
#. Create a new user or modify an existing one and assign the new role to
|
||||
the user.
|
||||
|
||||
The following example shows how to create a new user and assign the new
|
||||
role to the user.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
zfssa:> configuration users
|
||||
zfssa:configuration users> user cinder
|
||||
zfssa:configuration users cinder (uncommitted)> set fullname="OpenStack Cinder Driver"
|
||||
zfssa:configuration users cinder (uncommitted)> set initial_password=12345
|
||||
zfssa:configuration users cinder (uncommitted)> commit
|
||||
zfssa:configuration users> select cinder set roles=OpenStackRole
|
||||
|
||||
#. Ensure that NFS and HTTP services on the appliance are online. Note the
|
||||
HTTPS port number for later entry in the cinder service configuration file
|
||||
(``cinder.conf``). This driver uses WebDAV over HTTPS to create snapshots
|
||||
and clones of volumes, and therefore needs to have the HTTP service online.
|
||||
|
||||
The following example illustrates enabling the services and showing their
|
||||
properties.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
zfssa:> configuration services nfs
|
||||
zfssa:configuration services nfs> enable
|
||||
zfssa:configuration services nfs> show
|
||||
Properties:
|
||||
<status>= online
|
||||
...
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
zfssa:configuration services http> enable
|
||||
zfssa:configuration services http> show
|
||||
Properties:
|
||||
<status>= online
|
||||
require_login = true
|
||||
protocols = http/https
|
||||
listen_port = 80
|
||||
https_port = 443
|
||||
|
||||
.. note::
|
||||
|
||||
You can also run this `workflow
|
||||
<https://openstackci.oracle.com/openstack_docs/zfssa_cinder_workflow.akwf>`__
|
||||
to automate the above tasks.
|
||||
Refer to `Oracle documentation
|
||||
<https://docs.oracle.com/cd/E37831_01/html/E52872/godgw.html>`__
|
||||
on how to download, view, and execute a workflow.
|
||||
|
||||
#. Create a network interface to be used exclusively for data. An existing
|
||||
network interface may also be used. The following example illustrates how to
|
||||
make a network interface for data traffic flow only.
|
||||
|
||||
.. note::
|
||||
|
||||
For better performance and reliability, it is recommended to configure a
|
||||
separate subnet exclusively for data traffic in your cloud environment.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
zfssa:> configuration net interfaces
|
||||
zfssa:configuration net interfaces> select igbx
|
||||
zfssa:configuration net interfaces igbx> set admin=false
|
||||
zfssa:configuration net interfaces igbx> commit
|
||||
|
||||
#. For clustered controller systems, the following verification is required in
|
||||
addition to the above steps. Skip this step if a standalone system is used.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
zfssa:> configuration cluster resources list
|
||||
|
||||
Verify that both the newly created pool and the network interface are of
|
||||
type ``singleton`` and are not locked to the current controller. This
|
||||
approach ensures that the pool and the interface used for data always belong
|
||||
to the active controller, regardless of the current state of the cluster.
|
||||
Verify that both the network interface used for management and data, and the
|
||||
storage pool belong to the same head.
|
||||
|
||||
.. note::
|
||||
|
||||
There will be a short service interruption during failback/takeover, but
|
||||
once the process is complete, the driver should be able to access the
|
||||
ZFSSA for data as well as for management.
|
||||
|
||||
Cinder service configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
#. Define the following required properties in the ``cinder.conf``
|
||||
configuration file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.zfssa.zfssanfs.ZFSSANFSDriver
|
||||
san_ip = myhost
|
||||
san_login = username
|
||||
san_password = password
|
||||
zfssa_data_ip = mydata
|
||||
zfssa_nfs_pool = mypool
|
||||
|
||||
.. note::
|
||||
|
||||
Management interface ``san_ip`` can be used instead of ``zfssa_data_ip``,
|
||||
but it is not recommended.
|
||||
|
||||
#. You can also define the following additional properties in the
|
||||
``cinder.conf`` configuration file:
|
||||
|
||||
.. code:: ini
|
||||
|
||||
zfssa_nfs_project = NFSProject
|
||||
zfssa_nfs_share = nfs_share
|
||||
zfssa_nfs_mount_options =
|
||||
zfssa_nfs_share_compression = off
|
||||
zfssa_nfs_share_logbias = latency
|
||||
zfssa_https_port = 443
|
||||
|
||||
.. note::
|
||||
|
||||
The driver does not use the file specified in the ``nfs_shares_config``
|
||||
option.
|
||||
|
||||
ZFSSA local cache
|
||||
~~~~~~~~~~~~~~~~~
|
||||
|
||||
The local cache feature enables ZFSSA drivers to serve bootable
volumes significantly faster. With the feature, the first
|
||||
bootable volume created from an image is cached, so that subsequent
|
||||
volumes can be created directly from the cache, instead of having image
|
||||
data transferred over the network multiple times.
|
||||
|
||||
The following conditions must be met in order to use ZFSSA local cache
|
||||
feature:
|
||||
|
||||
- A storage pool needs to be configured.
|
||||
|
||||
- REST and NFS services need to be turned on.
|
||||
|
||||
- On an OpenStack controller, ``cinder.conf`` needs to contain
|
||||
necessary properties used to configure and set up the ZFSSA NFS
|
||||
driver, including the following new properties (see the sketch after this list):
|
||||
|
||||
zfssa_enable_local_cache
|
||||
(True/False) To enable/disable the feature.
|
||||
|
||||
zfssa_cache_directory
|
||||
The directory name inside zfssa_nfs_share where cache volumes
|
||||
are stored.
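A minimal sketch of the corresponding ``cinder.conf`` entries is shown below;
the directory name is illustrative:

.. code-block:: ini

   zfssa_enable_local_cache = True
   # directory inside zfssa_nfs_share where cache volumes are stored
   zfssa_cache_directory = os-cinder-cache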
|
||||
|
||||
Every cache volume has two additional properties stored as WebDAV
|
||||
properties. It is important that they are not altered outside of Block
|
||||
Storage when the driver is in use:
|
||||
|
||||
image_id
|
||||
stores the image id as in Image service.
|
||||
|
||||
updated_at
|
||||
stores the most current timestamp when the image is
|
||||
updated in Image service.
|
||||
|
||||
Driver options
|
||||
~~~~~~~~~~~~~~
|
||||
|
||||
The Oracle ZFS Storage Appliance NFS driver supports these options:
|
||||
|
||||
.. include:: ../../tables/cinder-zfssa-nfs.rst
|
||||
|
||||
This driver shares additional NFS configuration options with the generic
|
||||
NFS driver. For a description of these, see :ref:`cinder-storage_nfs`.
|
@ -0,0 +1,158 @@
|
||||
==================
|
||||
ZTE volume drivers
|
||||
==================
|
||||
|
||||
The ZTE volume drivers allow ZTE KS3200 or KU5200 arrays
|
||||
to be used for Block Storage in OpenStack deployments.
|
||||
|
||||
System requirements
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
To use the ZTE drivers, the following prerequisites must be met:
|
||||
|
||||
- ZTE KS3200 or KU5200 array with:
|
||||
|
||||
- iSCSI or FC interfaces
|
||||
- 30B2 firmware or later
|
||||
|
||||
- Network connectivity between the OpenStack host and the array
|
||||
management interfaces
|
||||
|
||||
- HTTPS or HTTP must be enabled on the array
|
||||
|
||||
Supported operations
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
- Create, delete, attach, and detach volumes.
|
||||
- Create, list, and delete volume snapshots.
|
||||
- Create a volume from a snapshot.
|
||||
- Copy an image to a volume.
|
||||
- Copy a volume to an image.
|
||||
- Clone a volume.
|
||||
- Extend a volume.
|
||||
- Migrate a volume with back-end assistance.
|
||||
- Retype a volume.
|
||||
- Manage and unmanage a volume.
|
||||
|
||||
Configuring the array
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
#. Verify that the array can be managed using an HTTPS connection. HTTP can
|
||||
also be used if ``zte_api_protocol=http`` is placed into the
|
||||
appropriate sections of the ``cinder.conf`` file.
|
||||
|
||||
Confirm that virtual pools A and B are present if you plan to use
|
||||
virtual pools for OpenStack storage.
|
||||
|
||||
#. Edit the ``cinder.conf`` file to define a storage back-end entry for
|
||||
each storage pool on the array that will be managed by OpenStack. Each
|
||||
entry consists of a unique section name, surrounded by square brackets,
|
||||
followed by options specified in ``key=value`` format.
|
||||
|
||||
- The ``zte_backend_name`` value specifies the name of the storage
|
||||
pool on the array.
|
||||
|
||||
- The ``volume_backend_name`` option value can be a unique value, if
|
||||
you wish to be able to assign volumes to a specific storage pool on
|
||||
the array, or a name that is shared among multiple storage pools to
|
||||
let the volume scheduler choose where new volumes are allocated.
|
||||
|
||||
- The rest of the options will be repeated for each storage pool in a
|
||||
given array: the appropriate cinder driver name, IP address or
|
||||
host name of the array management interface; the username and password
|
||||
of an array user account with ``manage`` privileges; and the iSCSI IP
|
||||
addresses for the array if using the iSCSI transport protocol.
|
||||
|
||||
In the examples below, two back ends are defined, one for pool A and one
for pool B, with a common ``volume_backend_name``. With this setup, a
single volume type definition can be used to allocate volumes from both
pools.
|
||||
|
||||
**Example: iSCSI back-end entries**
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[pool-a]
|
||||
zte_backend_name = A
|
||||
volume_backend_name = zte-array
|
||||
volume_driver = cinder.volume.drivers.zte.zte_iscsi.ZTEISCSIDriver
|
||||
san_ip = 10.1.2.3
|
||||
san_login = manage
|
||||
san_password = !manage
|
||||
zte_iscsi_ips = 10.2.3.4,10.2.3.5
|
||||
|
||||
[pool-b]
|
||||
zte_backend_name = B
|
||||
volume_backend_name = zte-array
|
||||
volume_driver = cinder.volume.drivers.zte.zte_iscsi.ZTEISCSIDriver
|
||||
san_ip = 10.1.2.3
|
||||
san_login = manage
|
||||
san_password = !manage
|
||||
zte_iscsi_ips = 10.2.3.4,10.2.3.5
|
||||
|
||||
**Example: Fibre Channel back end entries**
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[pool-a]
|
||||
zte_backend_name = A
|
||||
volume_backend_name = zte-array
|
||||
volume_driver = cinder.volume.drivers.zte.zte_fc.ZTEFCDriver
|
||||
san_ip = 10.1.2.3
|
||||
san_login = manage
|
||||
san_password = !manage
|
||||
|
||||
[pool-b]
|
||||
zte_backend_name = B
|
||||
volume_backend_name = zte-array
|
||||
volume_driver = cinder.volume.drivers.zte.zte_fc.ZTEFCDriver
|
||||
san_ip = 10.1.2.3
|
||||
san_login = manage
|
||||
san_password = !manage
|
||||
|
||||
#. If HTTPS is not enabled in the array, include
|
||||
``zte_api_protocol = http`` in each of the back-end definitions.
|
||||
|
||||
#. If HTTPS is enabled, you can enable certificate verification with the
|
||||
option ``zte_verify_certificate=True``. You may also use the
|
||||
``zte_verify_certificate_path`` parameter to specify the path to a
|
||||
``CA_BUNDLE`` file containing CAs other than those in the default list.
|
||||
|
||||
#. Modify the ``[DEFAULT]`` section of the ``cinder.conf`` file to add an
|
||||
``enabled_backends`` parameter specifying the back-end entries you added,
|
||||
and a ``default_volume_type`` parameter specifying the name of a volume
|
||||
type that you will create in the next step.
|
||||
|
||||
**Example: [DEFAULT] section changes**
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
# ...
|
||||
enabled_backends = pool-a,pool-b
|
||||
default_volume_type = zte
|
||||
|
||||
#. Create a new volume type for each distinct ``volume_backend_name`` value
|
||||
that you added to the ``cinder.conf`` file. The example below
|
||||
assumes that the same ``volume_backend_name=zte-array``
|
||||
option was specified in all of the
|
||||
entries, and specifies that the volume type ``zte`` can be used to
|
||||
allocate volumes from any of them.
|
||||
|
||||
**Example: Creating a volume type**
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type create zte
|
||||
$ openstack volume type set --property volume_backend_name=zte-array zte
|
||||
|
||||
#. After modifying the ``cinder.conf`` file,
|
||||
restart the ``cinder-volume`` service.
|
||||
|
||||
Driver-specific options
|
||||
~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The following table contains the configuration options that are specific
|
||||
to the ZTE drivers.
|
||||
|
||||
.. include:: ../../tables/cinder-zte.rst
|
126
doc/source/config-reference/block-storage/fc-zoning.rst
Normal file
126
doc/source/config-reference/block-storage/fc-zoning.rst
Normal file
@ -0,0 +1,126 @@
|
||||
|
||||
.. _fc_zone_manager:
|
||||
|
||||
==========================
|
||||
Fibre Channel Zone Manager
|
||||
==========================
|
||||
|
||||
The Fibre Channel Zone Manager allows FC SAN Zone/Access control
|
||||
management in conjunction with Fibre Channel block storage. The
|
||||
configuration of Fibre Channel Zone Manager and various zone drivers are
|
||||
described in this section.
|
||||
|
||||
Configure Block Storage to use Fibre Channel Zone Manager
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
If Block Storage is configured to use a Fibre Channel volume driver that
|
||||
supports Zone Manager, update ``cinder.conf`` to add the following
|
||||
configuration options to enable Fibre Channel Zone Manager.
|
||||
|
||||
Make the following changes in the ``/etc/cinder/cinder.conf`` file.
|
||||
|
||||
.. include:: ../tables/cinder-zoning.rst
|
||||
|
||||
To use different Fibre Channel Zone Drivers, use the parameters
|
||||
described in this section.
|
||||
|
||||
.. note::
|
||||
|
||||
When multi backend configuration is used, provide the
|
||||
``zoning_mode`` configuration option as part of the volume driver
|
||||
configuration where ``volume_driver`` option is specified.
|
||||
|
||||
.. note::
|
||||
|
||||
Default value of ``zoning_mode`` is ``None`` and this needs to be
|
||||
changed to ``fabric`` to allow fabric zoning.
|
||||
|
||||
.. note::
|
||||
|
||||
``zoning_policy`` can be configured as ``initiator-target`` or
|
||||
``initiator``
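Taken together, the notes above translate into entries such as the following
sketch; the back-end section name is illustrative and the exact option
placement should be cross-checked against the tables above:

.. code-block:: ini

   [my-fc-backend]
   # back end that uses a Fibre Channel volume driver
   zoning_mode = fabric

   [fc-zone-manager]
   zoning_policy = initiator-target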
|
||||
|
||||
Brocade Fibre Channel Zone Driver
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Brocade Fibre Channel Zone Driver performs zoning operations
|
||||
through HTTP, HTTPS, or SSH.
|
||||
|
||||
Set the following options in the ``cinder.conf`` configuration file.
|
||||
|
||||
.. include:: ../tables/cinder-zoning_manager_brcd.rst
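For reference, a minimal ``[fc-zone-manager]`` section for the Brocade driver
might look like the following sketch; the class paths and fabric name are
shown for illustration and should be cross-checked against the table above:

.. code-block:: ini

   [fc-zone-manager]
   zone_driver = cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver
   fc_san_lookup_service = cinder.zonemanager.drivers.brocade.brcd_fc_san_lookup_service.BrcdFCSanLookupService
   fc_fabric_names = BRCD_FABRIC_EXAMPLE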
|
||||
|
||||
Configure SAN fabric parameters in the form of fabric groups as
|
||||
described in the example below:
|
||||
|
||||
.. include:: ../tables/cinder-zoning_fabric_brcd.rst
|
||||
|
||||
.. note::
|
||||
|
||||
Define a fabric group for each fabric using the fabric names used in
|
||||
``fc_fabric_names`` configuration option as group name.
|
||||
|
||||
.. note::
|
||||
|
||||
To define a fabric group for a switch which has Virtual Fabrics
|
||||
enabled, include the ``fc_virtual_fabric_id`` configuration option
|
||||
and ``fc_southbound_protocol`` configuration option set to ``HTTP``
|
||||
or ``HTTPS`` in the fabric group. Zoning on VF enabled fabric using
|
||||
``SSH`` southbound protocol is not supported.
|
||||
|
||||
System requirements
|
||||
-------------------
|
||||
|
||||
Brocade Fibre Channel Zone Driver requires firmware version FOS v6.4 or
|
||||
higher.
|
||||
|
||||
As a best practice for zone management, use a user account with
|
||||
``zoneadmin`` role. Users with ``admin`` role (including the default
|
||||
``admin`` user account) are limited to a maximum of two concurrent SSH
|
||||
sessions.
|
||||
|
||||
For information about how to manage Brocade Fibre Channel switches, see
|
||||
the Brocade Fabric OS user documentation.
|
||||
|
||||
Cisco Fibre Channel Zone Driver
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Cisco Fibre Channel Zone Driver automates the zoning operations through
|
||||
SSH. Configure Cisco Zone Driver, Cisco Southbound connector, FC SAN
|
||||
lookup service and Fabric name.
|
||||
|
||||
Set the following options in the ``cinder.conf`` configuration file.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[fc-zone-manager]
|
||||
zone_driver = cinder.zonemanager.drivers.cisco.cisco_fc_zone_driver.CiscoFCZoneDriver
|
||||
fc_san_lookup_service = cinder.zonemanager.drivers.cisco.cisco_fc_san_lookup_service.CiscoFCSanLookupService
|
||||
fc_fabric_names = CISCO_FABRIC_EXAMPLE
|
||||
cisco_sb_connector = cinder.zonemanager.drivers.cisco.cisco_fc_zone_client_cli.CiscoFCZoneClientCLI
|
||||
|
||||
.. include:: ../tables/cinder-zoning_manager_cisco.rst
|
||||
|
||||
Configure SAN fabric parameters in the form of fabric groups as
|
||||
described in the example below:
|
||||
|
||||
.. include:: ../tables/cinder-zoning_fabric_cisco.rst
|
||||
|
||||
.. note::
|
||||
|
||||
Define a fabric group for each fabric using the fabric names used in
|
||||
``fc_fabric_names`` configuration option as group name.
|
||||
|
||||
The Cisco Fibre Channel Zone Driver supports basic and enhanced
zoning modes. The zoning VSAN must exist with an active zone set name
that is the same as the ``fc_fabric_names`` option.
|
||||
|
||||
System requirements
|
||||
-------------------
|
||||
|
||||
Cisco MDS 9000 Family Switches.
|
||||
|
||||
Cisco MDS NX-OS Release 6.2(9) or later.
|
||||
|
||||
For information about how to manage Cisco Fibre Channel switches, see
|
||||
the Cisco MDS 9000 user documentation.
|
28
doc/source/config-reference/block-storage/logs.rst
Normal file
28
doc/source/config-reference/block-storage/logs.rst
Normal file
@ -0,0 +1,28 @@
|
||||
===============================
|
||||
Log files used by Block Storage
|
||||
===============================
|
||||
|
||||
The corresponding log file of each Block Storage service is stored in
|
||||
the ``/var/log/cinder/`` directory of the host on which each service
|
||||
runs.
|
||||
|
||||
.. list-table:: **Log files used by Block Storage services**
|
||||
:header-rows: 1
|
||||
:widths: 10 20 10
|
||||
|
||||
* - Log file
|
||||
- Service/interface (for CentOS, Fedora, openSUSE, Red Hat Enterprise Linux, and SUSE Linux Enterprise)
|
||||
- Service/interface (for Ubuntu and Debian)
|
||||
* - api.log
|
||||
- openstack-cinder-api
|
||||
- cinder-api
|
||||
* - cinder-manage.log
|
||||
- cinder-manage
|
||||
- cinder-manage
|
||||
* - scheduler.log
|
||||
- openstack-cinder-scheduler
|
||||
- cinder-scheduler
|
||||
* - volume.log
|
||||
- openstack-cinder-volume
|
||||
- cinder-volume
|
||||
|
165
doc/source/config-reference/block-storage/nested-quota.rst
Normal file
165
doc/source/config-reference/block-storage/nested-quota.rst
Normal file
@ -0,0 +1,165 @@
|
||||
=============
|
||||
Nested quotas
|
||||
=============
|
||||
|
||||
Nested quota is a change in how OpenStack services (such as Block Storage and
|
||||
Compute) handle their quota resources by being hierarchy-aware. The main
|
||||
reason for this change is to fully support the hierarchical multi-tenancy
concept, which was introduced in keystone in the Kilo release.
|
||||
|
||||
Once you have a project hierarchy created in keystone, nested quotas let you
|
||||
define how much of a project's quota you want to give to its subprojects. In
|
||||
that way, hierarchical projects can have hierarchical quotas (also known as
|
||||
nested quotas).
|
||||
|
||||
Projects and subprojects have similar behaviors, but they differ from each
|
||||
other when it comes to default quota values. The default quota value for
|
||||
resources in a subproject is 0, so that when a subproject is created it will
|
||||
not consume all of its parent's quota.
|
||||
|
||||
In order to keep track of how much of each quota was allocated to a
|
||||
subproject, a column ``allocated`` was added to the quotas table. This column
|
||||
is updated after every delete and update quota operation.
|
||||
|
||||
This example shows you how to use nested quotas.
|
||||
|
||||
.. note::
|
||||
|
||||
Assume that you have created a project hierarchy in keystone, such as
|
||||
follows:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
+-----------+
|
||||
| |
|
||||
| A |
|
||||
| / \ |
|
||||
| B C |
|
||||
| / |
|
||||
| D |
|
||||
+-----------+
|
||||
|
||||
Getting default quotas
|
||||
~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
#. Get the quota for root projects.
|
||||
|
||||
Use the :command:`openstack quota show` command and specify:
|
||||
|
||||
- The ``PROJECT`` of the relevant project. In this case, the name of
|
||||
project A.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack quota show PROJECT
|
||||
+----------------------+-------+
|
||||
| Field | Value |
|
||||
+----------------------+-------+
|
||||
| ... | ... |
|
||||
| backup_gigabytes | 1000 |
|
||||
| backups | 10 |
|
||||
| gigabytes | 1000 |
|
||||
| per_volume_gigabytes | -1 |
|
||||
| snapshots | 10 |
|
||||
| volumes | 10 |
|
||||
+----------------------+-------+
|
||||
|
||||
.. note::
|
||||
|
||||
This command returns the default values for resources.
|
||||
This is because the quotas for this project were not explicitly set.
|
||||
|
||||
#. Get the quota for subprojects.
|
||||
|
||||
In this case, use the same :command:`openstack quota show` command and
|
||||
specify:
|
||||
|
||||
- The ``PROJECT`` of the relevant project. In this case the name of
|
||||
project B, which is a child of A.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack quota show PROJECT
|
||||
+----------------------+-------+
|
||||
| Field | Value |
|
||||
+----------------------+-------+
|
||||
| ... | ... |
|
||||
| backup_gigabytes | 0 |
|
||||
| backups | 0 |
|
||||
| gigabytes | 0 |
|
||||
| per_volume_gigabytes | 0 |
|
||||
| snapshots | 0 |
|
||||
| volumes | 0 |
|
||||
+----------------------+-------+
|
||||
|
||||
.. note::
|
||||
|
||||
In this case, 0 was the value returned as the quota for all the
|
||||
resources. This is because project B is a subproject of A, thus,
|
||||
the default quota value is 0, so that it will not consume all the
|
||||
quota of its parent project.
|
||||
|
||||
Setting the quotas for subprojects
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Now that the projects were created, assume that the admin of project B wants
|
||||
to use it. First of all, you need to set the quota limit of the project,
|
||||
because as a subproject it does not have quotas allocated by default.
|
||||
|
||||
In this example, when all of the parent project's quota is allocated to its
subprojects, the user will not be able to create more resources in the parent
project.
|
||||
|
||||
#. Update the quota of B.
|
||||
|
||||
Use the :command:`openstack quota set` command and specify:
|
||||
|
||||
- The ``PROJECT`` of the relevant project.
|
||||
In this case the name of project B.
|
||||
|
||||
- The ``--volumes`` option, followed by the number to which you wish to
|
||||
increase the volumes quota.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack quota set --volumes 10 PROJECT
|
||||
+----------------------+-------+
|
||||
| Property | Value |
|
||||
+----------------------+-------+
|
||||
| ... | ... |
|
||||
| backup_gigabytes | 0 |
|
||||
| backups | 0 |
|
||||
| gigabytes | 0 |
|
||||
| per_volume_gigabytes | 0 |
|
||||
| snapshots | 0 |
|
||||
| volumes | 10 |
|
||||
+----------------------+-------+
|
||||
|
||||
.. note::
|
||||
|
||||
The volumes resource quota is updated.
|
||||
|
||||
#. Try to create a volume in project A.
|
||||
|
||||
Use the :command:`openstack volume create` command and specify:
|
||||
|
||||
- The ``SIZE`` of the volume that will be created;
|
||||
|
||||
- The ``NAME`` of the volume.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume create --size SIZE NAME
|
||||
VolumeLimitExceeded: Maximum number of volumes allowed (10) exceeded for quota 'volumes'. (HTTP 413) (Request-ID: req-f6f7cc89-998e-4a82-803d-c73c8ee2016c)
|
||||
|
||||
.. note::
|
||||
|
||||
As the entirety of project A's volumes quota has been assigned to
|
||||
project B, it is treated as if all of the quota has been used. This
|
||||
is true even when project B has not created any volumes.
|
||||
|
||||
See `cinder nested quota spec
|
||||
<https://specs.openstack.org/openstack/cinder-specs/specs/liberty/cinder-nested-quota-driver.html>`_
|
||||
and `hierarchical multi-tenancy spec
|
||||
<https://blueprints.launchpad.net/keystone/+spec/hierarchical-multitenancy>`_
|
||||
for details.
|
@ -0,0 +1,10 @@
|
||||
=============
|
||||
api-paste.ini
|
||||
=============
|
||||
|
||||
Use the ``api-paste.ini`` file to configure the Block Storage API
|
||||
service.
|
||||
|
||||
.. remote-code-block:: none
|
||||
|
||||
https://git.openstack.org/cgit/openstack/cinder/plain/etc/cinder/api-paste.ini?h=stable/ocata
|
@ -0,0 +1,15 @@
|
||||
===========
|
||||
cinder.conf
|
||||
===========
|
||||
|
||||
The ``cinder.conf`` file is installed in ``/etc/cinder`` by default.
|
||||
When you manually install the Block Storage service, the options in the
|
||||
``cinder.conf`` file are set to default values.
|
||||
|
||||
The ``cinder.conf`` file contains most of the options needed to configure
the Block Storage service. You can generate the latest configuration file
by using the configuration generator provided with the Block Storage
source code.
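For example, assuming the standard ``genconfig`` tox environment exists in the
Cinder source tree you are working from, the sample can be regenerated with:

.. code-block:: console

   $ tox -e genconfig

Here is a sample configuration file: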
|
||||
|
||||
.. literalinclude:: ../../samples/cinder.conf.sample
|
||||
:language: ini
|
15
doc/source/config-reference/block-storage/samples/index.rst
Normal file
15
doc/source/config-reference/block-storage/samples/index.rst
Normal file
@ -0,0 +1,15 @@
|
||||
.. _block-storage-sample-configuration-file:
|
||||
|
||||
================================================
|
||||
Block Storage service sample configuration files
|
||||
================================================
|
||||
|
||||
All the files in this section can be found in ``/etc/cinder``.
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 2
|
||||
|
||||
cinder.conf.rst
|
||||
api-paste.ini.rst
|
||||
policy.json.rst
|
||||
rootwrap.conf.rst
|
@ -0,0 +1,10 @@
|
||||
===========
|
||||
policy.json
|
||||
===========
|
||||
|
||||
The ``policy.json`` file defines additional access controls that apply
|
||||
to the Block Storage service.
|
||||
|
||||
.. remote-code-block:: none
|
||||
|
||||
https://git.openstack.org/cgit/openstack/cinder/plain/etc/cinder/policy.json?h=stable/ocata
|
@ -0,0 +1,11 @@
|
||||
=============
|
||||
rootwrap.conf
|
||||
=============
|
||||
|
||||
The ``rootwrap.conf`` file defines configuration values used by the
|
||||
``rootwrap`` script when the Block Storage service must escalate its
|
||||
privileges to those of the root user.
|
||||
|
||||
.. remote-code-block:: ini
|
||||
|
||||
https://git.openstack.org/cgit/openstack/cinder/plain/etc/cinder/rootwrap.conf?h=stable/ocata
|
11
doc/source/config-reference/block-storage/schedulers.rst
Normal file
11
doc/source/config-reference/block-storage/schedulers.rst
Normal file
@ -0,0 +1,11 @@
|
||||
========================
|
||||
Block Storage schedulers
|
||||
========================
|
||||
|
||||
The Block Storage service uses the ``cinder-scheduler`` service
to determine how to dispatch block storage requests.
|
||||
|
||||
For more information, see `Cinder Scheduler Filters
|
||||
<https://docs.openstack.org/developer/cinder/scheduler-filters.html>`_
|
||||
and `Cinder Scheduler Weights
|
||||
<https://docs.openstack.org/developer/cinder/scheduler-weights.html>`_.
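As a sketch, the filters and weighers used by ``cinder-scheduler`` can be
tuned in ``cinder.conf``; the values below mirror the common defaults and are
shown only for illustration:

.. code-block:: ini

   [DEFAULT]
   scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
   scheduler_default_weighers = CapacityWeigher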
|
78
doc/source/config-reference/block-storage/volume-drivers.rst
Normal file
78
doc/source/config-reference/block-storage/volume-drivers.rst
Normal file
@ -0,0 +1,78 @@
|
||||
==============
|
||||
Volume drivers
|
||||
==============
|
||||
|
||||
.. sort by the drivers by open source software
|
||||
.. and the drivers for proprietary components
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
drivers/ceph-rbd-volume-driver.rst
|
||||
drivers/lvm-volume-driver.rst
|
||||
drivers/nfs-volume-driver.rst
|
||||
drivers/sheepdog-driver.rst
|
||||
drivers/smbfs-volume-driver.rst
|
||||
drivers/blockbridge-eps-driver.rst
|
||||
drivers/cloudbyte-driver.rst
|
||||
drivers/coho-data-driver.rst
|
||||
drivers/coprhd-driver.rst
|
||||
drivers/datera-volume-driver.rst
|
||||
drivers/dell-emc-scaleio-driver.rst
|
||||
drivers/dell-emc-unity-driver.rst
|
||||
drivers/dell-equallogic-driver.rst
|
||||
drivers/dell-storagecenter-driver.rst
|
||||
drivers/dothill-driver.rst
|
||||
drivers/emc-vmax-driver.rst
|
||||
drivers/emc-vnx-driver.rst
|
||||
drivers/emc-xtremio-driver.rst
|
||||
drivers/falconstor-fss-driver.rst
|
||||
drivers/fujitsu-eternus-dx-driver.rst
|
||||
drivers/hds-hnas-driver.rst
|
||||
drivers/hitachi-storage-volume-driver.rst
|
||||
drivers/hpe-3par-driver.rst
|
||||
drivers/hpe-lefthand-driver.rst
|
||||
drivers/hp-msa-driver.rst
|
||||
drivers/huawei-storage-driver.rst
|
||||
drivers/ibm-gpfs-volume-driver.rst
|
||||
drivers/ibm-storwize-svc-driver.rst
|
||||
drivers/ibm-storage-volume-driver.rst
|
||||
drivers/ibm-flashsystem-volume-driver.rst
|
||||
drivers/infinidat-volume-driver.rst
|
||||
drivers/infortrend-volume-driver.rst
|
||||
drivers/itri-disco-driver.rst
|
||||
drivers/kaminario-driver.rst
|
||||
drivers/lenovo-driver.rst
|
||||
drivers/nec-storage-m-series-driver.rst
|
||||
drivers/netapp-volume-driver.rst
|
||||
drivers/nimble-volume-driver.rst
|
||||
drivers/nexentastor4-driver.rst
|
||||
drivers/nexentastor5-driver.rst
|
||||
drivers/nexentaedge-driver.rst
|
||||
drivers/prophetstor-dpl-driver.rst
|
||||
drivers/pure-storage-driver.rst
|
||||
drivers/quobyte-driver.rst
|
||||
drivers/scality-sofs-driver.rst
|
||||
drivers/solidfire-volume-driver.rst
|
||||
drivers/synology-dsm-driver.rst
|
||||
drivers/tintri-volume-driver.rst
|
||||
drivers/violin-v7000-driver.rst
|
||||
drivers/vzstorage-driver.rst
|
||||
drivers/vmware-vmdk-driver.rst
|
||||
drivers/windows-iscsi-volume-driver.rst
|
||||
drivers/xio-volume-driver.rst
|
||||
drivers/zadara-volume-driver.rst
|
||||
drivers/zfssa-iscsi-driver.rst
|
||||
drivers/zfssa-nfs-driver.rst
|
||||
drivers/zte-storage-driver.rst
|
||||
|
||||
To use different volume drivers for the cinder-volume service, use the
|
||||
parameters described in these sections.
|
||||
|
||||
The volume drivers are included in the `Block Storage repository
|
||||
<https://git.openstack.org/cgit/openstack/cinder/>`_. To set a volume
|
||||
driver, use the ``volume_driver`` flag. The default is:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
|
213
doc/source/config-reference/block-storage/volume-encryption.rst
Normal file
213
doc/source/config-reference/block-storage/volume-encryption.rst
Normal file
@ -0,0 +1,213 @@
|
||||
==============================================
|
||||
Volume encryption supported by the key manager
|
||||
==============================================
|
||||
|
||||
We recommend the Key management service (barbican) for storing
|
||||
encryption keys used by the OpenStack volume encryption feature. It can
|
||||
be enabled by updating ``cinder.conf`` and ``nova.conf``.
|
||||
|
||||
Initial configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Configuration changes need to be made to any nodes running the
|
||||
``cinder-api`` or ``nova-compute`` server.
|
||||
|
||||
Steps to update ``cinder-api`` servers:
|
||||
|
||||
#. Edit the ``/etc/cinder/cinder.conf`` file to use Key management service
|
||||
as follows:
|
||||
|
||||
* Look for the ``[key_manager]`` section.
|
||||
|
||||
* Enter a new line directly below ``[key_manager]`` with the following:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
api_class = castellan.key_manager.barbican_key_manager.BarbicanKeyManager
|
||||
|
||||
#. Restart ``cinder-api``.
|
||||
|
||||
Update ``nova-compute`` servers:
|
||||
|
||||
#. Ensure the ``cryptsetup`` utility is installed, and install
|
||||
the ``python-barbicanclient`` Python package.
|
||||
|
||||
#. Set up the Key Manager service by editing ``/etc/nova/nova.conf``:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[key_manager]
|
||||
api_class = castellan.key_manager.barbican_key_manager.BarbicanKeyManager
|
||||
|
||||
.. note::
|
||||
|
||||
Use a '#' prefix to comment out the line in this section that
|
||||
begins with 'fixed_key'.
|
||||
|
||||
#. Restart ``nova-compute``.
|
||||
|
||||
|
||||
Key management access control
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Special privileges can be assigned on behalf of an end user to allow
|
||||
them to manage their own encryption keys, which are required when
|
||||
creating the encrypted volumes. The Barbican `Default Policy
|
||||
<https://docs.openstack.org/developer/barbican/admin-guide-cloud/access_control.html#default-policy>`_
|
||||
for access control specifies that only users with an ``admin`` or
|
||||
``creator`` role can create keys. The policy is very flexible and
|
||||
can be modified.
|
||||
|
||||
To assign the ``creator`` role, the admin must know the user ID,
|
||||
project ID, and creator role ID. See `Assign a role
|
||||
<https://docs.openstack.org/admin-guide/cli-manage-projects-users-and-roles.html#assign-a-role>`_
|
||||
for more information. An admin can list existing roles and associated
|
||||
IDs using the ``openstack role list`` command. If the creator
|
||||
role does not exist, the admin can `create the role
|
||||
<https://docs.openstack.org/admin-guide/cli-manage-projects-users-and-roles.html#create-a-role>`_.
|
||||
|
||||
|
||||
Create an encrypted volume type
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Block Storage volume type assignment provides scheduling to a specific
|
||||
back-end, and can be used to specify actionable information for a
|
||||
back-end storage device.
|
||||
|
||||
This example creates a volume type called LUKS and provides
|
||||
configuration information for the storage system to encrypt or decrypt
|
||||
the volume.
|
||||
|
||||
#. Source your admin credentials:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ . admin-openrc.sh
|
||||
|
||||
#. Create the volume type, marking the volume type as encrypted and providing
|
||||
the necessary details. Use ``--encryption-control-location`` to specify
|
||||
where encryption is performed: ``front-end`` (default) or ``back-end``.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume type create --encryption-provider nova.volume.encryptors.luks.LuksEncryptor \
|
||||
--encryption-cipher aes-xts-plain64 --encryption-key-size 256 --encryption-control-location front-end LUKS
|
||||
|
||||
+-------------+----------------------------------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------+----------------------------------------------------------------+
|
||||
| description | None |
|
||||
| encryption | cipher='aes-xts-plain64', control_location='front-end', |
|
||||
| | encryption_id='8584c43f-1666-43d1-a348-45cfcef72898', |
|
||||
| | key_size='256', |
|
||||
| | provider='nova.volume.encryptors.luks.LuksEncryptor' |
|
||||
| id | b9a8cff5-2f60-40d1-8562-d33f3bf18312 |
|
||||
| is_public | True |
|
||||
| name | LUKS |
|
||||
+-------------+----------------------------------------------------------------+
|
||||
|
||||
The OpenStack dashboard (horizon) supports creating the encrypted
|
||||
volume type as of the Kilo release. For instructions, see
|
||||
`Create an encrypted volume type
|
||||
<https://docs.openstack.org/admin-guide/dashboard-manage-volumes.html>`_.
|
||||
|
||||
Create an encrypted volume
~~~~~~~~~~~~~~~~~~~~~~~~~~

Use the OpenStack dashboard (horizon) or the :command:`openstack volume
create` command to create volumes just as you normally would. For an
encrypted volume, pass the ``--type LUKS`` flag, which specifies that the
volume type will be ``LUKS`` (Linux Unified Key Setup). If that argument is
left out, the default volume type is used and the volume is not encrypted.

#. Source your admin credentials:

   .. code-block:: console

      $ . admin-openrc.sh

#. Create an unencrypted 1 GB test volume:

   .. code-block:: console

      $ openstack volume create --size 1 'unencrypted volume'

#. Create an encrypted 1 GB test volume:

   .. code-block:: console

      $ openstack volume create --size 1 --type LUKS 'encrypted volume'

Notice the ``encrypted`` field in the output of these commands; it shows
``True`` or ``False``. The ``volume_type`` field is also shown for easy
review.

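To review these fields for the two test volumes, you can display each
volume again after creation (a sketch; the exact set of fields in the
output depends on the client version):

.. code-block:: console

   $ openstack volume show 'encrypted volume'
   $ openstack volume show 'unencrypted volume'
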
Non-admin users need the ``creator`` role to store secrets in Barbican
and to create encrypted volumes. As an administrator, you can give a user
the ``creator`` role in the following way:

.. code-block:: console

   $ openstack role add --project PROJECT --user USER creator

For details, see the
`Barbican Access Control page
<https://docs.openstack.org/developer/barbican/admin-guide-cloud/access_control.html>`_.

.. note::

   Some volume drivers do not set the ``encrypted`` flag on the volumes
   they manage. Attaching such an encrypted volume to a virtual guest
   will fail, because the OpenStack Compute service does not run the
   encryption providers when the flag is missing.

Testing volume encryption
~~~~~~~~~~~~~~~~~~~~~~~~~

This is a simple test scenario to help validate your encryption. It
assumes an LVM-based Block Storage server.

Perform these steps after completing the volume encryption setup and
creating the volume type for LUKS as described in the preceding
sections.

#. Create a VM:

   .. code-block:: console

      $ openstack server create --image cirros-0.3.1-x86_64-disk --flavor m1.tiny TESTVM

#. Create two volumes, one encrypted and one not encrypted, then attach them
   to your VM:

   .. code-block:: console

      $ openstack volume create --size 1 'unencrypted volume'
      $ openstack volume create --size 1 --type LUKS 'encrypted volume'
      $ openstack volume list
      $ openstack server add volume --device /dev/vdb TESTVM 'unencrypted volume'
      $ openstack server add volume --device /dev/vdc TESTVM 'encrypted volume'

#. On the VM, send some text to the newly attached volumes and synchronize
   them:

   .. code-block:: console

      # echo "Hello, world (unencrypted /dev/vdb)" >> /dev/vdb
      # echo "Hello, world (encrypted /dev/vdc)" >> /dev/vdc
      # sync && sleep 2
      # sync && sleep 2

#. On the system hosting the cinder volume service, synchronize to flush the
   I/O cache, then test to see if your strings can be found:

   .. code-block:: console

      # sync && sleep 2
      # sync && sleep 2
      # strings /dev/stack-volumes/volume-* | grep "Hello"
      Hello, world (unencrypted /dev/vdb)

In the above example, the search returns the string written to the
unencrypted volume but not the one written to the encrypted volume.

BIN
doc/source/config-reference/figures/bb-cinder-fig1.png
Normal file
BIN
doc/source/config-reference/figures/bb-cinder-fig1.png
Normal file
BIN
doc/source/config-reference/figures/ceph-architecture.png
Normal file
BIN
doc/source/config-reference/figures/ceph-architecture.png
Normal file
BIN
doc/source/config-reference/figures/emc-enabler.png
Normal file
BIN
doc/source/config-reference/figures/emc-enabler.png
Normal file
BIN
doc/source/config-reference/figures/ibm-storage-nova-concept.png
Normal file
BIN
doc/source/config-reference/figures/ibm-storage-nova-concept.png
Normal file
90
doc/source/config-reference/tables/cinder-api.rst
Normal file
90
doc/source/config-reference/tables/cinder-api.rst
Normal file
@ -0,0 +1,90 @@
..
   Warning: Do not edit this file. It is automatically generated from the
   software project's code and your changes will be overwritten.

   The tool to generate this file lives in openstack-doc-tools repository.

   Please make any changes needed in the code, then run the
   autogenerate-config-doc tool from the openstack-doc-tools repository, or
   ask for help on the documentation mailing list, IRC channel or meeting.

.. _cinder-api:

.. list-table:: Description of API configuration options
   :header-rows: 1
   :class: config-ref-table

   * - Configuration option = Default value
     - Description
   * - **[DEFAULT]**
     -
   * - ``api_rate_limit`` = ``True``
     - (Boolean) Enables or disables rate limit of the API.
   * - ``az_cache_duration`` = ``3600``
     - (Integer) Cache volume availability zones in memory for the provided duration in seconds
   * - ``backend_host`` = ``None``
     - (String) Backend override of host value.
   * - ``default_timeout`` = ``31536000``
     - (Integer) Default timeout for CLI operations in minutes. For example, LUN migration is a typical long running operation, which depends on the LUN size and the load of the array. An upper bound in the specific deployment can be set to avoid unnecessary long wait. By default, it is 365 days long.
   * - ``enable_v1_api`` = ``False``
     - (Boolean) DEPRECATED: Deploy v1 of the Cinder API.
   * - ``enable_v2_api`` = ``True``
     - (Boolean) DEPRECATED: Deploy v2 of the Cinder API.
   * - ``enable_v3_api`` = ``True``
     - (Boolean) Deploy v3 of the Cinder API.
   * - ``extra_capabilities`` = ``{}``
     - (String) User defined capabilities, a JSON formatted string specifying key/value pairs. The key/value pairs can be used by the CapabilitiesFilter to select between backends when requests specify volume types. For example, specifying a service level or the geographical location of a backend, then creating a volume type to allow the user to select by these different properties.
   * - ``ignore_pool_full_threshold`` = ``False``
     - (Boolean) Force LUN creation even if the full threshold of pool is reached. By default, the value is False.
   * - ``management_ips`` =
     - (String) List of Management IP addresses (separated by commas)
   * - ``message_ttl`` = ``2592000``
     - (Integer) message minimum life in seconds.
   * - ``osapi_max_limit`` = ``1000``
     - (Integer) The maximum number of items that a collection resource returns in a single response
   * - ``osapi_volume_base_URL`` = ``None``
     - (String) Base URL that will be presented to users in links to the OpenStack Volume API
   * - ``osapi_volume_ext_list`` =
     - (List) Specify list of extensions to load when using osapi_volume_extension option with cinder.api.contrib.select_extensions
   * - ``osapi_volume_extension`` = ``['cinder.api.contrib.standard_extensions']``
     - (Multi-valued) osapi volume extension to load
   * - ``osapi_volume_listen`` = ``0.0.0.0``
     - (String) IP address on which OpenStack Volume API listens
   * - ``osapi_volume_listen_port`` = ``8776``
     - (Port number) Port on which OpenStack Volume API listens
   * - ``osapi_volume_use_ssl`` = ``False``
     - (Boolean) Wraps the socket in a SSL context if True is set. A certificate file and key file must be specified.
   * - ``osapi_volume_workers`` = ``None``
     - (Integer) Number of workers for OpenStack Volume API service. The default is equal to the number of CPUs available.
   * - ``per_volume_size_limit`` = ``-1``
     - (Integer) Max size allowed per volume, in gigabytes
   * - ``public_endpoint`` = ``None``
     - (String) Public url to use for versions endpoint. The default is None, which will use the request's host_url attribute to populate the URL base. If Cinder is operating behind a proxy, you will want to change this to represent the proxy's URL.
   * - ``query_volume_filters`` = ``name, status, metadata, availability_zone, bootable, group_id``
     - (List) Volume filter options which non-admin user could use to query volumes. Default values are: ['name', 'status', 'metadata', 'availability_zone' ,'bootable', 'group_id']
   * - ``transfer_api_class`` = ``cinder.transfer.api.API``
     - (String) The full class name of the volume transfer API class
   * - ``volume_api_class`` = ``cinder.volume.api.API``
     - (String) The full class name of the volume API class to use
   * - ``volume_name_prefix`` = ``openstack-``
     - (String) Prefix before volume name to differentiate DISCO volume created through openstack and the other ones
   * - ``volume_name_template`` = ``volume-%s``
     - (String) Template string to be used to generate volume names
   * - ``volume_number_multiplier`` = ``-1.0``
     - (Floating point) Multiplier used for weighing volume number. Negative numbers mean to spread vs stack.
   * - ``volume_transfer_key_length`` = ``16``
     - (Integer) The number of characters in the autogenerated auth key.
   * - ``volume_transfer_salt_length`` = ``8``
     - (Integer) The number of characters in the salt.
   * - **[oslo_middleware]**
     -
   * - ``enable_proxy_headers_parsing`` = ``False``
     - (Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not.
   * - ``max_request_body_size`` = ``114688``
     - (Integer) The maximum body size for each request, in bytes.
   * - ``secure_proxy_ssl_header`` = ``X-Forwarded-Proto``
     - (String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy.
   * - **[oslo_versionedobjects]**
     -
   * - ``fatal_exception_format_errors`` = ``False``
     - (Boolean) Make exception message format errors fatal

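As an illustration only (not part of the generated table above), a
deployment might tune a few of these API options in the ``[DEFAULT]``
section of ``cinder.conf``; the values below are hypothetical:

.. code-block:: ini

   [DEFAULT]
   # Listen on all interfaces on the standard Block Storage API port.
   osapi_volume_listen = 0.0.0.0
   osapi_volume_listen_port = 8776
   # Spawn four API workers instead of one per CPU.
   osapi_volume_workers = 4
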
22
doc/source/config-reference/tables/cinder-auth.rst
Normal file
22
doc/source/config-reference/tables/cinder-auth.rst
Normal file
@ -0,0 +1,22 @@
..
   Warning: Do not edit this file. It is automatically generated from the
   software project's code and your changes will be overwritten.

   The tool to generate this file lives in openstack-doc-tools repository.

   Please make any changes needed in the code, then run the
   autogenerate-config-doc tool from the openstack-doc-tools repository, or
   ask for help on the documentation mailing list, IRC channel or meeting.

.. _cinder-auth:

.. list-table:: Description of authorization configuration options
   :header-rows: 1
   :class: config-ref-table

   * - Configuration option = Default value
     - Description
   * - **[DEFAULT]**
     -
   * - ``auth_strategy`` = ``keystone``
     - (String) The strategy to use for auth. Supports noauth or keystone.

48
doc/source/config-reference/tables/cinder-backups.rst
Normal file
48
doc/source/config-reference/tables/cinder-backups.rst
Normal file
@ -0,0 +1,48 @@
..
   Warning: Do not edit this file. It is automatically generated from the
   software project's code and your changes will be overwritten.

   The tool to generate this file lives in openstack-doc-tools repository.

   Please make any changes needed in the code, then run the
   autogenerate-config-doc tool from the openstack-doc-tools repository, or
   ask for help on the documentation mailing list, IRC channel or meeting.

.. _cinder-backups:

.. list-table:: Description of backups configuration options
   :header-rows: 1
   :class: config-ref-table

   * - Configuration option = Default value
     - Description
   * - **[DEFAULT]**
     -
   * - ``backup_api_class`` = ``cinder.backup.api.API``
     - (String) The full class name of the volume backup API class
   * - ``backup_compression_algorithm`` = ``zlib``
     - (String) Compression algorithm (None to disable)
   * - ``backup_driver`` = ``cinder.backup.drivers.swift``
     - (String) Driver to use for backups.
   * - ``backup_manager`` = ``cinder.backup.manager.BackupManager``
     - (String) Full class name for the Manager for volume backup
   * - ``backup_metadata_version`` = ``2``
     - (Integer) Backup metadata version to be used when backing up volume metadata. If this number is bumped, make sure the service doing the restore supports the new version.
   * - ``backup_name_template`` = ``backup-%s``
     - (String) Template string to be used to generate backup names
   * - ``backup_object_number_per_notification`` = ``10``
     - (Integer) The number of chunks or objects, for which one Ceilometer notification will be sent
   * - ``backup_service_inithost_offload`` = ``True``
     - (Boolean) Offload pending backup delete during backup service startup. If false, the backup service will remain down until all pending backups are deleted.
   * - ``backup_timer_interval`` = ``120``
     - (Integer) Interval, in seconds, between two progress notifications reporting the backup status
   * - ``backup_use_same_host`` = ``False``
     - (Boolean) Backup services use same backend.
   * - ``backup_use_temp_snapshot`` = ``False``
     - (Boolean) If this is set to True, the backup_use_temp_snapshot path will be used during the backup. Otherwise, it will use backup_use_temp_volume path.
   * - ``snapshot_check_timeout`` = ``3600``
     - (Integer) How long we check whether a snapshot is finished before we give up
   * - ``snapshot_name_template`` = ``snapshot-%s``
     - (String) Template string to be used to generate snapshot names
   * - ``snapshot_same_host`` = ``True``
     - (Boolean) Create volume from snapshot at the host where snapshot resides

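For illustration, selecting a backup driver and adjusting a couple of the
generic backup options in ``cinder.conf`` might look like this (values are
examples only):

.. code-block:: ini

   [DEFAULT]
   # Use the Ceph backup driver instead of the default Swift driver.
   backup_driver = cinder.backup.drivers.ceph
   backup_name_template = backup-%s
   backup_timer_interval = 120
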
34
doc/source/config-reference/tables/cinder-backups_ceph.rst
Normal file
34
doc/source/config-reference/tables/cinder-backups_ceph.rst
Normal file
@ -0,0 +1,34 @@
..
   Warning: Do not edit this file. It is automatically generated from the
   software project's code and your changes will be overwritten.

   The tool to generate this file lives in openstack-doc-tools repository.

   Please make any changes needed in the code, then run the
   autogenerate-config-doc tool from the openstack-doc-tools repository, or
   ask for help on the documentation mailing list, IRC channel or meeting.

.. _cinder-backups_ceph:

.. list-table:: Description of Ceph backup driver configuration options
   :header-rows: 1
   :class: config-ref-table

   * - Configuration option = Default value
     - Description
   * - **[DEFAULT]**
     -
   * - ``backup_ceph_chunk_size`` = ``134217728``
     - (Integer) The chunk size, in bytes, that a backup is broken into before transfer to the Ceph object store.
   * - ``backup_ceph_conf`` = ``/etc/ceph/ceph.conf``
     - (String) Ceph configuration file to use.
   * - ``backup_ceph_pool`` = ``backups``
     - (String) The Ceph pool where volume backups are stored.
   * - ``backup_ceph_stripe_count`` = ``0``
     - (Integer) RBD stripe count to use when creating a backup image.
   * - ``backup_ceph_stripe_unit`` = ``0``
     - (Integer) RBD stripe unit to use when creating a backup image.
   * - ``backup_ceph_user`` = ``cinder``
     - (String) The Ceph user to connect with. Default here is to use the same user as for Cinder volumes. If not using cephx this should be set to None.
   * - ``restore_discard_excess_bytes`` = ``True``
     - (Boolean) If True, always discard excess bytes when restoring volumes i.e. pad with zeroes.

48
doc/source/config-reference/tables/cinder-backups_gcs.rst
Normal file
48
doc/source/config-reference/tables/cinder-backups_gcs.rst
Normal file
@ -0,0 +1,48 @@
..
   Warning: Do not edit this file. It is automatically generated from the
   software project's code and your changes will be overwritten.

   The tool to generate this file lives in openstack-doc-tools repository.

   Please make any changes needed in the code, then run the
   autogenerate-config-doc tool from the openstack-doc-tools repository, or
   ask for help on the documentation mailing list, IRC channel or meeting.

.. _cinder-backups_gcs:

.. list-table:: Description of GCS backup driver configuration options
   :header-rows: 1
   :class: config-ref-table

   * - Configuration option = Default value
     - Description
   * - **[DEFAULT]**
     -
   * - ``backup_gcs_block_size`` = ``32768``
     - (Integer) The size in bytes that changes are tracked for incremental backups. backup_gcs_object_size has to be multiple of backup_gcs_block_size.
   * - ``backup_gcs_bucket`` = ``None``
     - (String) The GCS bucket to use.
   * - ``backup_gcs_bucket_location`` = ``US``
     - (String) Location of GCS bucket.
   * - ``backup_gcs_credential_file`` = ``None``
     - (String) Absolute path of GCS service account credential file.
   * - ``backup_gcs_enable_progress_timer`` = ``True``
     - (Boolean) Enable or Disable the timer to send the periodic progress notifications to Ceilometer when backing up the volume to the GCS backend storage. The default value is True to enable the timer.
   * - ``backup_gcs_num_retries`` = ``3``
     - (Integer) Number of times to retry.
   * - ``backup_gcs_object_size`` = ``52428800``
     - (Integer) The size in bytes of GCS backup objects.
   * - ``backup_gcs_project_id`` = ``None``
     - (String) Owner project id for GCS bucket.
   * - ``backup_gcs_proxy_url`` = ``None``
     - (URI) URL for http proxy access.
   * - ``backup_gcs_reader_chunk_size`` = ``2097152``
     - (Integer) GCS object will be downloaded in chunks of bytes.
   * - ``backup_gcs_retry_error_codes`` = ``429``
     - (List) List of GCS error codes.
   * - ``backup_gcs_storage_class`` = ``NEARLINE``
     - (String) Storage class of GCS bucket.
   * - ``backup_gcs_user_agent`` = ``gcscinder``
     - (String) Http user-agent string for gcs api.
   * - ``backup_gcs_writer_chunk_size`` = ``2097152``
     - (Integer) GCS object will be uploaded in chunks of bytes. Pass in a value of -1 if the file is to be uploaded as a single chunk.

24
doc/source/config-reference/tables/cinder-backups_glusterfs.rst
Normal file
24
doc/source/config-reference/tables/cinder-backups_glusterfs.rst
Normal file
@ -0,0 +1,24 @@
..
   Warning: Do not edit this file. It is automatically generated from the
   software project's code and your changes will be overwritten.

   The tool to generate this file lives in openstack-doc-tools repository.

   Please make any changes needed in the code, then run the
   autogenerate-config-doc tool from the openstack-doc-tools repository, or
   ask for help on the documentation mailing list, IRC channel or meeting.

.. _cinder-backups_glusterfs:

.. list-table:: Description of GlusterFS backup driver configuration options
   :header-rows: 1
   :class: config-ref-table

   * - Configuration option = Default value
     - Description
   * - **[DEFAULT]**
     -
   * - ``glusterfs_backup_mount_point`` = ``$state_path/backup_mount``
     - (String) Base dir containing mount point for gluster share.
   * - ``glusterfs_backup_share`` = ``None``
     - (String) GlusterFS share in <hostname|ipv4addr|ipv6addr>:<gluster_vol_name> format. Eg: 1.2.3.4:backup_vol

34
doc/source/config-reference/tables/cinder-backups_nfs.rst
Normal file
34
doc/source/config-reference/tables/cinder-backups_nfs.rst
Normal file
@ -0,0 +1,34 @@
..
   Warning: Do not edit this file. It is automatically generated from the
   software project's code and your changes will be overwritten.

   The tool to generate this file lives in openstack-doc-tools repository.

   Please make any changes needed in the code, then run the
   autogenerate-config-doc tool from the openstack-doc-tools repository, or
   ask for help on the documentation mailing list, IRC channel or meeting.

.. _cinder-backups_nfs:

.. list-table:: Description of NFS backup driver configuration options
   :header-rows: 1
   :class: config-ref-table

   * - Configuration option = Default value
     - Description
   * - **[DEFAULT]**
     -
   * - ``backup_container`` = ``None``
     - (String) Custom directory to use for backups.
   * - ``backup_enable_progress_timer`` = ``True``
     - (Boolean) Enable or Disable the timer to send the periodic progress notifications to Ceilometer when backing up the volume to the backend storage. The default value is True to enable the timer.
   * - ``backup_file_size`` = ``1999994880``
     - (Integer) The maximum size in bytes of the files used to hold backups. If the volume being backed up exceeds this size, then it will be backed up into multiple files. backup_file_size must be a multiple of backup_sha_block_size_bytes.
   * - ``backup_mount_options`` = ``None``
     - (String) Mount options passed to the NFS client. See NFS man page for details.
   * - ``backup_mount_point_base`` = ``$state_path/backup_mount``
     - (String) Base dir containing mount point for NFS share.
   * - ``backup_sha_block_size_bytes`` = ``32768``
     - (Integer) The size in bytes that changes are tracked for incremental backups. backup_file_size has to be multiple of backup_sha_block_size_bytes.
   * - ``backup_share`` = ``None``
     - (String) NFS share in hostname:path, ipv4addr:path, or "[ipv6addr]:path" format.

30
doc/source/config-reference/tables/cinder-backups_posix.rst
Normal file
30
doc/source/config-reference/tables/cinder-backups_posix.rst
Normal file
@ -0,0 +1,30 @@
..
   Warning: Do not edit this file. It is automatically generated from the
   software project's code and your changes will be overwritten.

   The tool to generate this file lives in openstack-doc-tools repository.

   Please make any changes needed in the code, then run the
   autogenerate-config-doc tool from the openstack-doc-tools repository, or
   ask for help on the documentation mailing list, IRC channel or meeting.

.. _cinder-backups_posix:

.. list-table:: Description of POSIX backup driver configuration options
   :header-rows: 1
   :class: config-ref-table

   * - Configuration option = Default value
     - Description
   * - **[DEFAULT]**
     -
   * - ``backup_container`` = ``None``
     - (String) Custom directory to use for backups.
   * - ``backup_enable_progress_timer`` = ``True``
     - (Boolean) Enable or Disable the timer to send the periodic progress notifications to Ceilometer when backing up the volume to the backend storage. The default value is True to enable the timer.
   * - ``backup_file_size`` = ``1999994880``
     - (Integer) The maximum size in bytes of the files used to hold backups. If the volume being backed up exceeds this size, then it will be backed up into multiple files. backup_file_size must be a multiple of backup_sha_block_size_bytes.
   * - ``backup_posix_path`` = ``$state_path/backup``
     - (String) Path specifying where to store backups.
   * - ``backup_sha_block_size_bytes`` = ``32768``
     - (Integer) The size in bytes that changes are tracked for incremental backups. backup_file_size has to be multiple of backup_sha_block_size_bytes.

56
doc/source/config-reference/tables/cinder-backups_swift.rst
Normal file
56
doc/source/config-reference/tables/cinder-backups_swift.rst
Normal file
@ -0,0 +1,56 @@
..
   Warning: Do not edit this file. It is automatically generated from the
   software project's code and your changes will be overwritten.

   The tool to generate this file lives in openstack-doc-tools repository.

   Please make any changes needed in the code, then run the
   autogenerate-config-doc tool from the openstack-doc-tools repository, or
   ask for help on the documentation mailing list, IRC channel or meeting.

.. _cinder-backups_swift:

.. list-table:: Description of Swift backup driver configuration options
   :header-rows: 1
   :class: config-ref-table

   * - Configuration option = Default value
     - Description
   * - **[DEFAULT]**
     -
   * - ``backup_swift_auth`` = ``per_user``
     - (String) Swift authentication mechanism
   * - ``backup_swift_auth_version`` = ``1``
     - (String) Swift authentication version. Specify "1" for auth 1.0, or "2" for auth 2.0 or "3" for auth 3.0
   * - ``backup_swift_block_size`` = ``32768``
     - (Integer) The size in bytes that changes are tracked for incremental backups. backup_swift_object_size has to be multiple of backup_swift_block_size.
   * - ``backup_swift_ca_cert_file`` = ``None``
     - (String) Location of the CA certificate file to use for swift client requests.
   * - ``backup_swift_container`` = ``volumebackups``
     - (String) The default Swift container to use
   * - ``backup_swift_enable_progress_timer`` = ``True``
     - (Boolean) Enable or Disable the timer to send the periodic progress notifications to Ceilometer when backing up the volume to the Swift backend storage. The default value is True to enable the timer.
   * - ``backup_swift_key`` = ``None``
     - (String) Swift key for authentication
   * - ``backup_swift_object_size`` = ``52428800``
     - (Integer) The size in bytes of Swift backup objects
   * - ``backup_swift_project`` = ``None``
     - (String) Swift project/account name. Required when connecting to an auth 3.0 system
   * - ``backup_swift_project_domain`` = ``None``
     - (String) Swift project domain name. Required when connecting to an auth 3.0 system
   * - ``backup_swift_retry_attempts`` = ``3``
     - (Integer) The number of retries to make for Swift operations
   * - ``backup_swift_retry_backoff`` = ``2``
     - (Integer) The backoff time in seconds between Swift retries
   * - ``backup_swift_tenant`` = ``None``
     - (String) Swift tenant/account name. Required when connecting to an auth 2.0 system
   * - ``backup_swift_url`` = ``None``
     - (URI) The URL of the Swift endpoint
   * - ``backup_swift_user`` = ``None``
     - (String) Swift user name
   * - ``backup_swift_user_domain`` = ``None``
     - (String) Swift user domain name. Required when connecting to an auth 3.0 system
   * - ``keystone_catalog_info`` = ``identity:Identity Service:publicURL``
     - (String) Info to match when looking for keystone in the service catalog. Format is: separated values of the form: <service_type>:<service_name>:<endpoint_type> - Only used if backup_swift_auth_url is unset
   * - ``swift_catalog_info`` = ``object-store:swift:publicURL``
     - (String) Info to match when looking for swift in the service catalog. Format is: separated values of the form: <service_type>:<service_name>:<endpoint_type> - Only used if backup_swift_url is unset

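For illustration, a Swift-backed backup configuration in ``cinder.conf``
might combine several of these options as follows (the endpoint,
container, and credentials shown are placeholders):

.. code-block:: ini

   [DEFAULT]
   backup_driver = cinder.backup.drivers.swift
   # Placeholder endpoint and credentials; use your deployment's values.
   backup_swift_url = http://controller:8080/v1/AUTH_
   backup_swift_container = volumebackups
   backup_swift_user = backup_user
   backup_swift_key = SECRET_KEY
   backup_swift_retry_attempts = 3
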
26
doc/source/config-reference/tables/cinder-backups_tsm.rst
Normal file
26
doc/source/config-reference/tables/cinder-backups_tsm.rst
Normal file
@ -0,0 +1,26 @@
..
   Warning: Do not edit this file. It is automatically generated from the
   software project's code and your changes will be overwritten.

   The tool to generate this file lives in openstack-doc-tools repository.

   Please make any changes needed in the code, then run the
   autogenerate-config-doc tool from the openstack-doc-tools repository, or
   ask for help on the documentation mailing list, IRC channel or meeting.

.. _cinder-backups_tsm:

.. list-table:: Description of IBM Tivoli Storage Manager backup driver configuration options
   :header-rows: 1
   :class: config-ref-table

   * - Configuration option = Default value
     - Description
   * - **[DEFAULT]**
     -
   * - ``backup_tsm_compression`` = ``True``
     - (Boolean) Enable or Disable compression for backups
   * - ``backup_tsm_password`` = ``password``
     - (String) TSM password for the running username
   * - ``backup_tsm_volume_prefix`` = ``backup``
     - (String) Volume prefix for the backup id when backing up to TSM

22
doc/source/config-reference/tables/cinder-block-device.rst
Normal file
22
doc/source/config-reference/tables/cinder-block-device.rst
Normal file
@ -0,0 +1,22 @@
..
   Warning: Do not edit this file. It is automatically generated from the
   software project's code and your changes will be overwritten.

   The tool to generate this file lives in openstack-doc-tools repository.

   Please make any changes needed in the code, then run the
   autogenerate-config-doc tool from the openstack-doc-tools repository, or
   ask for help on the documentation mailing list, IRC channel or meeting.

.. _cinder-block-device:

.. list-table:: Description of block device configuration options
   :header-rows: 1
   :class: config-ref-table

   * - Configuration option = Default value
     - Description
   * - **[DEFAULT]**
     -
   * - ``available_devices`` =
     - (List) List of all available devices

36
doc/source/config-reference/tables/cinder-blockbridge.rst
Normal file
36
doc/source/config-reference/tables/cinder-blockbridge.rst
Normal file
@ -0,0 +1,36 @@
..
   Warning: Do not edit this file. It is automatically generated from the
   software project's code and your changes will be overwritten.

   The tool to generate this file lives in openstack-doc-tools repository.

   Please make any changes needed in the code, then run the
   autogenerate-config-doc tool from the openstack-doc-tools repository, or
   ask for help on the documentation mailing list, IRC channel or meeting.

.. _cinder-blockbridge:

.. list-table:: Description of BlockBridge EPS volume driver configuration options
   :header-rows: 1
   :class: config-ref-table

   * - Configuration option = Default value
     - Description
   * - **[DEFAULT]**
     -
   * - ``blockbridge_api_host`` = ``None``
     - (String) IP address/hostname of Blockbridge API.
   * - ``blockbridge_api_port`` = ``None``
     - (Integer) Override HTTPS port to connect to Blockbridge API server.
   * - ``blockbridge_auth_password`` = ``None``
     - (String) Blockbridge API password (for auth scheme 'password')
   * - ``blockbridge_auth_scheme`` = ``token``
     - (String) Blockbridge API authentication scheme (token or password)
   * - ``blockbridge_auth_token`` = ``None``
     - (String) Blockbridge API token (for auth scheme 'token')
   * - ``blockbridge_auth_user`` = ``None``
     - (String) Blockbridge API user (for auth scheme 'password')
   * - ``blockbridge_default_pool`` = ``None``
     - (String) Default pool name if unspecified.
   * - ``blockbridge_pools`` = ``{'OpenStack': '+openstack'}``
     - (Dict) Defines the set of exposed pools and their associated backend query strings

44
doc/source/config-reference/tables/cinder-cloudbyte.rst
Normal file
44
doc/source/config-reference/tables/cinder-cloudbyte.rst
Normal file
@ -0,0 +1,44 @@
..
   Warning: Do not edit this file. It is automatically generated from the
   software project's code and your changes will be overwritten.

   The tool to generate this file lives in openstack-doc-tools repository.

   Please make any changes needed in the code, then run the
   autogenerate-config-doc tool from the openstack-doc-tools repository, or
   ask for help on the documentation mailing list, IRC channel or meeting.

.. _cinder-cloudbyte:

.. list-table:: Description of CloudByte volume driver configuration options
   :header-rows: 1
   :class: config-ref-table

   * - Configuration option = Default value
     - Description
   * - **[DEFAULT]**
     -
   * - ``cb_account_name`` = ``None``
     - (String) CloudByte storage specific account name. This maps to a project name in OpenStack.
   * - ``cb_add_qosgroup`` = ``{'latency': '15', 'iops': '10', 'graceallowed': 'false', 'iopscontrol': 'true', 'memlimit': '0', 'throughput': '0', 'tpcontrol': 'false', 'networkspeed': '0'}``
     - (Dict) These values will be used for CloudByte storage's addQos API call.
   * - ``cb_apikey`` = ``None``
     - (String) Driver will use this API key to authenticate against the CloudByte storage's management interface.
   * - ``cb_auth_group`` = ``None``
     - (String) This corresponds to the discovery authentication group in CloudByte storage. Chap users are added to this group. Driver uses the first user found for this group. Default value is None.
   * - ``cb_confirm_volume_create_retries`` = ``3``
     - (Integer) Will confirm a successful volume creation in CloudByte storage by making this many number of attempts.
   * - ``cb_confirm_volume_create_retry_interval`` = ``5``
     - (Integer) A retry value in seconds. Will be used by the driver to check if volume creation was successful in CloudByte storage.
   * - ``cb_confirm_volume_delete_retries`` = ``3``
     - (Integer) Will confirm a successful volume deletion in CloudByte storage by making this many number of attempts.
   * - ``cb_confirm_volume_delete_retry_interval`` = ``5``
     - (Integer) A retry value in seconds. Will be used by the driver to check if volume deletion was successful in CloudByte storage.
   * - ``cb_create_volume`` = ``{'compression': 'off', 'deduplication': 'off', 'blocklength': '512B', 'sync': 'always', 'protocoltype': 'ISCSI', 'recordsize': '16k'}``
     - (Dict) These values will be used for CloudByte storage's createVolume API call.
   * - ``cb_tsm_name`` = ``None``
     - (String) This corresponds to the name of Tenant Storage Machine (TSM) in CloudByte storage. A volume will be created in this TSM.
   * - ``cb_update_file_system`` = ``compression, sync, noofcopies, readonly``
     - (List) These values will be used for CloudByte storage's updateFileSystem API call.
   * - ``cb_update_qos_group`` = ``iops, latency, graceallowed``
     - (List) These values will be used for CloudByte storage's updateQosGroup API call.

22
doc/source/config-reference/tables/cinder-coho.rst
Normal file
22
doc/source/config-reference/tables/cinder-coho.rst
Normal file
@ -0,0 +1,22 @@
..
   Warning: Do not edit this file. It is automatically generated from the
   software project's code and your changes will be overwritten.

   The tool to generate this file lives in openstack-doc-tools repository.

   Please make any changes needed in the code, then run the
   autogenerate-config-doc tool from the openstack-doc-tools repository, or
   ask for help on the documentation mailing list, IRC channel or meeting.

.. _cinder-coho:

.. list-table:: Description of Coho volume driver configuration options
   :header-rows: 1
   :class: config-ref-table

   * - Configuration option = Default value
     - Description
   * - **[DEFAULT]**
     -
   * - ``coho_rpc_port`` = ``2049``
     - (Integer) RPC port to connect to Coho Data MicroArray

162
doc/source/config-reference/tables/cinder-common.rst
Normal file
162
doc/source/config-reference/tables/cinder-common.rst
Normal file
@ -0,0 +1,162 @@
..
   Warning: Do not edit this file. It is automatically generated from the
   software project's code and your changes will be overwritten.

   The tool to generate this file lives in openstack-doc-tools repository.

   Please make any changes needed in the code, then run the
   autogenerate-config-doc tool from the openstack-doc-tools repository, or
   ask for help on the documentation mailing list, IRC channel or meeting.

.. _cinder-common:

.. list-table:: Description of common configuration options
   :header-rows: 1
   :class: config-ref-table

   * - Configuration option = Default value
     - Description
   * - **[DEFAULT]**
     -
   * - ``allow_availability_zone_fallback`` = ``False``
     - (Boolean) If the requested Cinder availability zone is unavailable, fall back to the value of default_availability_zone, then storage_availability_zone, instead of failing.
   * - ``chap`` = ``disabled``
     - (String) CHAP authentication mode, effective only for iscsi (disabled|enabled)
   * - ``chap_password`` =
     - (String) Password for specified CHAP account name.
   * - ``chap_username`` =
     - (String) CHAP user name.
   * - ``chiscsi_conf`` = ``/etc/chelsio-iscsi/chiscsi.conf``
     - (String) Chiscsi (CXT) global defaults configuration file
   * - ``cinder_internal_tenant_project_id`` = ``None``
     - (String) ID of the project which will be used as the Cinder internal tenant.
   * - ``cinder_internal_tenant_user_id`` = ``None``
     - (String) ID of the user to be used in volume operations as the Cinder internal tenant.
   * - ``cluster`` = ``None``
     - (String) Name of this cluster. Used to group volume hosts that share the same backend configurations to work in HA Active-Active mode. Active-Active is not yet supported.
   * - ``compute_api_class`` = ``cinder.compute.nova.API``
     - (String) The full class name of the compute API class to use
   * - ``connection_type`` = ``iscsi``
     - (String) Connection type to the IBM Storage Array
   * - ``consistencygroup_api_class`` = ``cinder.consistencygroup.api.API``
     - (String) The full class name of the consistencygroup API class
   * - ``default_availability_zone`` = ``None``
     - (String) Default availability zone for new volumes. If not set, the storage_availability_zone option value is used as the default for new volumes.
   * - ``default_group_type`` = ``None``
     - (String) Default group type to use
   * - ``default_volume_type`` = ``None``
     - (String) Default volume type to use
   * - ``driver_client_cert`` = ``None``
     - (String) The path to the client certificate for verification, if the driver supports it.
   * - ``driver_client_cert_key`` = ``None``
     - (String) The path to the client certificate key for verification, if the driver supports it.
   * - ``driver_data_namespace`` = ``None``
     - (String) Namespace for driver private data values to be saved in.
   * - ``driver_ssl_cert_path`` = ``None``
     - (String) Can be used to specify a non default path to a CA_BUNDLE file or directory with certificates of trusted CAs, which will be used to validate the backend
   * - ``driver_ssl_cert_verify`` = ``False``
     - (Boolean) If set to True the http client will validate the SSL certificate of the backend endpoint.
   * - ``enable_force_upload`` = ``False``
     - (Boolean) Enables the Force option on upload_to_image. This enables running upload_volume on in-use volumes for backends that support it.
   * - ``enable_new_services`` = ``True``
     - (Boolean) Services to be added to the available pool on create
   * - ``enable_unsupported_driver`` = ``False``
     - (Boolean) Set this to True when you want to allow an unsupported driver to start. Drivers that haven't maintained a working CI system and testing are marked as unsupported until CI is working again. This also marks a driver as deprecated and may be removed in the next release.
   * - ``end_time`` = ``None``
     - (String) If this option is specified then the end time specified is used instead of the end time of the last completed audit period.
   * - ``enforce_multipath_for_image_xfer`` = ``False``
     - (Boolean) If this is set to True, attachment of volumes for image transfer will be aborted when multipathd is not running. Otherwise, it will fallback to single path.
   * - ``executor_thread_pool_size`` = ``64``
     - (Integer) Size of executor thread pool.
   * - ``fatal_exception_format_errors`` = ``False``
     - (Boolean) Make exception message format errors fatal.
   * - ``group_api_class`` = ``cinder.group.api.API``
     - (String) The full class name of the group API class
   * - ``host`` = ``localhost``
     - (String) Name of this node. This can be an opaque identifier. It is not necessarily a host name, FQDN, or IP address.
   * - ``iet_conf`` = ``/etc/iet/ietd.conf``
     - (String) IET configuration file
   * - ``iscsi_secondary_ip_addresses`` =
     - (List) The list of secondary IP addresses of the iSCSI daemon
   * - ``max_over_subscription_ratio`` = ``20.0``
     - (Floating point) Float representation of the over subscription ratio when thin provisioning is involved. Default ratio is 20.0, meaning provisioned capacity can be 20 times of the total physical capacity. If the ratio is 10.5, it means provisioned capacity can be 10.5 times of the total physical capacity. A ratio of 1.0 means provisioned capacity cannot exceed the total physical capacity. The ratio has to be a minimum of 1.0.
   * - ``monkey_patch`` = ``False``
     - (Boolean) Enable monkey patching
   * - ``monkey_patch_modules`` =
     - (List) List of modules/decorators to monkey patch
   * - ``my_ip`` = ``10.0.0.1``
     - (String) IP address of this host
   * - ``no_snapshot_gb_quota`` = ``False``
     - (Boolean) Whether snapshots count against gigabyte quota
   * - ``num_shell_tries`` = ``3``
     - (Integer) Number of times to attempt to run flakey shell commands
   * - ``os_privileged_user_auth_url`` = ``None``
     - (URI) Auth URL associated with the OpenStack privileged account.
   * - ``os_privileged_user_name`` = ``None``
     - (String) OpenStack privileged account username. Used for requests to other services (such as Nova) that require an account with special rights.
   * - ``os_privileged_user_password`` = ``None``
     - (String) Password associated with the OpenStack privileged account.
   * - ``os_privileged_user_tenant`` = ``None``
     - (String) Tenant name associated with the OpenStack privileged account.
   * - ``periodic_fuzzy_delay`` = ``60``
     - (Integer) Range, in seconds, to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0)
   * - ``periodic_interval`` = ``60``
     - (Integer) Interval, in seconds, between running periodic tasks
   * - ``replication_device`` = ``None``
     - (Unknown) Multi opt of dictionaries to represent a replication target device. This option may be specified multiple times in a single config section to specify multiple replication target devices. Each entry takes the standard dict config form: replication_device = target_device_id:<required>,key1:value1,key2:value2...
   * - ``report_discard_supported`` = ``False``
     - (Boolean) Report to clients of Cinder that the backend supports discard (aka. trim/unmap). This will not actually change the behavior of the backend or the client directly, it will only notify that it can be used.
   * - ``report_interval`` = ``10``
     - (Integer) Interval, in seconds, between nodes reporting state to datastore
   * - ``reserved_percentage`` = ``0``
     - (Integer) The percentage of backend capacity is reserved
   * - ``rootwrap_config`` = ``/etc/cinder/rootwrap.conf``
     - (String) Path to the rootwrap configuration file to use for running commands as root
   * - ``send_actions`` = ``False``
     - (Boolean) Send the volume and snapshot create and delete notifications generated in the specified period.
   * - ``service_down_time`` = ``60``
     - (Integer) Maximum time since last check-in for a service to be considered up
   * - ``ssh_hosts_key_file`` = ``$state_path/ssh_known_hosts``
     - (String) File containing SSH host keys for the systems with which Cinder needs to communicate. OPTIONAL: Default=$state_path/ssh_known_hosts
   * - ``start_time`` = ``None``
     - (String) If this option is specified then the start time specified is used instead of the start time of the last completed audit period.
   * - ``state_path`` = ``/var/lib/cinder``
     - (String) Top-level directory for maintaining cinder's state
   * - ``storage_availability_zone`` = ``nova``
     - (String) Availability zone of this node
   * - ``storage_protocol`` = ``iscsi``
     - (String) Protocol for transferring data between host and storage back-end.
   * - ``strict_ssh_host_key_policy`` = ``False``
     - (Boolean) Option to enable strict host key checking. When set to "True" Cinder will only connect to systems with a host key present in the configured "ssh_hosts_key_file". When set to "False" the host key will be saved upon first connection and used for subsequent connections. Default=False
   * - ``suppress_requests_ssl_warnings`` = ``False``
     - (Boolean) Suppress requests library SSL certificate warnings.
   * - ``tcp_keepalive`` = ``True``
     - (Boolean) Sets the value of TCP_KEEPALIVE (True/False) for each server socket.
   * - ``tcp_keepalive_count`` = ``None``
     - (Integer) Sets the value of TCP_KEEPCNT for each server socket. Not supported on OS X.
   * - ``tcp_keepalive_interval`` = ``None``
     - (Integer) Sets the value of TCP_KEEPINTVL in seconds for each server socket. Not supported on OS X.
   * - ``until_refresh`` = ``0``
     - (Integer) Count of reservations until usage is refreshed
   * - ``use_chap_auth`` = ``False``
     - (Boolean) Option to enable/disable CHAP authentication for targets.
   * - ``use_forwarded_for`` = ``False``
     - (Boolean) Treat X-Forwarded-For as the canonical remote address. Only enable this if you have a sanitizing proxy.
   * - **[healthcheck]**
     -
   * - ``backends`` =
     - (List) Additional backends that can perform health checks and report that information back as part of a request.
   * - ``detailed`` = ``False``
     - (Boolean) Show more detailed information as part of the response
   * - ``disable_by_file_path`` = ``None``
     - (String) Check the presence of a file to determine if an application is running on a port. Used by DisableByFileHealthcheck plugin.
   * - ``disable_by_file_paths`` =
     - (List) Check the presence of a file based on a port to determine if an application is running on a port. Expects a "port:path" list of strings. Used by DisableByFilesPortsHealthcheck plugin.
   * - ``path`` = ``/healthcheck``
     - (String) DEPRECATED: The path to respond to healthcheck requests on.
   * - **[key_manager]**
     -
   * - ``api_class`` = ``castellan.key_manager.barbican_key_manager.BarbicanKeyManager``
     - (String) The full class name of the key manager API class
   * - ``fixed_key`` = ``None``
     - (String) Fixed key returned by key manager, specified in hex

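As a small illustration of these common options, an operator could make
the encrypted ``LUKS`` volume type created earlier in this guide the
default and pin the availability zone in ``cinder.conf`` (example values
only):

.. code-block:: ini

   [DEFAULT]
   default_volume_type = LUKS
   storage_availability_zone = nova
   state_path = /var/lib/cinder
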
34
doc/source/config-reference/tables/cinder-compute.rst
Normal file
34
doc/source/config-reference/tables/cinder-compute.rst
Normal file
@ -0,0 +1,34 @@
..
   Warning: Do not edit this file. It is automatically generated from the
   software project's code and your changes will be overwritten.

   The tool to generate this file lives in openstack-doc-tools repository.

   Please make any changes needed in the code, then run the
   autogenerate-config-doc tool from the openstack-doc-tools repository, or
   ask for help on the documentation mailing list, IRC channel or meeting.

.. _cinder-compute:

.. list-table:: Description of Compute configuration options
   :header-rows: 1
   :class: config-ref-table

   * - Configuration option = Default value
     - Description
   * - **[DEFAULT]**
     -
   * - ``nova_api_insecure`` = ``False``
     - (Boolean) Allow to perform insecure SSL requests to nova
   * - ``nova_ca_certificates_file`` = ``None``
     - (String) Location of ca certificates file to use for nova client requests.
   * - ``nova_catalog_admin_info`` = ``compute:Compute Service:adminURL``
     - (String) Same as nova_catalog_info, but for admin endpoint.
   * - ``nova_catalog_info`` = ``compute:Compute Service:publicURL``
     - (String) Match this value when searching for nova in the service catalog. Format is: separated values of the form: <service_type>:<service_name>:<endpoint_type>
   * - ``nova_endpoint_admin_template`` = ``None``
     - (String) Same as nova_endpoint_template, but for admin endpoint.
   * - ``nova_endpoint_template`` = ``None``
     - (String) Override service catalog lookup with template for nova endpoint e.g. http://localhost:8774/v2/%(project_id)s
   * - ``os_region_name`` = ``None``
     - (String) Region name of this node

28
doc/source/config-reference/tables/cinder-coordination.rst
Normal file
28
doc/source/config-reference/tables/cinder-coordination.rst
Normal file
@ -0,0 +1,28 @@
..
   Warning: Do not edit this file. It is automatically generated from the
   software project's code and your changes will be overwritten.

   The tool to generate this file lives in openstack-doc-tools repository.

   Please make any changes needed in the code, then run the
   autogenerate-config-doc tool from the openstack-doc-tools repository, or
   ask for help on the documentation mailing list, IRC channel or meeting.

.. _cinder-coordination:

.. list-table:: Description of Coordination configuration options
   :header-rows: 1
   :class: config-ref-table

   * - Configuration option = Default value
     - Description
   * - **[coordination]**
     -
   * - ``backend_url`` = ``file://$state_path``
     - (String) The backend URL to use for distributed coordination.
   * - ``heartbeat`` = ``1.0``
     - (Floating point) Number of seconds between heartbeats for distributed coordination.
   * - ``initial_reconnect_backoff`` = ``0.1``
     - (Floating point) Initial number of seconds to wait after failed reconnection.
   * - ``max_reconnect_backoff`` = ``60.0``
     - (Floating point) Maximum number of seconds between sequential reconnection retries.

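For illustration, pointing the coordination layer at an external backend
rather than the default file-based one could look like this (the
ZooKeeper endpoint is a placeholder and assumes the corresponding tooz
driver is installed):

.. code-block:: ini

   [coordination]
   # Placeholder URL; the default is file://$state_path.
   backend_url = zookeeper://controller:2181
   heartbeat = 1.0
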