diff --git a/doc/source/config-reference/block-storage.rst b/doc/source/config-reference/block-storage.rst new file mode 100644 index 00000000000..93c1d6dd0f4 --- /dev/null +++ b/doc/source/config-reference/block-storage.rst @@ -0,0 +1,27 @@ +=================================== +Block Storage Service Configuration +=================================== + +.. toctree:: + :maxdepth: 1 + + block-storage/block-storage-overview.rst + block-storage/volume-drivers.rst + block-storage/backup-drivers.rst + block-storage/schedulers.rst + block-storage/logs.rst + block-storage/fc-zoning.rst + block-storage/nested-quota.rst + block-storage/volume-encryption.rst + block-storage/config-options.rst + block-storage/samples/index.rst + tables/conf-changes/cinder.rst + +.. note:: + + The common configurations for shared service and libraries, + such as database connections and RPC messaging, + are described at :doc:`common-configurations`. + +The Block Storage service works with many different storage +drivers that you can configure by using these instructions. diff --git a/doc/source/config-reference/block-storage/backup-drivers.rst b/doc/source/config-reference/block-storage/backup-drivers.rst new file mode 100644 index 00000000000..19c9780e90e --- /dev/null +++ b/doc/source/config-reference/block-storage/backup-drivers.rst @@ -0,0 +1,24 @@ +============== +Backup drivers +============== + +.. sort by the drivers by open source software +.. and the drivers for proprietary components + +.. toctree:: + + backup/ceph-backup-driver.rst + backup/glusterfs-backup-driver.rst + backup/nfs-backup-driver.rst + backup/posix-backup-driver.rst + backup/swift-backup-driver.rst + backup/gcs-backup-driver.rst + backup/tsm-backup-driver.rst + +This section describes how to configure the cinder-backup service and +its drivers. + +The volume drivers are included with the `Block Storage repository +`_. To set a backup +driver, use the ``backup_driver`` flag. By default there is no backup +driver enabled. diff --git a/doc/source/config-reference/block-storage/backup/ceph-backup-driver.rst b/doc/source/config-reference/block-storage/backup/ceph-backup-driver.rst new file mode 100644 index 00000000000..44fa87d7ce4 --- /dev/null +++ b/doc/source/config-reference/block-storage/backup/ceph-backup-driver.rst @@ -0,0 +1,56 @@ +================== +Ceph backup driver +================== + +The Ceph backup driver backs up volumes of any type to a Ceph back-end +store. The driver can also detect whether the volume to be backed up is +a Ceph RBD volume, and if so, it tries to perform incremental and +differential backups. + +For source Ceph RBD volumes, you can perform backups within the same +Ceph pool (not recommended). You can also perform backups between +different Ceph pools and between different Ceph clusters. + +At the time of writing, differential backup support in Ceph/librbd was +quite new. This driver attempts a differential backup in the first +instance. If the differential backup fails, the driver falls back to +full backup/copy. + +If incremental backups are used, multiple backups of the same volume are +stored as snapshots so that minimal space is consumed in the backup +store. It takes far less time to restore a volume than to take a full +copy. + +.. note:: + + Block Storage enables you to: + + - Restore to a new volume, which is the default and recommended + action. + + - Restore to the original volume from which the backup was taken. + The restore action takes a full copy because this is the safest + action. 
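+
+For illustration, both restore actions can be requested with the ``cinder``
+client. This is only a sketch: the backup and volume IDs below are
+placeholders.
+
+.. code-block:: console
+
+   # Restore to a new volume (default, recommended)
+   $ cinder backup-restore BACKUP_ID
+
+   # Restore to the original source volume (performed as a full copy)
+   $ cinder backup-restore --volume SOURCE_VOLUME_ID BACKUP_ID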
+ +To enable the Ceph backup driver, include the following option in the +``cinder.conf`` file: + +.. code-block:: ini + + backup_driver = cinder.backup.drivers.ceph + +The following configuration options are available for the Ceph backup +driver. + +.. include:: ../../tables/cinder-backups_ceph.rst + +This example shows the default options for the Ceph backup driver. + +.. code-block:: ini + + backup_ceph_conf=/etc/ceph/ceph.conf + backup_ceph_user = cinder-backup + backup_ceph_chunk_size = 134217728 + backup_ceph_pool = backups + backup_ceph_stripe_unit = 0 + backup_ceph_stripe_count = 0 diff --git a/doc/source/config-reference/block-storage/backup/gcs-backup-driver.rst b/doc/source/config-reference/block-storage/backup/gcs-backup-driver.rst new file mode 100644 index 00000000000..ca6b96962c8 --- /dev/null +++ b/doc/source/config-reference/block-storage/backup/gcs-backup-driver.rst @@ -0,0 +1,18 @@ +======================================= +Google Cloud Storage backup driver +======================================= + +The Google Cloud Storage (GCS) backup driver backs up volumes of any type to +Google Cloud Storage. + +To enable the GCS backup driver, include the following option in the +``cinder.conf`` file: + +.. code-block:: ini + + backup_driver = cinder.backup.drivers.google + +The following configuration options are available for the GCS backup +driver. + +.. include:: ../../tables/cinder-backups_gcs.rst diff --git a/doc/source/config-reference/block-storage/backup/glusterfs-backup-driver.rst b/doc/source/config-reference/block-storage/backup/glusterfs-backup-driver.rst new file mode 100644 index 00000000000..9980702d67b --- /dev/null +++ b/doc/source/config-reference/block-storage/backup/glusterfs-backup-driver.rst @@ -0,0 +1,17 @@ +======================= +GlusterFS backup driver +======================= + +The GlusterFS backup driver backs up volumes of any type to GlusterFS. + +To enable the GlusterFS backup driver, include the following option in the +``cinder.conf`` file: + +.. code-block:: ini + + backup_driver = cinder.backup.drivers.glusterfs + +The following configuration options are available for the GlusterFS backup +driver. + +.. include:: ../../tables/cinder-backups_glusterfs.rst diff --git a/doc/source/config-reference/block-storage/backup/nfs-backup-driver.rst b/doc/source/config-reference/block-storage/backup/nfs-backup-driver.rst new file mode 100644 index 00000000000..bd6c19273ce --- /dev/null +++ b/doc/source/config-reference/block-storage/backup/nfs-backup-driver.rst @@ -0,0 +1,18 @@ +================= +NFS backup driver +================= + +The backup driver for the NFS back end backs up volumes of any type to +an NFS exported backup repository. + +To enable the NFS backup driver, include the following option in the +``[DEFAULT]`` section of the ``cinder.conf`` file: + +.. code-block:: ini + + backup_driver = cinder.backup.drivers.nfs + +The following configuration options are available for the NFS back-end +backup driver. + +.. 
include:: ../../tables/cinder-backups_nfs.rst diff --git a/doc/source/config-reference/block-storage/backup/posix-backup-driver.rst b/doc/source/config-reference/block-storage/backup/posix-backup-driver.rst new file mode 100644 index 00000000000..18dfd0b2868 --- /dev/null +++ b/doc/source/config-reference/block-storage/backup/posix-backup-driver.rst @@ -0,0 +1,18 @@ +================================ +POSIX file systems backup driver +================================ + +The POSIX file systems backup driver backs up volumes of any type to +POSIX file systems. + +To enable the POSIX file systems backup driver, include the following +option in the ``cinder.conf`` file: + +.. code-block:: ini + + backup_driver = cinder.backup.drivers.posix + +The following configuration options are available for the POSIX +file systems backup driver. + +.. include:: ../../tables/cinder-backups_posix.rst diff --git a/doc/source/config-reference/block-storage/backup/swift-backup-driver.rst b/doc/source/config-reference/block-storage/backup/swift-backup-driver.rst new file mode 100644 index 00000000000..2a1cc03cbc9 --- /dev/null +++ b/doc/source/config-reference/block-storage/backup/swift-backup-driver.rst @@ -0,0 +1,52 @@ +=================== +Swift backup driver +=================== + +The backup driver for the swift back end performs a volume backup to an +object storage system. + +To enable the swift backup driver, include the following option in the +``cinder.conf`` file: + +.. code-block:: ini + + backup_driver = cinder.backup.drivers.swift + +The following configuration options are available for the Swift back-end +backup driver. + +.. include:: ../../tables/cinder-backups_swift.rst + +To enable the swift backup driver for 1.0, 2.0, or 3.0 authentication version, +specify ``1``, ``2``, or ``3`` correspondingly. For example: + +.. code-block:: ini + + backup_swift_auth_version = 2 + +In addition, the 2.0 authentication system requires the definition of the +``backup_swift_tenant`` setting: + +.. code-block:: ini + + backup_swift_tenant = + +This example shows the default options for the Swift back-end backup +driver. + +.. code-block:: ini + + backup_swift_url = http://localhost:8080/v1/AUTH_ + backup_swift_auth_url = http://localhost:5000/v3 + backup_swift_auth = per_user + backup_swift_auth_version = 1 + backup_swift_user = + backup_swift_user_domain = + backup_swift_key = + backup_swift_container = volumebackups + backup_swift_object_size = 52428800 + backup_swift_project = + backup_swift_project_domain = + backup_swift_retry_attempts = 3 + backup_swift_retry_backoff = 2 + backup_compression_algorithm = zlib diff --git a/doc/source/config-reference/block-storage/backup/tsm-backup-driver.rst b/doc/source/config-reference/block-storage/backup/tsm-backup-driver.rst new file mode 100644 index 00000000000..1ab1294cfef --- /dev/null +++ b/doc/source/config-reference/block-storage/backup/tsm-backup-driver.rst @@ -0,0 +1,31 @@ +======================================== +IBM Tivoli Storage Manager backup driver +======================================== + +The IBM Tivoli Storage Manager (TSM) backup driver enables performing +volume backups to a TSM server. + +The TSM client should be installed and configured on the machine running +the cinder-backup service. See the IBM Tivoli Storage Manager +Backup-Archive Client Installation and User's Guide for details on +installing the TSM client. + +To enable the IBM TSM backup driver, include the following option in +``cinder.conf``: + +.. 
code-block:: ini + + backup_driver = cinder.backup.drivers.tsm + +The following configuration options are available for the TSM backup +driver. + +.. include:: ../../tables/cinder-backups_tsm.rst + +This example shows the default options for the TSM backup driver. + +.. code-block:: ini + + backup_tsm_volume_prefix = backup + backup_tsm_password = password + backup_tsm_compression = True diff --git a/doc/source/config-reference/block-storage/block-storage-overview.rst b/doc/source/config-reference/block-storage/block-storage-overview.rst new file mode 100644 index 00000000000..d06609ec7da --- /dev/null +++ b/doc/source/config-reference/block-storage/block-storage-overview.rst @@ -0,0 +1,89 @@ +========================================= +Introduction to the Block Storage service +========================================= + +The Block Storage service provides persistent block storage +resources that Compute instances can consume. This includes +secondary attached storage similar to the Amazon Elastic Block Storage +(EBS) offering. In addition, you can write images to a Block Storage +device for Compute to use as a bootable persistent instance. + +The Block Storage service differs slightly from the Amazon EBS offering. +The Block Storage service does not provide a shared storage solution +like NFS. With the Block Storage service, you can attach a device to +only one instance. + +The Block Storage service provides: + +- ``cinder-api`` - a WSGI app that authenticates and routes requests + throughout the Block Storage service. It supports the OpenStack APIs + only, although there is a translation that can be done through + Compute's EC2 interface, which calls in to the Block Storage client. + +- ``cinder-scheduler`` - schedules and routes requests to the appropriate + volume service. Depending upon your configuration, this may be simple + round-robin scheduling to the running volume services, or it can be + more sophisticated through the use of the Filter Scheduler. The + Filter Scheduler is the default and enables filters on things like + Capacity, Availability Zone, Volume Types, and Capabilities as well + as custom filters. + +- ``cinder-volume`` - manages Block Storage devices, specifically the + back-end devices themselves. + +- ``cinder-backup`` - provides a means to back up a Block Storage volume to + OpenStack Object Storage (swift). + +The Block Storage service contains the following components: + +- **Back-end Storage Devices** - the Block Storage service requires some + form of back-end storage that the service is built on. The default + implementation is to use LVM on a local volume group named + "cinder-volumes." In addition to the base driver implementation, the + Block Storage service also provides the means to add support for + other storage devices to be utilized such as external Raid Arrays or + other storage appliances. These back-end storage devices may have + custom block sizes when using KVM or QEMU as the hypervisor. + +- **Users and Tenants (Projects)** - the Block Storage service can be + used by many different cloud computing consumers or customers + (tenants on a shared system), using role-based access assignments. + Roles control the actions that a user is allowed to perform. In the + default configuration, most actions do not require a particular role, + but this can be configured by the system administrator in the + appropriate ``policy.json`` file that maintains the rules. 
A user's + access to particular volumes is limited by tenant, but the user name + and password are assigned per user. Key pairs granting access to a + volume are enabled per user, but quotas to control resource + consumption across available hardware resources are per tenant. + + For tenants, quota controls are available to limit: + + - The number of volumes that can be created. + + - The number of snapshots that can be created. + + - The total number of GBs allowed per tenant (shared between + snapshots and volumes). + + You can revise the default quota values with the Block Storage CLI, + so the limits placed by quotas are editable by admin users. + +- **Volumes, Snapshots, and Backups** - the basic resources offered by + the Block Storage service are volumes and snapshots which are derived + from volumes and volume backups: + + - **Volumes** - allocated block storage resources that can be + attached to instances as secondary storage or they can be used as + the root store to boot instances. Volumes are persistent R/W block + storage devices most commonly attached to the compute node through + iSCSI. + + - **Snapshots** - a read-only point in time copy of a volume. The + snapshot can be created from a volume that is currently in use + (through the use of ``--force True``) or in an available state. + The snapshot can then be used to create a new volume through + create from snapshot. + + - **Backups** - an archived copy of a volume currently stored in + Object Storage (swift). diff --git a/doc/source/config-reference/block-storage/config-options.rst b/doc/source/config-reference/block-storage/config-options.rst new file mode 100644 index 00000000000..ff2e5269d18 --- /dev/null +++ b/doc/source/config-reference/block-storage/config-options.rst @@ -0,0 +1,35 @@ +================== +Additional options +================== + +These options can also be set in the ``cinder.conf`` file. + +.. include:: ../tables/cinder-api.rst +.. include:: ../tables/cinder-auth.rst +.. include:: ../tables/cinder-backups.rst +.. include:: ../tables/cinder-block-device.rst +.. include:: ../tables/cinder-common.rst +.. include:: ../tables/cinder-compute.rst +.. include:: ../tables/cinder-coordination.rst +.. include:: ../tables/cinder-debug.rst +.. include:: ../tables/cinder-drbd.rst +.. include:: ../tables/cinder-emc.rst +.. include:: ../tables/cinder-eternus.rst +.. include:: ../tables/cinder-flashsystem.rst +.. include:: ../tables/cinder-hgst.rst +.. include:: ../tables/cinder-hpelefthand.rst +.. include:: ../tables/cinder-hpexp.rst +.. include:: ../tables/cinder-huawei.rst +.. include:: ../tables/cinder-hyperv.rst +.. include:: ../tables/cinder-images.rst +.. include:: ../tables/cinder-nas.rst +.. include:: ../tables/cinder-profiler.rst +.. include:: ../tables/cinder-pure.rst +.. include:: ../tables/cinder-quota.rst +.. include:: ../tables/cinder-redis.rst +.. include:: ../tables/cinder-san.rst +.. include:: ../tables/cinder-scheduler.rst +.. include:: ../tables/cinder-scst.rst +.. include:: ../tables/cinder-storage.rst +.. include:: ../tables/cinder-tegile.rst +.. 
include:: ../tables/cinder-zones.rst diff --git a/doc/source/config-reference/block-storage/drivers/blockbridge-eps-driver.rst b/doc/source/config-reference/block-storage/drivers/blockbridge-eps-driver.rst new file mode 100644 index 00000000000..399486cf768 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/blockbridge-eps-driver.rst @@ -0,0 +1,244 @@ +=============== +Blockbridge EPS +=============== + +Introduction +~~~~~~~~~~~~ + +Blockbridge is software that transforms commodity infrastructure into +secure multi-tenant storage that operates as a programmable service. It +provides automatic encryption, secure deletion, quality of service (QoS), +replication, and programmable security capabilities on your choice of +hardware. Blockbridge uses micro-segmentation to provide isolation that allows +you to concurrently operate OpenStack, Docker, and bare-metal workflows on +shared resources. When used with OpenStack, isolated management domains are +dynamically created on a per-project basis. All volumes and clones, within and +between projects, are automatically cryptographically isolated and implement +secure deletion. + +Architecture reference +~~~~~~~~~~~~~~~~~~~~~~ + +**Blockbridge architecture** + +.. figure:: ../../figures/bb-cinder-fig1.png + :width: 100% + + +Control paths +------------- + +The Blockbridge driver is packaged with the core distribution of +OpenStack. Operationally, it executes in the context of the Block +Storage service. The driver communicates with an OpenStack-specific API +provided by the Blockbridge EPS platform. Blockbridge optionally +communicates with Identity, Compute, and Block Storage +services. + +Block storage API +----------------- + +Blockbridge is API driven software-defined storage. The system +implements a native HTTP API that is tailored to the specific needs of +OpenStack. Each Block Storage service operation maps to a single +back-end API request that provides ACID semantics. The API is +specifically designed to reduce, if not eliminate, the possibility of +inconsistencies between the Block Storage service and external storage +infrastructure in the event of hardware, software or data center +failure. + +Extended management +------------------- + +OpenStack users may utilize Blockbridge interfaces to manage +replication, auditing, statistics, and performance information on a +per-project and per-volume basis. In addition, they can manage low-level +data security functions including verification of data authenticity and +encryption key delegation. Native integration with the Identity Service +allows tenants to use a single set of credentials. Integration with +Block storage and Compute services provides dynamic metadata mapping +when using Blockbridge management APIs and tools. + +Attribute-based provisioning +---------------------------- + +Blockbridge organizes resources using descriptive identifiers called +*attributes*. Attributes are assigned by administrators of the +infrastructure. They are used to describe the characteristics of storage +in an application-friendly way. Applications construct queries that +describe storage provisioning constraints and the Blockbridge storage +stack assembles the resources as described. + +Any given instance of a Blockbridge volume driver specifies a *query* +for resources. For example, a query could specify +``'+ssd +10.0.0.0 +6nines -production iops.reserve=1000 +capacity.reserve=30%'``. 
This query is satisfied by selecting SSD +resources, accessible on the 10.0.0.0 network, with high resiliency, for +non-production workloads, with guaranteed IOPS of 1000 and a storage +reservation for 30% of the volume capacity specified at create time. +Queries and parameters are completely administrator defined: they +reflect the layout, resource, and organizational goals of a specific +deployment. + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +- Create, delete, clone, attach, and detach volumes +- Create and delete volume snapshots +- Create a volume from a snapshot +- Copy an image to a volume +- Copy a volume to an image +- Extend a volume +- Get volume statistics + +Supported protocols +~~~~~~~~~~~~~~~~~~~ + +Blockbridge provides iSCSI access to storage. A unique iSCSI data fabric +is programmatically assembled when a volume is attached to an instance. +A fabric is disassembled when a volume is detached from an instance. +Each volume is an isolated SCSI device that supports persistent +reservations. + +Configuration steps +~~~~~~~~~~~~~~~~~~~ + +.. _cg_create_an_authentication_token: + +Create an authentication token +------------------------------ + +Whenever possible, avoid using password-based authentication. Even if +you have created a role-restricted administrative user via Blockbridge, +token-based authentication is preferred. You can generate persistent +authentication tokens using the Blockbridge command-line tool as +follows: + +.. code-block:: console + + $ bb -H bb-mn authorization create --notes "OpenStack" --restrict none + Authenticating to https://bb-mn/api + + Enter user or access token: system + Password for system: + Authenticated; token expires in 3599 seconds. + + == Authorization: ATH4762894C40626410 + notes OpenStack + serial ATH4762894C40626410 + account system (ACT0762594C40626440) + user system (USR1B62094C40626440) + enabled yes + created at 2015-10-24 22:08:48 +0000 + access type online + token suffix xaKUy3gw + restrict none + + == Access Token + access token 1/elvMWilMvcLAajl...3ms3U1u2KzfaMw6W8xaKUy3gw + + *** Remember to record your access token! + +Create volume type +------------------ + +Before configuring and enabling the Blockbridge volume driver, register +an OpenStack volume type and associate it with a +``volume_backend_name``. In this example, a volume type, 'Production', +is associated with the ``volume_backend_name`` 'blockbridge\_prod': + +.. code-block:: console + + $ openstack volume type create Production + $ openstack volume type set --property volume_backend_name=blockbridge_prod Production + +Specify volume driver +--------------------- + +Configure the Blockbridge volume driver in ``/etc/cinder/cinder.conf``. +Your ``volume_backend_name`` must match the value specified in the +:command:`openstack volume type set` command in the previous step. + +.. code-block:: ini + + volume_driver = cinder.volume.drivers.blockbridge.BlockbridgeISCSIDriver + volume_backend_name = blockbridge_prod + +Specify API endpoint and authentication +--------------------------------------- + +Configure the API endpoint and authentication. The following example +uses an authentication token. You must create your own as described in +:ref:`cg_create_an_authentication_token`. + +.. 
code-block:: ini + + blockbridge_api_host = [ip or dns of management cluster] + blockbridge_auth_token = 1/elvMWilMvcLAajl...3ms3U1u2KzfaMw6W8xaKUy3gw + +Specify resource query +---------------------- + +By default, a single pool is configured (implied) with a default +resource query of ``'+openstack'``. Within Blockbridge, datastore +resources that advertise the 'openstack' attribute will be selected to +fulfill OpenStack provisioning requests. If you prefer a more specific +query, define a custom pool configuration. + +.. code-block:: ini + + blockbridge_pools = Production: +production +qos iops.reserve=5000 + +Pools support storage systems that offer multiple classes of service. +You may wish to configure multiple pools to implement more sophisticated +scheduling capabilities. + +Configuration options +~~~~~~~~~~~~~~~~~~~~~ + +.. include:: ../../tables/cinder-blockbridge.rst + +.. _cg_configuration_example: + +Configuration example +~~~~~~~~~~~~~~~~~~~~~ + +``cinder.conf`` example file + +.. code-block:: ini + + [Default] + enabled_backends = bb_devel bb_prod + + [bb_prod] + volume_driver = cinder.volume.drivers.blockbridge.BlockbridgeISCSIDriver + volume_backend_name = blockbridge_prod + blockbridge_api_host = [ip or dns of management cluster] + blockbridge_auth_token = 1/elvMWilMvcLAajl...3ms3U1u2KzfaMw6W8xaKUy3gw + blockbridge_pools = Production: +production +qos iops.reserve=5000 + + [bb_devel] + volume_driver = cinder.volume.drivers.blockbridge.BlockbridgeISCSIDriver + volume_backend_name = blockbridge_devel + blockbridge_api_host = [ip or dns of management cluster] + blockbridge_auth_token = 1/elvMWilMvcLAajl...3ms3U1u2KzfaMw6W8xaKUy3gw + blockbridge_pools = Development: +development + +Multiple volume types +~~~~~~~~~~~~~~~~~~~~~ + +Volume *types* are exposed to tenants, *pools* are not. To offer +multiple classes of storage to OpenStack tenants, you should define +multiple volume types. Simply repeat the process above for each desired +type. Be sure to specify a unique ``volume_backend_name`` and pool +configuration for each type. The +:ref:`cinder.conf ` example included with +this documentation illustrates configuration of multiple types. + +Testing resources +~~~~~~~~~~~~~~~~~ + +Blockbridge is freely available for testing purposes and deploys in +seconds as a Docker container. This is the same container used to run +continuous integration for OpenStack. For more information visit +`www.blockbridge.io `__. diff --git a/doc/source/config-reference/block-storage/drivers/ceph-rbd-volume-driver.rst b/doc/source/config-reference/block-storage/drivers/ceph-rbd-volume-driver.rst new file mode 100644 index 00000000000..ef7517ead40 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/ceph-rbd-volume-driver.rst @@ -0,0 +1,109 @@ +============================= +Ceph RADOS Block Device (RBD) +============================= + +If you use KVM or QEMU as your hypervisor, you can configure the Compute +service to use `Ceph RADOS block devices +(RBD) `__ for volumes. + +Ceph is a massively scalable, open source, distributed storage system. +It is comprised of an object store, block store, and a POSIX-compliant +distributed file system. The platform can auto-scale to the exabyte +level and beyond. It runs on commodity hardware, is self-healing and +self-managing, and has no single point of failure. Ceph is in the Linux +kernel and is integrated with the OpenStack cloud operating system. 
Due +to its open-source nature, you can install and use this portable storage +platform in public or private clouds. + +.. figure:: ../../figures/ceph-architecture.png + + Ceph architecture + +RADOS +~~~~~ + +Ceph is based on Reliable Autonomic Distributed Object Store (RADOS). +RADOS distributes objects across the storage cluster and replicates +objects for fault tolerance. RADOS contains the following major +components: + +*Object Storage Device (OSD) Daemon* + The storage daemon for the RADOS service, which interacts with the + OSD (physical or logical storage unit for your data). + You must run this daemon on each server in your cluster. For each + OSD, you can have an associated hard drive disk. For performance + purposes, pool your hard drive disk with raid arrays, logical volume + management (LVM), or B-tree file system (Btrfs) pooling. By default, + the following pools are created: data, metadata, and RBD. + +*Meta-Data Server (MDS)* + Stores metadata. MDSs build a POSIX file + system on top of objects for Ceph clients. However, if you do not use + the Ceph file system, you do not need a metadata server. + +*Monitor (MON)* + A lightweight daemon that handles all communications + with external applications and clients. It also provides a consensus + for distributed decision making in a Ceph/RADOS cluster. For + instance, when you mount a Ceph shared on a client, you point to the + address of a MON server. It checks the state and the consistency of + the data. In an ideal setup, you must run at least three ``ceph-mon`` + daemons on separate servers. + +Ceph developers recommend XFS for production deployments, Btrfs for +testing, development, and any non-critical deployments. Btrfs has the +correct feature set and roadmap to serve Ceph in the long-term, but XFS +and ext4 provide the necessary stability for today’s deployments. + +.. note:: + + If using Btrfs, ensure that you use the correct version (see `Ceph + Dependencies `__). + + For more information about usable file systems, see + `ceph.com/ceph-storage/file-system/ `__. + +Ways to store, use, and expose data +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +To store and access your data, you can use the following storage +systems: + +*RADOS* + Use as an object, default storage mechanism. + +*RBD* + Use as a block device. The Linux kernel RBD (RADOS block + device) driver allows striping a Linux block device over multiple + distributed object store data objects. It is compatible with the KVM + RBD image. + +*CephFS* + Use as a file, POSIX-compliant file system. + +Ceph exposes RADOS; you can access it through the following interfaces: + +*RADOS Gateway* + OpenStack Object Storage and Amazon-S3 compatible + RESTful interface (see `RADOS_Gateway + `__). + +*librados* + and its related C/C++ bindings + +*RBD and QEMU-RBD* + Linux kernel and QEMU block devices that stripe + data across multiple objects. + +Driver options +~~~~~~~~~~~~~~ + +The following table contains the configuration options supported by the +Ceph RADOS Block Device driver. + +.. note:: + + The ``volume_tmp_dir`` option has been deprecated and replaced by + ``image_conversion_dir``. + +.. 
include:: ../../tables/cinder-storage_ceph.rst diff --git a/doc/source/config-reference/block-storage/drivers/cloudbyte-driver.rst b/doc/source/config-reference/block-storage/drivers/cloudbyte-driver.rst new file mode 100644 index 00000000000..da453a085e4 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/cloudbyte-driver.rst @@ -0,0 +1,8 @@ +======================= +CloudByte volume driver +======================= + +CloudByte Block Storage driver configuration +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. include:: ../../tables/cinder-cloudbyte.rst diff --git a/doc/source/config-reference/block-storage/drivers/coho-data-driver.rst b/doc/source/config-reference/block-storage/drivers/coho-data-driver.rst new file mode 100644 index 00000000000..4fad81f6cab --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/coho-data-driver.rst @@ -0,0 +1,93 @@ +======================= +Coho Data volume driver +======================= + +The Coho DataStream Scale-Out Storage allows your Block Storage service to +scale seamlessly. The architecture consists of commodity storage servers +with SDN ToR switches. Leveraging an SDN OpenFlow controller allows you +to scale storage horizontally, while avoiding storage and network bottlenecks +by intelligent load-balancing and parallelized workloads. High-performance +PCIe NVMe flash, paired with traditional hard disk drives (HDD) or solid-state +drives (SSD), delivers low-latency performance even with highly mixed workloads +in large scale environment. + +Coho Data's storage features include real-time instance level +granularity performance and capacity reporting via API or UI, and +single-IP storage endpoint access. + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +* Create, delete, attach, detach, retype, clone, and extend volumes. +* Create, list, and delete volume snapshots. +* Create a volume from a snapshot. +* Copy a volume to an image. +* Copy an image to a volume. +* Create a thin provisioned volume. +* Get volume statistics. + +Coho Data QoS support +~~~~~~~~~~~~~~~~~~~~~ + +QoS support for the Coho Data driver includes the ability to set the +following capabilities in the OpenStack Block Storage API +``cinder.api.contrib.qos_specs_manage`` QoS specs extension module: + +* **maxIOPS** - The maximum number of IOPS allowed for this volume. + +* **maxMBS** - The maximum throughput allowed for this volume. + +The QoS keys above must be created and associated with a volume type. +For information about how to set the key-value pairs and associate +them with a volume type, see the `volume qos +`_ +section in the OpenStackClient command list. + +.. note:: + + If you change a volume type with QoS to a new volume type + without QoS, the QoS configuration settings will be removed. + +System requirements +~~~~~~~~~~~~~~~~~~~ + +* NFS client on the Block storage controller. + +Coho Data Block Storage driver configuration +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +#. Create cinder volume type. + + .. code-block:: console + + $ openstack volume type create coho-1 + +#. Edit the OpenStack Block Storage service configuration file. + The following sample, ``/etc/cinder/cinder.conf``, configuration lists the + relevant settings for a typical Block Storage service using a single + Coho Data storage: + + .. 
code-block:: ini + + [DEFAULT] + enabled_backends = coho-1 + default_volume_type = coho-1 + + [coho-1] + volume_driver = cinder.volume.drivers.coho.CohoDriver + volume_backend_name = coho-1 + nfs_shares_config = /etc/cinder/coho_shares + nas_secure_file_operations = 'false' + +#. Add your list of Coho Datastream NFS addresses to the file you specified + with the ``nfs_shares_config`` option. For example, if the value of this + option was set to ``/etc/cinder/coho_shares``, then: + + .. code-block:: console + + $ cat /etc/cinder/coho_shares + :/ + +#. Restart the ``cinder-volume`` service to enable Coho Data driver. + +.. include:: ../../tables/cinder-coho.rst diff --git a/doc/source/config-reference/block-storage/drivers/coprhd-driver.rst b/doc/source/config-reference/block-storage/drivers/coprhd-driver.rst new file mode 100644 index 00000000000..a1e7ad78854 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/coprhd-driver.rst @@ -0,0 +1,318 @@ +===================================== +CoprHD FC, iSCSI, and ScaleIO drivers +===================================== + +CoprHD is an open source software-defined storage controller and API platform. +It enables policy-based management and cloud automation of storage resources +for block, object and file storage providers. +For more details, see `CoprHD `_. + +EMC ViPR Controller is the commercial offering of CoprHD. These same volume +drivers can also be considered as EMC ViPR Controller Block Storage drivers. + + +System requirements +~~~~~~~~~~~~~~~~~~~ + +CoprHD version 3.0 is required. Refer to the CoprHD documentation for +installation and configuration instructions. + +If you are using these drivers to integrate with EMC ViPR Controller, use +EMC ViPR Controller 3.0. + + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +The following operations are supported: + +- Create, delete, attach, detach, retype, clone, and extend volumes. +- Create, list, and delete volume snapshots. +- Create a volume from a snapshot. +- Copy a volume to an image. +- Copy an image to a volume. +- Clone a volume. +- Extend a volume. +- Retype a volume. +- Get volume statistics. +- Create, delete, and update consistency groups. +- Create and delete consistency group snapshots. + + +Driver options +~~~~~~~~~~~~~~ + +The following table contains the configuration options specific to the +CoprHD volume driver. + +.. include:: ../../tables/cinder-coprhd.rst + + +Preparation +~~~~~~~~~~~ + +This involves setting up the CoprHD environment first and then configuring +the CoprHD Block Storage driver. + +CoprHD +------ + +The CoprHD environment must meet specific configuration requirements to +support the OpenStack Block Storage driver. + +- CoprHD users must be assigned a Tenant Administrator role or a Project + Administrator role for the Project being used. CoprHD roles are configured + by CoprHD Security Administrators. Consult the CoprHD documentation for + details. + +- A CorprHD system administrator must execute the following configurations + using the CoprHD UI, CoprHD API, or CoprHD CLI: + + - Create CoprHD virtual array + - Create CoprHD virtual storage pool + - Virtual Array designated for iSCSI driver must have an IP network created + with appropriate IP storage ports + - Designated tenant for use + - Designated project for use + +.. note:: Use each back end to manage one virtual array and one virtual + storage pool. However, the user can have multiple instances of + CoprHD Block Storage driver, sharing the same virtual array and virtual + storage pool. 
+ +- A typical CoprHD virtual storage pool will have the following values + specified: + + - Storage Type: Block + - Provisioning Type: Thin + - Protocol: iSCSI/Fibre Channel(FC)/ScaleIO + - Multi-Volume Consistency: DISABLED OR ENABLED + - Maximum Native Snapshots: A value greater than 0 allows the OpenStack user + to take Snapshots + + +CoprHD drivers - Single back end +-------------------------------- + +**cinder.conf** + +#. Modify ``/etc/cinder/cinder.conf`` by adding the following lines, + substituting values for your environment: + + .. code-block:: ini + + [coprhd-iscsi] + volume_driver = cinder.volume.drivers.coprhd.iscsi.EMCCoprHDISCSIDriver + volume_backend_name = coprhd-iscsi + coprhd_hostname = + coprhd_port = 4443 + coprhd_username = + coprhd_password = + coprhd_tenant = + coprhd_project = + coprhd_varray = + coprhd_emulate_snapshot = True or False, True if the CoprHD vpool has VMAX or VPLEX as the backing storage + +#. If you use the ScaleIO back end, add the following lines: + + .. code-block:: ini + + coprhd_scaleio_rest_gateway_host = + coprhd_scaleio_rest_gateway_port = 443 + coprhd_scaleio_rest_server_username = + coprhd_scaleio_rest_server_password = + scaleio_verify_server_certificate = True or False + scaleio_server_certificate_path = + +#. Specify the driver using the ``enabled_backends`` parameter:: + + enabled_backends = coprhd-iscsi + + .. note:: To utilize the Fibre Channel driver, replace the + ``volume_driver`` line above with:: + + volume_driver = cinder.volume.drivers.coprhd.fc.EMCCoprHDFCDriver + + .. note:: To utilize the ScaleIO driver, replace the ``volume_driver`` line + above with:: + + volume_driver = cinder.volume.drivers.coprhd.fc.EMCCoprHDScaleIODriver + + .. note:: Set ``coprhd_emulate_snapshot`` to True if the CoprHD vpool has + VMAX or VPLEX as the back-end storage. For these type of back-end + storages, when a user tries to create a snapshot, an actual volume + gets created in the back end. + +#. Modify the ``rpc_response_timeout`` value in ``/etc/cinder/cinder.conf`` to + at least 5 minutes. If this entry does not already exist within the + ``cinder.conf`` file, add it in the ``[DEFAULT]`` section: + + .. code-block:: ini + + [DEFAULT] + # ... + rpc_response_timeout = 300 + +#. Now, restart the ``cinder-volume`` service. + +**Volume type creation and extra specs** + +#. Create OpenStack volume types: + + .. code-block:: console + + $ openstack volume type create + +#. Map the OpenStack volume type to the CoprHD virtual pool: + + .. code-block:: console + + $ openstack volume type set --property CoprHD:VPOOL= + +#. Map the volume type created to appropriate back-end driver: + + .. code-block:: console + + $ openstack volume type set --property volume_backend_name= + + +CoprHD drivers - Multiple back-ends +----------------------------------- + +**cinder.conf** + +#. Add or modify the following entries if you are planning to use multiple + back-end drivers: + + .. code-block:: ini + + enabled_backends = coprhddriver-iscsi,coprhddriver-fc,coprhddriver-scaleio + +#. Add the following at the end of the file: + + .. 
code-block:: ini + + [coprhddriver-iscsi] + volume_driver = cinder.volume.drivers.coprhd.iscsi.EMCCoprHDISCSIDriver + volume_backend_name = EMCCoprHDISCSIDriver + coprhd_hostname = + coprhd_port = 4443 + coprhd_username = + coprhd_password = + coprhd_tenant = + coprhd_project = + coprhd_varray = + + + [coprhddriver-fc] + volume_driver = cinder.volume.drivers.coprhd.fc.EMCCoprHDFCDriver + volume_backend_name = EMCCoprHDFCDriver + coprhd_hostname = + coprhd_port = 4443 + coprhd_username = + coprhd_password = + coprhd_tenant = + coprhd_project = + coprhd_varray = + + + [coprhddriver-scaleio] + volume_driver = cinder.volume.drivers.coprhd.scaleio.EMCCoprHDScaleIODriver + volume_backend_name = EMCCoprHDScaleIODriver + coprhd_hostname = + coprhd_port = 4443 + coprhd_username = + coprhd_password = + coprhd_tenant = + coprhd_project = + coprhd_varray = + coprhd_scaleio_rest_gateway_host = + coprhd_scaleio_rest_gateway_port = 443 + coprhd_scaleio_rest_server_username = + coprhd_scaleio_rest_server_password = + scaleio_verify_server_certificate = True or False + scaleio_server_certificate_path = + + +#. Restart the ``cinder-volume`` service. + + +**Volume type creation and extra specs** + +Setup the ``volume-types`` and ``volume-type`` to ``volume-backend`` +association: + +.. code-block:: console + + $ openstack volume type create "CoprHD High Performance ISCSI" + $ openstack volume type set "CoprHD High Performance ISCSI" --property CoprHD:VPOOL="High Performance ISCSI" + $ openstack volume type set "CoprHD High Performance ISCSI" --property volume_backend_name= EMCCoprHDISCSIDriver + + $ openstack volume type create "CoprHD High Performance FC" + $ openstack volume type set "CoprHD High Performance FC" --property CoprHD:VPOOL="High Performance FC" + $ openstack volume type set "CoprHD High Performance FC" --property volume_backend_name= EMCCoprHDFCDriver + + $ openstack volume type create "CoprHD performance SIO" + $ openstack volume type set "CoprHD performance SIO" --property CoprHD:VPOOL="Scaled Perf" + $ openstack volume type set "CoprHD performance SIO" --property volume_backend_name= EMCCoprHDScaleIODriver + + +ISCSI driver notes +~~~~~~~~~~~~~~~~~~ + +* The compute host must be added to the CoprHD along with its ISCSI + initiator. +* The ISCSI initiator must be associated with IP network on the CoprHD. + + +FC driver notes +~~~~~~~~~~~~~~~ + +* The compute host must be attached to a VSAN or fabric discovered + by CoprHD. +* There is no need to perform any SAN zoning operations. CoprHD will perform + the necessary operations automatically as part of the provisioning process. + + +ScaleIO driver notes +~~~~~~~~~~~~~~~~~~~~ + +* Install the ScaleIO SDC on the compute host. +* The compute host must be added as the SDC to the ScaleIO MDS + using the below commands:: + + /opt/emc/scaleio/sdc/bin/drv_cfg --add_mdm --ip List of MDM IPs + (starting with primary MDM and separated by comma) + Example: + /opt/emc/scaleio/sdc/bin/drv_cfg --add_mdm --ip + 10.247.78.45,10.247.78.46,10.247.78.47 + +This step has to be repeated whenever the SDC (compute host in this case) +is rebooted. + + +Consistency group configuration +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +To enable the support of consistency group and consistency group snapshot +operations, use a text editor to edit the file ``/etc/cinder/policy.json`` and +change the values of the below fields as specified. 
Upon editing the file, +restart the ``c-api`` service:: + + "consistencygroup:create" : "", + "consistencygroup:delete": "", + "consistencygroup:get": "", + "consistencygroup:get_all": "", + "consistencygroup:update": "", + "consistencygroup:create_cgsnapshot" : "", + "consistencygroup:delete_cgsnapshot": "", + "consistencygroup:get_cgsnapshot": "", + "consistencygroup:get_all_cgsnapshots": "", + + +Names of resources in back-end storage +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +All the resources like volume, consistency group, snapshot, and consistency +group snapshot will use the display name in OpenStack for naming in the +back-end storage. diff --git a/doc/source/config-reference/block-storage/drivers/datera-volume-driver.rst b/doc/source/config-reference/block-storage/drivers/datera-volume-driver.rst new file mode 100644 index 00000000000..b32eabd3bf0 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/datera-volume-driver.rst @@ -0,0 +1,170 @@ +============== +Datera drivers +============== + +Datera iSCSI driver +------------------- + +The Datera Elastic Data Fabric (EDF) is a scale-out storage software that +turns standard, commodity hardware into a RESTful API-driven, intent-based +policy controlled storage fabric for large-scale clouds. The Datera EDF +integrates seamlessly with the Block Storage service. It provides storage +through the iSCSI block protocol framework over the iSCSI block protocol. +Datera supports all of the Block Storage services. + +System requirements, prerequisites, and recommendations +------------------------------------------------------- + +Prerequisites +~~~~~~~~~~~~~ + +* Must be running compatible versions of OpenStack and Datera EDF. + Please visit `here `_ to determine the + correct version. + +* All nodes must have access to Datera EDF through the iSCSI block protocol. + +* All nodes accessing the Datera EDF must have the following packages + installed: + + * Linux I/O (LIO) + * open-iscsi + * open-iscsi-utils + * wget + +.. include:: ../../tables/cinder-datera.rst + + + +Configuring the Datera volume driver +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Modify the ``/etc/cinder/cinder.conf`` file for Block Storage service. + +* Enable the Datera volume driver: + +.. code-block:: ini + + [DEFAULT] + # ... + enabled_backends = datera + # ... + +* Optional. Designate Datera as the default back-end: + +.. code-block:: ini + + default_volume_type = datera + +* Create a new section for the Datera back-end definition. The ``san_ip`` can + be either the Datera Management Network VIP or one of the Datera iSCSI + Access Network VIPs depending on the network segregation requirements: + +.. code-block:: ini + + volume_driver = cinder.volume.drivers.datera.DateraDriver + san_ip = # The OOB Management IP of the cluster + san_login = admin # Your cluster admin login + san_password = password # Your cluster admin password + san_is_local = true + datera_num_replicas = 3 # Number of replicas to use for volume + +Enable the Datera volume driver +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +* Verify the OpenStack control node can reach the Datera ``san_ip``: + +.. code-block:: bash + + $ ping -c 4 + +* Start the Block Storage service on all nodes running the ``cinder-volume`` + services: + +.. 
code-block:: bash + + $ service cinder-volume restart + +QoS support for the Datera drivers includes the ability to set the +following capabilities in QoS Specs + +* **read_iops_max** -- must be positive integer + +* **write_iops_max** -- must be positive integer + +* **total_iops_max** -- must be positive integer + +* **read_bandwidth_max** -- in KB per second, must be positive integer + +* **write_bandwidth_max** -- in KB per second, must be positive integer + +* **total_bandwidth_max** -- in KB per second, must be positive integer + +.. code-block:: bash + + # Create qos spec + $ openstack volume qos create --property total_iops_max=1000 total_bandwidth_max=2000 DateraBronze + + # Associate qos-spec with volume type + $ openstack volume qos associate DateraBronze VOLUME_TYPE + + # Add additional qos values or update existing ones + $ openstack volume qos set --property read_bandwidth_max=500 DateraBronze + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +* Create, delete, attach, detach, manage, unmanage, and list volumes. + +* Create, list, and delete volume snapshots. + +* Create a volume from a snapshot. + +* Copy an image to a volume. + +* Copy a volume to an image. + +* Clone a volume. + +* Extend a volume. + +* Support for naming convention changes. + +Configuring multipathing +~~~~~~~~~~~~~~~~~~~~~~~~ + +The following configuration is for 3.X Linux kernels, some parameters in +different Linux distributions may be different. Make the following changes +in the ``multipath.conf`` file: + +.. code-block:: text + + defaults { + checker_timer 5 + } + devices { + device { + vendor "DATERA" + product "IBLOCK" + getuid_callout "/lib/udev/scsi_id --whitelisted – + replace-whitespace --page=0x80 --device=/dev/%n" + path_grouping_policy group_by_prio + path_checker tur + prio alua + path_selector "queue-length 0" + hardware_handler "1 alua" + failback 5 + } + } + blacklist { + device { + vendor ".*" + product ".*" + } + } + blacklist_exceptions { + device { + vendor "DATERA.*" + product "IBLOCK.*" + } + } + diff --git a/doc/source/config-reference/block-storage/drivers/dell-emc-scaleio-driver.rst b/doc/source/config-reference/block-storage/drivers/dell-emc-scaleio-driver.rst new file mode 100644 index 00000000000..1e4cf68407d --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/dell-emc-scaleio-driver.rst @@ -0,0 +1,319 @@ +===================================== +Dell EMC ScaleIO Block Storage driver +===================================== + +ScaleIO is a software-only solution that uses existing servers' local +disks and LAN to create a virtual SAN that has all of the benefits of +external storage, but at a fraction of the cost and complexity. Using the +driver, Block Storage hosts can connect to a ScaleIO Storage +cluster. + +This section explains how to configure and connect the block storage +nodes to a ScaleIO storage cluster. + +Support matrix +~~~~~~~~~~~~~~ + +.. list-table:: + :widths: 10 25 + :header-rows: 1 + + * - ScaleIO version + - Supported Linux operating systems + * - 2.0 + - CentOS 6.x, CentOS 7.x, SLES 11 SP3, SLES 12, Ubuntu 14.04, Ubuntu 16.04 + +Deployment prerequisites +~~~~~~~~~~~~~~~~~~~~~~~~ + +* ScaleIO Gateway must be installed and accessible in the network. + For installation steps, refer to the Preparing the installation Manager + and the Gateway section in ScaleIO Deployment Guide. See + :ref:`scale_io_docs`. + +* ScaleIO Data Client (SDC) must be installed on all OpenStack nodes. + +.. 
note:: Ubuntu users must follow the specific instructions in the ScaleIO + deployment guide for Ubuntu environments. See the Deploying on + Ubuntu servers section in ScaleIO Deployment Guide. See + :ref:`scale_io_docs`. + +.. _scale_io_docs: + +Official documentation +---------------------- + +To find the ScaleIO documentation: + +#. Go to the `ScaleIO product documentation page `_. + +#. From the left-side panel, select the relevant version. + +#. Search for "ScaleIO 2.0 Deployment Guide". + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +* Create, delete, clone, attach, detach, manage, and unmanage volumes + +* Create, delete, manage, and unmanage volume snapshots + +* Create a volume from a snapshot + +* Copy an image to a volume + +* Copy a volume to an image + +* Extend a volume + +* Get volume statistics + +* Create, list, update, and delete consistency groups + +* Create, list, update, and delete consistency group snapshots + +ScaleIO QoS support +~~~~~~~~~~~~~~~~~~~~ + +QoS support for the ScaleIO driver includes the ability to set the +following capabilities in the Block Storage API +``cinder.api.contrib.qos_specs_manage`` QoS specs extension module: + +* ``maxIOPS`` + +* ``maxIOPSperGB`` + +* ``maxBWS`` + +* ``maxBWSperGB`` + +The QoS keys above must be created and associated with a volume type. +For information about how to set the key-value pairs and associate +them with a volume type, run the following commands: + +.. code-block:: console + + $ openstack help volume qos + +``maxIOPS`` + The QoS I/O rate limit. If not set, the I/O rate will be unlimited. + The setting must be larger than 10. + +``maxIOPSperGB`` + The QoS I/O rate limit. + The limit will be calculated by the specified value multiplied by + the volume size. + The setting must be larger than 10. + +``maxBWS`` + The QoS I/O bandwidth rate limit in KBs. If not set, the I/O + bandwidth rate will be unlimited. The setting must be a multiple of 1024. + +``maxBWSperGB`` + The QoS I/O bandwidth rate limit in KBs. + The limit will be calculated by the specified value multiplied by + the volume size. + The setting must be a multiple of 1024. + +The driver always chooses the minimum between the QoS keys value +and the relevant calculated value of ``maxIOPSperGB`` or ``maxBWSperGB``. + +Since the limits are per SDC, they will be applied after the volume +is attached to an instance, and thus to a compute node/SDC. + +ScaleIO thin provisioning support +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The Block Storage driver supports creation of thin-provisioned and +thick-provisioned volumes. +The provisioning type settings can be added as an extra specification +of the volume type, as follows: + +.. code-block:: ini + + provisioning:type = thin\thick + +The old specification: ``sio:provisioning_type`` is deprecated. + +Oversubscription +---------------- + +Configure the oversubscription ratio by adding the following parameter +under the separate section for ScaleIO: + +.. code-block:: ini + + sio_max_over_subscription_ratio = OVER_SUBSCRIPTION_RATIO + +.. note:: + + The default value for ``sio_max_over_subscription_ratio`` + is 10.0. + +Oversubscription is calculated correctly by the Block Storage service +only if the extra specification ``provisioning:type`` +appears in the volume type regardless to the default provisioning type. +Maximum oversubscription value supported for ScaleIO is 10.0. 
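+
+As an illustration (the ``scaleio-thin`` volume type name is hypothetical),
+the ``provisioning:type`` extra specification can be attached to a volume
+type so that oversubscription is calculated correctly:
+
+.. code-block:: console
+
+   $ openstack volume type create scaleio-thin
+   $ openstack volume type set --property provisioning:type=thin scaleio-thin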
+ +Default provisioning type +------------------------- + +If provisioning type settings are not specified in the volume type, +the default value is set according to the ``san_thin_provision`` +option in the configuration file. The default provisioning type +will be ``thin`` if the option is not specified in the configuration +file. To set the default provisioning type ``thick``, set +the ``san_thin_provision`` option to ``false`` +in the configuration file, as follows: + +.. code-block:: ini + + san_thin_provision = false + +The configuration file is usually located in +``/etc/cinder/cinder.conf``. +For a configuration example, see: +:ref:`cinder.conf `. + +ScaleIO Block Storage driver configuration +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Edit the ``cinder.conf`` file by adding the configuration below under +a new section (for example, ``[scaleio]``) and change the ``enable_backends`` +setting (in the ``[DEFAULT]`` section) to include this new back end. +The configuration file is usually located at +``/etc/cinder/cinder.conf``. + +For a configuration example, refer to the example +:ref:`cinder.conf ` . + +ScaleIO driver name +------------------- + +Configure the driver name by adding the following parameter: + +.. code-block:: ini + + volume_driver = cinder.volume.drivers.dell_emc.scaleio.driver.ScaleIODriver + +ScaleIO MDM server IP +--------------------- + +The ScaleIO Meta Data Manager monitors and maintains the available +resources and permissions. + +To retrieve the MDM server IP address, use the :command:`drv_cfg --query_mdms` +command. + +Configure the MDM server IP address by adding the following parameter: + +.. code-block:: ini + + san_ip = ScaleIO GATEWAY IP + +ScaleIO Protection Domain name +------------------------------ + +ScaleIO allows multiple Protection Domains (groups of SDSs that provide +backup for each other). + +To retrieve the available Protection Domains, use the command +:command:`scli --query_all` and search for the Protection +Domains section. + +Configure the Protection Domain for newly created volumes by adding the +following parameter: + +.. code-block:: ini + + sio_protection_domain_name = ScaleIO Protection Domain + +ScaleIO Storage Pool name +------------------------- + +A ScaleIO Storage Pool is a set of physical devices in a Protection +Domain. + +To retrieve the available Storage Pools, use the command +:command:`scli --query_all` and search for available Storage Pools. + +Configure the Storage Pool for newly created volumes by adding the +following parameter: + +.. code-block:: ini + + sio_storage_pool_name = ScaleIO Storage Pool + +ScaleIO Storage Pools +--------------------- + +Multiple Storage Pools and Protection Domains can be listed for use by +the virtual machines. + +To retrieve the available Storage Pools, use the command +:command:`scli --query_all` and search for available Storage Pools. + +Configure the available Storage Pools by adding the following parameter: + +.. code-block:: ini + + sio_storage_pools = Comma-separated list of protection domain:storage pool name + +ScaleIO user credentials +------------------------ + +Block Storage requires a ScaleIO user with administrative +privileges. ScaleIO recommends creating a dedicated OpenStack user +account that has an administrative user role. + +Refer to the ScaleIO User Guide for details on user account management. + +Configure the user credentials by adding the following parameters: + +.. 
+
+   san_login = ScaleIO username
+
+   san_password = ScaleIO password
+
+Multiple back ends
+~~~~~~~~~~~~~~~~~~
+
+Configuring multiple storage back ends allows you to create several back-end
+storage solutions that serve the same Compute resources.
+
+When a volume is created, the scheduler selects the appropriate back end
+to handle the request, according to the specified volume type.
+
+.. _cg_configuration_example_emc:
+
+Configuration example
+~~~~~~~~~~~~~~~~~~~~~
+
+**cinder.conf example file**
+
+You can update the ``cinder.conf`` file by editing the necessary
+parameters as follows:
+
+.. code-block:: ini
+
+   [DEFAULT]
+   enabled_backends = scaleio
+
+   [scaleio]
+   volume_driver = cinder.volume.drivers.dell_emc.scaleio.driver.ScaleIODriver
+   volume_backend_name = scaleio
+   san_ip = GATEWAY_IP
+   sio_protection_domain_name = Default_domain
+   sio_storage_pool_name = Default_pool
+   sio_storage_pools = Domain1:Pool1,Domain2:Pool2
+   san_login = SIO_USER
+   san_password = SIO_PASSWD
+   san_thin_provision = false
+
+Configuration options
+~~~~~~~~~~~~~~~~~~~~~
+
+The ScaleIO driver supports these configuration options:
+
+.. include:: ../../tables/cinder-emc_sio.rst
diff --git a/doc/source/config-reference/block-storage/drivers/dell-emc-unity-driver.rst b/doc/source/config-reference/block-storage/drivers/dell-emc-unity-driver.rst
new file mode 100644
index 00000000000..fc1d02c0d02
--- /dev/null
+++ b/doc/source/config-reference/block-storage/drivers/dell-emc-unity-driver.rst
@@ -0,0 +1,339 @@
+=====================
+Dell EMC Unity driver
+=====================
+
+The Unity driver has been integrated with the OpenStack Block Storage project
+since the Ocata release. The driver is built on top of the Block Storage
+framework and a Dell EMC distributed Python package
+`storops `_.
+
+Prerequisites
+~~~~~~~~~~~~~
+
++-------------------+----------------+
+| Software          | Version        |
++===================+================+
+| Unity OE          | 4.1.X          |
++-------------------+----------------+
+| OpenStack         | Ocata          |
++-------------------+----------------+
+| storops           | 0.4.2 or newer |
++-------------------+----------------+
+
+
+Supported operations
+~~~~~~~~~~~~~~~~~~~~
+
+- Create, delete, attach, and detach volumes.
+- Create, list, and delete volume snapshots.
+- Create a volume from a snapshot.
+- Copy an image to a volume.
+- Create an image from a volume.
+- Clone a volume.
+- Extend a volume.
+- Migrate a volume.
+- Get volume statistics.
+- Efficient non-disruptive volume backup.
+
+Driver configuration
+~~~~~~~~~~~~~~~~~~~~
+
+.. note:: The following instructions should all be performed on Block Storage
+          nodes.
+
+#. Install ``storops`` from PyPI:
+
+   .. code-block:: console
+
+      # pip install storops
+
+
+#. Add the following content into ``/etc/cinder/cinder.conf``:
+
+   .. code-block:: ini
+
+      [DEFAULT]
+      enabled_backends = unity
+
+      [unity]
+      # Storage protocol
+      storage_protocol = iSCSI
+      # Unisphere IP
+      san_ip =
+      # Unisphere username and password
+      san_login =
+      san_password =
+      # Volume driver name
+      volume_driver = cinder.volume.drivers.dell_emc.unity.Driver
+      # Back-end name
+      volume_backend_name = Storage_ISCSI_01
+
+   .. note:: These are the minimal options for the Unity driver. For more
+             options, see `Driver options`_.
+
+
+.. note:: (**Optional**) If you require multipath-based data access, perform
+          the following steps on both Block Storage and Compute nodes.
+
+
+#. Install ``sysfsutils``, ``sg3-utils`` and ``multipath-tools``:
+
+   ..
code-block:: console + + # apt-get install multipath-tools sg3-utils sysfsutils + + +#. (Required for FC driver in case `Auto-zoning Support`_ is disabled) Zone the + FC ports of Compute nodes with Unity FC target ports. + + +#. Enable Unity storage optimized multipath configuration: + + Add the following content into ``/etc/multipath.conf`` + + .. code-block:: vim + + blacklist { + # Skip the files uner /dev that are definitely not FC/iSCSI devices + # Different system may need different customization + devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*" + devnode "^hd[a-z][0-9]*" + devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]" + + # Skip LUNZ device from VNX/Unity + device { + vendor "DGC" + product "LUNZ" + } + } + + defaults { + user_friendly_names no + flush_on_last_del yes + } + + devices { + # Device attributed for EMC CLARiiON and VNX/Unity series ALUA + device { + vendor "DGC" + product ".*" + product_blacklist "LUNZ" + path_grouping_policy group_by_prio + path_selector "round-robin 0" + path_checker emc_clariion + features "0" + no_path_retry 12 + hardware_handler "1 alua" + prio alua + failback immediate + } + } + + +#. Restart the multipath service: + + .. code-block:: console + + # service multipath-tools restart + + +#. Enable multipath for image transfer in ``/etc/cinder/cinder.conf``. + + .. code-block:: ini + + use_multipath_for_image_xfer = True + + Restart the ``cinder-volume`` service to load the change. + +#. Enable multipath for volume attache/detach in ``/etc/nova/nova.conf``. + + .. code-block:: ini + + [libvirt] + ... + volume_use_multipath = True + ... + +#. Restart the ``nova-compute`` service. + +Driver options +~~~~~~~~~~~~~~ + +.. include:: ../../tables/cinder-dell_emc_unity.rst + +FC or iSCSI ports option +------------------------ + +Specify the list of FC or iSCSI ports to be used to perform the IO. Wild card +character is supported. +For iSCSI ports, use the following format: + +.. code-block:: ini + + unity_io_ports = spa_eth2, spb_eth2, *_eth3 + +For FC ports, use the following format: + +.. code-block:: ini + + unity_io_ports = spa_iom_0_fc0, spb_iom_0_fc0, *_iom_0_fc1 + +List the port ID with the :command:`uemcli` command: + +.. code-block:: console + + $ uemcli /net/port/eth show -output csv + ... + "spa_eth2","SP A Ethernet Port 2","spa","file, net, iscsi", ... + "spb_eth2","SP B Ethernet Port 2","spb","file, net, iscsi", ... + ... + + $ uemcli /net/port/fc show -output csv + ... + "spa_iom_0_fc0","SP A I/O Module 0 FC Port 0","spa", ... + "spb_iom_0_fc0","SP B I/O Module 0 FC Port 0","spb", ... + ... + +Live migration integration +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +It is suggested to have multipath configured on Compute nodes for robust data +access in VM instances live migration scenario. Once ``user_friendly_names no`` +is set in defaults section of ``/etc/multipath.conf``, Compute nodes will use +the WWID as the alias for the multipath devices. + +To enable multipath in live migration: + +.. note:: Make sure `Driver configuration`_ steps are performed before + following steps. + +#. Set multipath in ``/etc/nova/nova.conf``: + + .. code-block:: ini + + [libvirt] + ... + volume_use_multipath = True + ... + + Restart `nova-compute` service. + + +#. Set ``user_friendly_names no`` in ``/etc/multipath.conf`` + + .. code-block:: text + + ... + defaults { + user_friendly_names no + } + ... + +#. Restart the ``multipath-tools`` service. + + +Thin and thick provisioning +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Only thin volume provisioning is supported in Unity volume driver. 
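+
+Because only thin provisioning is supported, no provisioning-related extra
+specification is needed when defining a volume type for this back end. As a
+minimal sketch (the type name ``unity_iscsi`` and the test volume are
+illustrative only), a volume type can be mapped to the back end configured
+above through its ``volume_backend_name``:
+
+.. code-block:: console
+
+   $ openstack volume type create unity_iscsi
+   $ openstack volume type set --property volume_backend_name=Storage_ISCSI_01 unity_iscsi
+   $ openstack volume create --size 10 --type unity_iscsi test_vol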
+
+
+QoS support
+~~~~~~~~~~~
+
+The Unity driver supports the ``maxBWS`` and ``maxIOPS`` specs for the
+back-end consumer type. ``maxIOPS`` represents the ``Maximum IO/S`` absolute
+limit and ``maxBWS`` represents the ``Maximum Bandwidth (KBPS)`` absolute
+limit on the Unity.
+
+
+Auto-zoning support
+~~~~~~~~~~~~~~~~~~~
+
+The Unity volume driver supports auto-zoning and shares the same
+configuration guide as other vendors. Refer to :ref:`fc_zone_manager`
+for detailed configuration steps.
+
+Solution for LUNZ device
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+The EMC host team also found LUNZ on all of the hosts. EMC best practice is to
+present a LUN with HLU 0 to clear any LUNZ devices as they can cause issues on
+the host. See KB `LUNZ Device `_.
+
+To work around this issue, the Unity driver creates a `Dummy LUN` (if not
+present), and adds it to each host to occupy the `HLU 0` during volume
+attachment.
+
+.. note:: This `Dummy LUN` is shared among all hosts connected to the Unity.
+
+Efficient non-disruptive volume backup
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The default implementation in Block Storage for non-disruptive volume backup
+is not efficient because a cloned volume is created during the backup.
+
+An effective approach to backups is to create a snapshot for the volume and
+connect this snapshot to the Block Storage host for volume backup.
+
+Troubleshooting
+~~~~~~~~~~~~~~~
+
+To troubleshoot a failure in an OpenStack deployment, the best way is to
+enable the verbose and debug logs and, at the same time, leverage the built-in
+`Return request ID to caller
+`_
+to track specific Block Storage command logs.
+
+
+#. Enable the verbose and debug logs by setting the following in
+   ``/etc/cinder/cinder.conf`` and restarting all Block Storage services:
+
+   .. code-block:: ini
+
+      [DEFAULT]
+
+      ...
+
+      debug = True
+      verbose = True
+
+      ...
+
+
+   If other projects (usually Compute) are also involved, set ``debug``
+   and ``verbose`` to ``True`` for those projects as well.
+
+#. Use ``--debug`` to trigger any problematic Block Storage operation:
+
+   .. code-block:: console
+
+      # cinder --debug create --name unity_vol1 100
+
+
+   You will see the request ID from the console, for example:
+
+   .. code-block:: console
+
+      DEBUG:keystoneauth:REQ: curl -g -i -X POST
+      http://192.168.1.9:8776/v2/e50d22bdb5a34078a8bfe7be89324078/volumes -H
+      "User-Agent: python-cinderclient" -H "Content-Type: application/json" -H
+      "Accept: application/json" -H "X-Auth-Token:
+      {SHA1}bf4a85ad64302b67a39ad7c6f695a9630f39ab0e" -d '{"volume": {"status":
+      "creating", "user_id": null, "name": "unity_vol1", "imageRef": null,
+      "availability_zone": null, "description": null, "multiattach": false,
+      "attach_status": "detached", "volume_type": null, "metadata": {},
+      "consistencygroup_id": null, "source_volid": null, "snapshot_id": null,
+      "project_id": null, "source_replica": null, "size": 10}}'
+      DEBUG:keystoneauth:RESP: [202] X-Compute-Request-Id:
+      req-3a459e0e-871a-49f9-9796-b63cc48b5015 Content-Type: application/json
+      Content-Length: 804 X-Openstack-Request-Id:
+      req-3a459e0e-871a-49f9-9796-b63cc48b5015 Date: Mon, 12 Dec 2016 09:31:44 GMT
+      Connection: keep-alive
+
+#. Use commands such as ``grep`` and ``awk`` to find the errors related to the
+   Block Storage operations.
+
+   ..
code-block:: console + + # grep "req-3a459e0e-871a-49f9-9796-b63cc48b5015" cinder-volume.log + diff --git a/doc/source/config-reference/block-storage/drivers/dell-equallogic-driver.rst b/doc/source/config-reference/block-storage/drivers/dell-equallogic-driver.rst new file mode 100644 index 00000000000..15167852cee --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/dell-equallogic-driver.rst @@ -0,0 +1,160 @@ +============================= +Dell EqualLogic volume driver +============================= + +The Dell EqualLogic volume driver interacts with configured EqualLogic +arrays and supports various operations. + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +- Create, delete, attach, and detach volumes. +- Create, list, and delete volume snapshots. +- Clone a volume. + +Configuration +~~~~~~~~~~~~~ + +The OpenStack Block Storage service supports: + +- Multiple instances of Dell EqualLogic Groups or Dell EqualLogic Group + Storage Pools and multiple pools on a single array. + +- Multiple instances of Dell EqualLogic Groups or Dell EqualLogic Group + Storage Pools or multiple pools on a single array. + +The Dell EqualLogic volume driver's ability to access the EqualLogic Group is +dependent upon the generic block storage driver's SSH settings in the +``/etc/cinder/cinder.conf`` file (see +:ref:`block-storage-sample-configuration-file` for reference). + +.. include:: ../../tables/cinder-eqlx.rst + +Default (single-instance) configuration +--------------------------------------- + +The following sample ``/etc/cinder/cinder.conf`` configuration lists the +relevant settings for a typical Block Storage service using a single +Dell EqualLogic Group: + +.. code-block:: ini + + [DEFAULT] + # Required settings + + volume_driver = cinder.volume.drivers.dell_emc.ps.PSSeriesISCSIDriver + san_ip = IP_EQLX + san_login = SAN_UNAME + san_password = SAN_PW + eqlx_group_name = EQLX_GROUP + eqlx_pool = EQLX_POOL + + # Optional settings + + san_thin_provision = true|false + use_chap_auth = true|false + chap_username = EQLX_UNAME + chap_password = EQLX_PW + eqlx_cli_max_retries = 5 + san_ssh_port = 22 + ssh_conn_timeout = 30 + san_private_key = SAN_KEY_PATH + ssh_min_pool_conn = 1 + ssh_max_pool_conn = 5 + +In this example, replace the following variables accordingly: + +IP_EQLX + The IP address used to reach the Dell EqualLogic Group through SSH. + This field has no default value. + +SAN_UNAME + The user name to login to the Group manager via SSH at the + ``san_ip``. Default user name is ``grpadmin``. + +SAN_PW + The corresponding password of SAN_UNAME. Not used when + ``san_private_key`` is set. Default password is ``password``. + +EQLX_GROUP + The group to be used for a pool where the Block Storage service will + create volumes and snapshots. Default group is ``group-0``. + +EQLX_POOL + The pool where the Block Storage service will create volumes and + snapshots. Default pool is ``default``. This option cannot be used + for multiple pools utilized by the Block Storage service on a single + Dell EqualLogic Group. + +EQLX_UNAME + The CHAP login account for each volume in a pool, if + ``use_chap_auth`` is set to ``true``. Default account name is + ``chapadmin``. + +EQLX_PW + The corresponding password of EQLX_UNAME. The default password is + randomly generated in hexadecimal, so you must set this password + manually. + +SAN_KEY_PATH (optional) + The filename of the private key used for SSH authentication. This + provides password-less login to the EqualLogic Group. 
Not used when + ``san_password`` is set. There is no default value. + +In addition, enable thin provisioning for SAN volumes using the default +``san_thin_provision = true`` setting. + +Multiple back-end configuration +------------------------------- + +The following example shows the typical configuration for a Block +Storage service that uses two Dell EqualLogic back ends: + +.. code-block:: ini + + enabled_backends = backend1,backend2 + san_ssh_port = 22 + ssh_conn_timeout = 30 + san_thin_provision = true + + [backend1] + volume_driver = cinder.volume.drivers.dell_emc.ps.PSSeriesISCSIDriver + volume_backend_name = backend1 + san_ip = IP_EQLX1 + san_login = SAN_UNAME + san_password = SAN_PW + eqlx_group_name = EQLX_GROUP + eqlx_pool = EQLX_POOL + + [backend2] + volume_driver = cinder.volume.drivers.dell_emc.ps.PSSeriesISCSIDriver + volume_backend_name = backend2 + san_ip = IP_EQLX2 + san_login = SAN_UNAME + san_password = SAN_PW + eqlx_group_name = EQLX_GROUP + eqlx_pool = EQLX_POOL + +In this example: + +- Thin provisioning for SAN volumes is enabled + (``san_thin_provision = true``). This is recommended when setting up + Dell EqualLogic back ends. + +- Each Dell EqualLogic back-end configuration (``[backend1]`` and + ``[backend2]``) has the same required settings as a single back-end + configuration, with the addition of ``volume_backend_name``. + +- The ``san_ssh_port`` option is set to its default value, 22. This + option sets the port used for SSH. + +- The ``ssh_conn_timeout`` option is also set to its default value, 30. + This option sets the timeout in seconds for CLI commands over SSH. + +- The ``IP_EQLX1`` and ``IP_EQLX2`` refer to the IP addresses used to + reach the Dell EqualLogic Group of ``backend1`` and ``backend2`` + through SSH, respectively. + +For information on configuring multiple back ends, see `Configure a +multiple-storage back +end `__. diff --git a/doc/source/config-reference/block-storage/drivers/dell-storagecenter-driver.rst b/doc/source/config-reference/block-storage/drivers/dell-storagecenter-driver.rst new file mode 100644 index 00000000000..1d8861ef3a2 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/dell-storagecenter-driver.rst @@ -0,0 +1,361 @@ +=================================================== +Dell Storage Center Fibre Channel and iSCSI drivers +=================================================== + +The Dell Storage Center volume driver interacts with configured Storage +Center arrays. + +The Dell Storage Center driver manages Storage Center arrays through +the Dell Storage Manager (DSM). DSM connection settings and Storage +Center options are defined in the ``cinder.conf`` file. + +Prerequisite: Dell Storage Manager 2015 R1 or later must be used. + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +The Dell Storage Center volume driver provides the following Cinder +volume operations: + +- Create, delete, attach (map), and detach (unmap) volumes. +- Create, list, and delete volume snapshots. +- Create a volume from a snapshot. +- Copy an image to a volume. +- Copy a volume to an image. +- Clone a volume. +- Extend a volume. +- Create, delete, list and update a consistency group. +- Create, delete, and list consistency group snapshots. +- Manage an existing volume. +- Failover-host for replicated back ends. +- Create a replication using Live Volume. + +Extra spec options +~~~~~~~~~~~~~~~~~~ + +Volume type extra specs can be used to enable a variety of Dell Storage +Center options. 
Selecting Storage Profiles, Replay Profiles, enabling +replication, replication options including Live Volume and Active Replay +replication. + +Storage Profiles control how Storage Center manages volume data. For a +given volume, the selected Storage Profile dictates which disk tier +accepts initial writes, as well as how data progression moves data +between tiers to balance performance and cost. Predefined Storage +Profiles are the most effective way to manage data in Storage Center. + +By default, if no Storage Profile is specified in the volume extra +specs, the default Storage Profile for the user account configured for +the Block Storage driver is used. The extra spec key +``storagetype:storageprofile`` with the value of the name of the Storage +Profile on the Storage Center can be set to allow to use Storage +Profiles other than the default. + +For ease of use from the command line, spaces in Storage Profile names +are ignored. As an example, here is how to define two volume types using +the ``High Priority`` and ``Low Priority`` Storage Profiles: + +.. code-block:: console + + $ openstack volume type create "GoldVolumeType" + $ openstack volume type set --property storagetype:storageprofile=highpriority "GoldVolumeType" + $ openstack volume type create "BronzeVolumeType" + $ openstack volume type set --property storagetype:storageprofile=lowpriority "BronzeVolumeType" + +Replay Profiles control how often the Storage Center takes a replay of a +given volume and how long those replays are kept. The default profile is +the ``daily`` profile that sets the replay to occur once a day and to +persist for one week. + +The extra spec key ``storagetype:replayprofiles`` with the value of the +name of the Replay Profile or profiles on the Storage Center can be set +to allow to use Replay Profiles other than the default ``daily`` profile. + +As an example, here is how to define a volume type using the ``hourly`` +Replay Profile and another specifying both ``hourly`` and the default +``daily`` profile: + +.. code-block:: console + + $ openstack volume type create "HourlyType" + $ openstack volume type set --property storagetype:replayprofile=hourly "HourlyType" + $ openstack volume type create "HourlyAndDailyType" + $ openstack volume type set --property storagetype:replayprofiles=hourly,daily "HourlyAndDailyType" + +Note the comma separated string for the ``HourlyAndDailyType``. + +Replication for a given volume type is enabled via the extra spec +``replication_enabled``. + +To create a volume type that specifies only replication enabled back ends: + +.. code-block:: console + + $ openstack volume type create "ReplicationType" + $ openstack volume type set --property replication_enabled=' True' "ReplicationType" + +Extra specs can be used to configure replication. In addition to the Replay +Profiles above, ``replication:activereplay`` can be set to enable replication +of the volume's active replay. And the replication type can be changed to +synchronous via the ``replication_type`` extra spec can be set. + +To create a volume type that enables replication of the active replay: + +.. code-block:: console + + $ openstack volume type create "ReplicationType" + $ openstack volume type key --property replication_enabled=' True' "ReplicationType" + $ openstack volume type key --property replication:activereplay=' True' "ReplicationType" + +To create a volume type that enables synchronous replication : + +.. 
code-block:: console + + $ openstack volume type create "ReplicationType" + $ openstack volume type key --property replication_enabled=' True' "ReplicationType" + $ openstack volume type key --property replication_type=' sync' "ReplicationType" + +To create a volume type that enables replication using Live Volume: + +.. code-block:: console + + $ openstack volume type create "ReplicationType" + $ openstack volume type key --property replication_enabled=' True' "ReplicationType" + $ openstack volume type key --property replication:livevolume=' True' "ReplicationType" + +If QOS options are enabled on the Storage Center they can be enabled via extra +specs. The name of the Volume QOS can be specified via the +``storagetype:volumeqos`` extra spec. Likewise the name of the Group QOS to +use can be specificed via the ``storagetype:groupqos`` extra spec. Volumes +created with these extra specs set will be added to the specified QOS groups. + +To create a volume type that sets both Volume and Group QOS: + +.. code-block:: console + + $ openstack volume type create "StorageCenterQOS" + $ openstack volume type key --property 'storagetype:volumeqos'='unlimited' "StorageCenterQOS" + $ openstack volume type key --property 'storagetype:groupqos'='limited' "StorageCenterQOS" + +Data reduction profiles can be specified in the +``storagetype:datareductionprofile`` extra spec. Available options are None, +Compression, and Deduplication. Note that not all options are available on +every Storage Center. + +To create volume types that support no compression, compression, and +deduplication and compression respectively: + +.. code-block:: console + + $ openstack volume type create "NoCompressionType" + $ openstack volume type key --property 'storagetype:datareductionprofile'='None' "NoCompressionType" + $ openstack volume type create "CompressedType" + $ openstack volume type key --property 'storagetype:datareductionprofile'='Compression' "CompressedType" + $ openstack volume type create "DedupType" + $ openstack volume type key --property 'storagetype:datareductionprofile'='Deduplication' "DedupType" + +Note: The default is no compression. + +iSCSI configuration +~~~~~~~~~~~~~~~~~~~ + +Use the following instructions to update the configuration file for iSCSI: + +.. code-block:: ini + + default_volume_type = delliscsi + enabled_backends = delliscsi + + [delliscsi] + # Name to give this storage back-end + volume_backend_name = delliscsi + # The iSCSI driver to load + volume_driver = cinder.volume.drivers.dell.dell_storagecenter_iscsi.DellStorageCenterISCSIDriver + # IP address of DSM + san_ip = 172.23.8.101 + # DSM user name + san_login = Admin + # DSM password + san_password = secret + # The Storage Center serial number to use + dell_sc_ssn = 64702 + + # ==Optional settings== + + # The DSM API port + dell_sc_api_port = 3033 + # Server folder to place new server definitions + dell_sc_server_folder = devstacksrv + # Volume folder to place created volumes + dell_sc_volume_folder = devstackvol/Cinder + +Fibre Channel configuration +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Use the following instructions to update the configuration file for fibre +channel: + +.. 
code-block:: ini + + default_volume_type = dellfc + enabled_backends = dellfc + + [dellfc] + # Name to give this storage back-end + volume_backend_name = dellfc + # The FC driver to load + volume_driver = cinder.volume.drivers.dell.dell_storagecenter_fc.DellStorageCenterFCDriver + # IP address of the DSM + san_ip = 172.23.8.101 + # DSM user name + san_login = Admin + # DSM password + san_password = secret + # The Storage Center serial number to use + dell_sc_ssn = 64702 + + # ==Optional settings== + + # The DSM API port + dell_sc_api_port = 3033 + # Server folder to place new server definitions + dell_sc_server_folder = devstacksrv + # Volume folder to place created volumes + dell_sc_volume_folder = devstackvol/Cinder + +Dual DSM +~~~~~~~~ + +It is possible to specify a secondary DSM to use in case the primary DSM fails. + +Configuration is done through the cinder.conf. Both DSMs have to be +configured to manage the same set of Storage Centers for this backend. That +means the dell_sc_ssn and any Storage Centers used for replication or Live +Volume. + +Add network and credential information to the backend to enable Dual DSM. + +.. code-block:: ini + + [dell] + # The IP address and port of the secondary DSM. + secondary_san_ip = 192.168.0.102 + secondary_sc_api_port = 3033 + # Specify credentials for the secondary DSM. + secondary_san_login = Admin + secondary_san_password = secret + +The driver will use the primary until a failure. At that point it will attempt +to use the secondary. It will continue to use the secondary until the volume +service is restarted or the secondary fails at which point it will attempt to +use the primary. + +Replication configuration +~~~~~~~~~~~~~~~~~~~~~~~~~ + +Add the following to the back-end specification to specify another Storage +Center to replicate to. + +.. code-block:: ini + + [dell] + replication_device = target_device_id: 65495, qosnode: cinderqos + +The ``target_device_id`` is the SSN of the remote Storage Center and the +``qosnode`` is the QoS Node setup between the two Storage Centers. + +Note that more than one ``replication_device`` line can be added. This will +slow things down, however. + +A volume is only replicated if the volume is of a volume-type that has +the extra spec ``replication_enabled`` set to `` True``. + +Replication notes +~~~~~~~~~~~~~~~~~ + +This driver supports both standard replication and Live Volume (if supported +and licensed). The main difference is that a VM attached to a Live Volume is +mapped to both Storage Centers. In the case of a failure of the primary Live +Volume still requires a failover-host to move control of the volume to the +second controller. + +Existing mappings should work and not require the instance to be remapped but +it might need to be rebooted. + +Live Volume is more resource intensive than replication. One should be sure +to plan accordingly. + +Failback +~~~~~~~~ + +The failover-host command is designed for the case where the primary system is +not coming back. If it has been executed and the primary has been restored it +is possible to attempt a failback. + +Simply specify default as the backend_id. + +.. code-block:: console + + $ cinder failover-host cinder@delliscsi --backend_id default + +Non trivial heavy lifting is done by this command. It attempts to recover best +it can but if things have diverged to far it can only do so much. It is also a +one time only command so do not reboot or restart the service in the middle of +it. 
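+
+For reference, the initial failover to the replicated Storage Center uses the
+same command with the target SSN as the ``backend_id``. A sketch using the
+``target_device_id`` from the replication example above (the back-end name
+``cinder@delliscsi`` is illustrative):
+
+.. code-block:: console
+
+   $ cinder failover-host cinder@delliscsi --backend_id 65495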
+ +Failover and failback are significant operations under OpenStack Cinder. Be +sure to consult with support before attempting. + +Server type configuration +~~~~~~~~~~~~~~~~~~~~~~~~~ + +This option allows one to set a default Server OS type to use when creating +a server definition on the Dell Storage Center. + +When attaching a volume to a node the Dell Storage Center driver creates a +server definition on the storage array. This defition includes a Server OS +type. The type used by the Dell Storage Center cinder driver is +"Red Hat Linux 6.x". This is a modern operating system definition that supports +all the features of an OpenStack node. + +Add the following to the back-end specification to specify the Server OS to use +when creating a server definition. The server type used must come from the drop +down list in the DSM. + +.. code-block:: ini + + [dell] + dell_server_os = 'Red Hat Linux 7.x' + +Note that this server definition is created once. Changing this setting after +the fact will not change an existing definition. The selected Server OS does +not have to match the actual OS used on the node. + +Excluding a domain +~~~~~~~~~~~~~~~~~~ + +This option excludes a Storage Center ISCSI fault domain from the ISCSI +properties returned by the initialize_connection call. This only applies to +the ISCSI driver. + +Add the excluded_domain_ip option into the backend config for each fault domain +to be excluded. This option takes the specified Target IPv4 Address listed +under the fault domain. Older versions of DSM (EM) may list this as the Well +Known IP Address. + +Add the following to the back-end specification to exclude the domains at +172.20.25.15 and 172.20.26.15. + +.. code-block:: ini + + [dell] + excluded_domain_ip=172.20.25.15 + excluded_domain_ip=172.20.26.15 + +Driver options +~~~~~~~~~~~~~~ + +The following table contains the configuration options specific to the +Dell Storage Center volume driver. + +.. include:: ../../tables/cinder-dellsc.rst diff --git a/doc/source/config-reference/block-storage/drivers/dothill-driver.rst b/doc/source/config-reference/block-storage/drivers/dothill-driver.rst new file mode 100644 index 00000000000..bb5e4370968 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/dothill-driver.rst @@ -0,0 +1,168 @@ +=================================================== +Dot Hill AssuredSAN Fibre Channel and iSCSI drivers +=================================================== + +The ``DotHillFCDriver`` and ``DotHillISCSIDriver`` volume drivers allow +Dot Hill arrays to be used for block storage in OpenStack deployments. + +System requirements +~~~~~~~~~~~~~~~~~~~ + +To use the Dot Hill drivers, the following are required: + +- Dot Hill AssuredSAN array with: + + - iSCSI or FC host interfaces + - G22x firmware or later + - Appropriate licenses for the snapshot and copy volume features + +- Network connectivity between the OpenStack host and the array + management interfaces + +- HTTPS or HTTP must be enabled on the array + +Supported operations +~~~~~~~~~~~~~~~~~~~~~ + +- Create, delete, attach, and detach volumes. +- Create, list, and delete volume snapshots. +- Create a volume from a snapshot. +- Copy an image to a volume. +- Copy a volume to an image. +- Clone a volume. +- Extend a volume. +- Migrate a volume with back-end assistance. +- Retype a volume. +- Manage and unmanage a volume. + +Configuring the array +~~~~~~~~~~~~~~~~~~~~~ + +#. Verify that the array can be managed via an HTTPS connection. 
HTTP can + also be used if ``dothill_api_protocol=http`` is placed into the + appropriate sections of the ``cinder.conf`` file. + + Confirm that virtual pools A and B are present if you plan to use + virtual pools for OpenStack storage. + + If you plan to use vdisks instead of virtual pools, create or identify + one or more vdisks to be used for OpenStack storage; typically this will + mean creating or setting aside one disk group for each of the A and B + controllers. + +#. Edit the ``cinder.conf`` file to define an storage back-end entry for + each storage pool on the array that will be managed by OpenStack. Each + entry consists of a unique section name, surrounded by square brackets, + followed by options specified in ``key=value`` format. + + - The ``dothill_backend_name`` value specifies the name of the storage + pool or vdisk on the array. + + - The ``volume_backend_name`` option value can be a unique value, if + you wish to be able to assign volumes to a specific storage pool on + the array, or a name that is shared among multiple storage pools to + let the volume scheduler choose where new volumes are allocated. + + - The rest of the options will be repeated for each storage pool in a + given array: the appropriate Cinder driver name; IP address or + hostname of the array management interface; the username and password + of an array user account with ``manage`` privileges; and the iSCSI IP + addresses for the array if using the iSCSI transport protocol. + + In the examples below, two back ends are defined, one for pool A and one + for pool B, and a common ``volume_backend_name`` is used so that a + single volume type definition can be used to allocate volumes from both + pools. + + + **iSCSI example back-end entries** + + .. code-block:: ini + + [pool-a] + dothill_backend_name = A + volume_backend_name = dothill-array + volume_driver = cinder.volume.drivers.dothill.dothill_iscsi.DotHillISCSIDriver + san_ip = 10.1.2.3 + san_login = manage + san_password = !manage + dothill_iscsi_ips = 10.2.3.4,10.2.3.5 + + [pool-b] + dothill_backend_name = B + volume_backend_name = dothill-array + volume_driver = cinder.volume.drivers.dothill.dothill_iscsi.DotHillISCSIDriver + san_ip = 10.1.2.3 + san_login = manage + san_password = !manage + dothill_iscsi_ips = 10.2.3.4,10.2.3.5 + + **Fibre Channel example back-end entries** + + .. code-block:: ini + + [pool-a] + dothill_backend_name = A + volume_backend_name = dothill-array + volume_driver = cinder.volume.drivers.dothill.dothill_fc.DotHillFCDriver + san_ip = 10.1.2.3 + san_login = manage + san_password = !manage + + [pool-b] + dothill_backend_name = B + volume_backend_name = dothill-array + volume_driver = cinder.volume.drivers.dothill.dothill_fc.DotHillFCDriver + san_ip = 10.1.2.3 + san_login = manage + san_password = !manage + +#. If any ``volume_backend_name`` value refers to a vdisk rather than a + virtual pool, add an additional statement + ``dothill_backend_type = linear`` to that back-end entry. + +#. If HTTPS is not enabled in the array, include + ``dothill_api_protocol = http`` in each of the back-end definitions. + +#. If HTTPS is enabled, you can enable certificate verification with the + option ``dothill_verify_certificate=True``. You may also use the + ``dothill_verify_certificate_path`` parameter to specify the path to a + CA\_BUNDLE file containing CAs other than those in the default list. + +#. 
Modify the ``[DEFAULT]`` section of the ``cinder.conf`` file to add an + ``enabled_backends`` parameter specifying the back-end entries you added, + and a ``default_volume_type`` parameter specifying the name of a volume + type that you will create in the next step. + + **Example of [DEFAULT] section changes** + + .. code-block:: ini + + [DEFAULT] + # ... + enabled_backends = pool-a,pool-b + default_volume_type = dothill + # ... + +#. Create a new volume type for each distinct ``volume_backend_name`` value + that you added to cinder.conf. The example below assumes that the same + ``volume_backend_name=dothill-array`` option was specified in all of the + entries, and specifies that the volume type ``dothill`` can be used to + allocate volumes from any of them. + + **Example of creating a volume type** + + .. code-block:: console + + $ openstack volume type create dothill + $ openstack volume type set --property volume_backend_name=dothill-array dothill + +#. After modifying ``cinder.conf``, restart the ``cinder-volume`` service. + +Driver-specific options +~~~~~~~~~~~~~~~~~~~~~~~ + +The following table contains the configuration options that are specific +to the Dot Hill drivers. + +.. include:: ../../tables/cinder-dothill.rst diff --git a/doc/source/config-reference/block-storage/drivers/emc-vmax-driver.rst b/doc/source/config-reference/block-storage/drivers/emc-vmax-driver.rst new file mode 100644 index 00000000000..040e01e3813 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/emc-vmax-driver.rst @@ -0,0 +1,1614 @@ +================================== +Dell EMC VMAX iSCSI and FC drivers +================================== + +The Dell EMC VMAX drivers, ``VMAXISCSIDriver`` and ``VMAXFCDriver``, support +the use of Dell EMC VMAX storage arrays with Block Storage. They both provide +equivalent functions and differ only in support for their respective host +attachment methods. + +The drivers perform volume operations by communicating with the back-end VMAX +storage. It uses a CIM client in Python called ``PyWBEM`` to perform CIM +operations over HTTP. + +The EMC CIM Object Manager (ECOM) is packaged with the EMC SMI-S provider. It +is a CIM server that enables CIM clients to perform CIM operations over HTTP by +using SMI-S in the back end for VMAX storage operations. + +The Dell EMC SMI-S Provider supports the SNIA Storage Management Initiative +(SMI), an ANSI standard for storage management. It supports the VMAX storage +system. + +System requirements +~~~~~~~~~~~~~~~~~~~ + +The Cinder driver supports the VMAX-3 series. + +For VMAX-3 series, Solutions Enabler 8.3.0.11 or later is required. This +is SSL only. Refer to section below ``SSL support``. + +When installing Solutions Enabler, make sure you explicitly add the SMI-S +component. + +You can download Solutions Enabler from the Dell EMC's support web site +(login is required). See the ``Solutions Enabler 8.3.0 Installation and +Configuration Guide`` at ``support.emc.com``. + +Ensure that there is only one SMI-S (ECOM) server active on the same VMAX +array. 
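+
+Because Solutions Enabler 8.3.x exposes ECOM over SSL only, a quick check that
+the ECOM port (5989 by default) is reachable from the Block Storage node can
+save time before configuring the driver. This is only a connectivity sketch;
+``my_ecom_host`` is a placeholder for your ECOM server:
+
+.. code-block:: console
+
+   $ openssl s_client -connect my_ecom_host:5989 < /dev/null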
+ + +Required VMAX software suites for OpenStack +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +There are five Software Suites available for the VMAX All Flash and Hybrid: + +- Base Suite +- Advanced Suite +- Local Replication Suite +- Remote Replication Suite +- Total Productivity Pack + +OpenStack requires the Advanced Suite and the Local Replication Suite +or the Total Productivity Pack (it includes the Advanced Suite and the +Local Replication Suite) for the VMAX All Flash and Hybrid. + +Each are licensed separately. For further details on how to get the +relevant license(s), reference eLicensing Support below. + + +eLicensing support +~~~~~~~~~~~~~~~~~~ + +To activate your entitlements and obtain your VMAX license files, visit the +Service Center on ``_, as directed on your License +Authorization Code (LAC) letter emailed to you. + +- For help with missing or incorrect entitlements after activation + (that is, expected functionality remains unavailable because it is not + licensed), contact your EMC account representative or authorized reseller. + +- For help with any errors applying license files through Solutions Enabler, + contact the Dell EMC Customer Support Center. + +- If you are missing a LAC letter or require further instructions on + activating your licenses through the Online Support site, contact EMC's + worldwide Licensing team at ``licensing@emc.com`` or call: + + North America, Latin America, APJK, Australia, New Zealand: SVC4EMC + (800-782-4362) and follow the voice prompts. + + EMEA: +353 (0) 21 4879862 and follow the voice prompts. + + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +VMAX drivers support these operations: + +- Create, list, delete, attach, and detach volumes +- Create, list, and delete volume snapshots +- Copy an image to a volume +- Copy a volume to an image +- Clone a volume +- Extend a volume +- Retype a volume (Host and storage assisted volume migration) +- Create a volume from a snapshot +- Create and delete consistency group +- Create and delete consistency group snapshot +- Modify consistency group (add and remove volumes) +- Create consistency group from source +- Create and delete generic volume group +- Create and delete generice volume group snapshot +- Modify generic volume group (add and remove volumes) +- Create generic volume group from source + +VMAX drivers also support the following features: + +- Dynamic masking view creation +- Dynamic determination of the target iSCSI IP address +- iSCSI multipath support +- Oversubscription +- Live Migration +- Attach and detach snapshots +- Volume replication + +VMAX All Flash and Hybrid: + +- Service Level support +- SnapVX support +- All Flash support +- Compression support + +.. note:: + + VMAX All Flash array with Solutions Enabler 8.3.0.11 or later have + compression enabled by default when associated with Diamond Service Level. + This means volumes added to any newly created storage groups will be + compressed. + +Setup VMAX drivers +~~~~~~~~~~~~~~~~~~ + +.. 
table:: **Pywbem Versions** + + +------------+-----------------------------------+ + | Pywbem | Ubuntu14.04(LTS),Ubuntu16.04(LTS),| + | Version | Red Hat Enterprise Linux, CentOS | + | | and Fedora | + +============+=================+=================+ + | | Python2 | Python3 | + + +-------+---------+-------+---------+ + | | pip | Native | pip | Native | + +------------+-------+---------+-------+---------+ + | 0.9.0 | No | N/A | Yes | N/A | + +------------+-------+---------+-------+---------+ + | 0.8.4 | No | N/A | Yes | N/A | + +------------+-------+---------+-------+---------+ + | 0.7.0 | No | Yes | No | Yes | + +------------+-------+---------+-------+---------+ + +.. note:: + + On Python2, use the updated distro version, for example: + + .. code-block:: console + + # apt-get install python-pywbem + +.. note:: + + On Python3, use the official pywbem version (V0.9.0 or v0.8.4). + +#. Install the ``python-pywbem`` package for your distribution. + + - On Ubuntu: + + .. code-block:: console + + # apt-get install python-pywbem + + - On openSUSE: + + .. code-block:: console + + # zypper install python-pywbem + + - On Red Hat Enterprise Linux, CentOS, and Fedora: + + .. code-block:: console + + # yum install pywbem + + .. note:: + + A potential issue can exist with the ``python-pywbem`` dependency package, + especially M2crypto. To troubleshot and resolve these types of issues, + follow these steps. + + - On Ubuntu: + + .. code-block:: console + + # apt-get remove --purge -y python-m2crypto + # pip uninstall pywbem + # apt-get install python-pywbem + + - On openSUSE: + + .. code-block:: console + + # zypper remove --clean-deps python-m2crypto + # pip uninstall pywbem + # zypper install python-pywbem + + - On Red Hat Enterprise Linux, CentOS, and Fedora: + + .. code-block:: console + + # yum remove python-m2crypto + # sudo pip uninstall pywbem + # yum install pywbem + +#. Install iSCSI Utilities (for iSCSI drivers only). + + #. Download and configure the Cinder node as an iSCSI initiator. + #. Install the ``open-iscsi`` package. + + - On Ubuntu: + + .. code-block:: console + + # apt-get install open-iscsi + + - On openSUSE: + + .. code-block:: console + + # zypper install open-iscsi + + - On Red Hat Enterprise Linux, CentOS, and Fedora: + + .. code-block:: console + + # yum install scsi-target-utils.x86_64 + + #. Enable the iSCSI driver to start automatically. + +#. Download Solutions Enabler from ``support.emc.com`` and install it. + Make sure you install the SMIS component. A [Y]es response installs the + ``SMISPROVIDER`` component. + + .. code-block:: console + + Install EMC Solutions Enabler SMIS Component ? [N]:Y + + You can install Solutions Enabler on a non-OpenStack host. Supported + platforms include different flavors of Windows, Red Hat, and SUSE Linux. + Solutions Enabler can be installed on a physical server or a VM hosted by + an ESX server. Note that the supported hypervisor for a VM running + Solutions Enabler is ESX only. See the ``Solutions Enabler 8.3.0 + Installation and Configuration Guide`` on ``support.emc.com`` for more + details. + + .. note:: + + You must discover storage arrays on the ECOM before you can use + the VMAX drivers. Follow instructions in ``Solutions Enabler 8.3.0 + Installation and Configuration Guide`` on ``support.emc.com`` for more + details. + + The ECOM server is usually installed at ``/opt/emc/ECIM/ECOM/bin`` on Linux + and ``C:\Program Files\EMC\ECIM\ECOM\bin`` on Windows. 
After you install and + configure the ECOM, go to that directory and type ``TestSmiProvider.exe`` + for windows and ``./TestSmiProvider`` for linux + + Use ``addsys`` in ``TestSmiProvider`` to add an array. Use ``dv`` and examine + the output after the array is added. In advance of ``TestSmiProvider``, + arrays need to be discovered on the Solutions Enabler by using the + :command:`symcfg discover` command. Make sure that the arrays are recognized by the + SMI-S server before using the EMC VMAX drivers. + +#. Configure Block Storage + + Add the following entries to ``/etc/cinder/cinder.conf``: + + .. code-block:: ini + + enabled_backends = CONF_GROUP_ISCSI, CONF_GROUP_FC + + [CONF_GROUP_ISCSI] + volume_driver = cinder.volume.drivers.dell_emc.vmax.iscsi.VMAXISCSIDriver + cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_ISCSI.xml + volume_backend_name = ISCSI_backend + + [CONF_GROUP_FC] + volume_driver = cinder.volume.drivers.dell_emc.vmax.fc.EMCVMAXFCDriver + cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_FC.xml + volume_backend_name = FC_backend + + In this example, two back-end configuration groups are enabled: + ``CONF_GROUP_ISCSI`` and ``CONF_GROUP_FC``. Each configuration group has a + section describing unique parameters for connections, drivers, the + ``volume_backend_name``, and the name of the EMC-specific configuration file + containing additional settings. Note that the file name is in the format + ``/etc/cinder/cinder_emc_config_[confGroup].xml``. + + Once the ``cinder.conf`` and EMC-specific configuration files have been + created, :command:`openstack` commands need to be issued in order to create and + associate OpenStack volume types with the declared ``volume_backend_names``: + + .. code-block:: console + + $ openstack volume type create VMAX_ISCSI + $ openstack volume type set --property volume_backend_name=ISCSI_backend VMAX_ISCSI + $ openstack volume type create VMAX_FC + $ openstack volume type set --property volume_backend_name=FC_backend VMAX_FC + + By issuing these commands, the Block Storage volume type ``VMAX_ISCSI`` is + associated with the ``ISCSI_backend``, and the type ``VMAX_FC`` is + associated with the ``FC_backend``. + + + Create the ``/etc/cinder/cinder_emc_config_CONF_GROUP_ISCSI.xml`` file. + You do not need to restart the service for this change. + + Add the following lines to the XML file: + + VMAX All Flash and Hybrid + .. code-block:: xml + + + + 1.1.1.1 + 00 + user1 + password1 + + OS-PORTGROUP1-PG + OS-PORTGROUP2-PG + + 111111111111 + SRP_1 + Diamond + OLTP + + + Where: + +.. note:: + + VMAX Hybrid supports Optimized, Diamond, Platinum, Gold, Silver, Bronze, and + NONE service levels. VMAX All Flash supports Diamond and NONE. Both + support DSS_REP, DSS, OLTP_REP, OLTP, and NONE workloads. + +``EcomServerIp`` + IP address of the ECOM server which is packaged with SMI-S. + +``EcomServerPort`` + Port number of the ECOM server which is packaged with SMI-S. + +``EcomUserName`` and ``EcomPassword`` + Credentials for the ECOM server. + +``PortGroups`` + Supplies the names of VMAX port groups that have been pre-configured to + expose volumes managed by this backend. Each supplied port group should + have sufficient number and distribution of ports (across directors and + switches) as to ensure adequate bandwidth and failure protection for the + volume connections. PortGroups can contain one or more port groups of + either iSCSI or FC ports. 
When a dynamic masking view is created by the + VMAX driver, the port group is chosen randomly from the PortGroup list, to + evenly distribute load across the set of groups provided. Make sure that + the PortGroups set contains either all FC or all iSCSI port groups (for a + given back end), as appropriate for the configured driver (iSCSI or FC). + +``Array`` + Unique VMAX array serial number. + +``Pool`` + Unique pool name within a given array. For back ends not using FAST + automated tiering, the pool is a single pool that has been created by the + administrator. For back ends exposing FAST policy automated tiering, the + pool is the bind pool to be used with the FAST policy. + +``ServiceLevel`` + VMAX All Flash and Hybrid only. The Service Level manages the underlying + storage to provide expected performance. Omitting the ``ServiceLevel`` + tag means that non FAST storage groups will be created instead + (storage groups not associated with any service level). + +``Workload`` + VMAX All Flash and Hybrid only. When a workload type is added, the latency + range is reduced due to the added information. Omitting the ``Workload`` + tag means the latency range will be the widest for its SLO type. + +FC Zoning with VMAX +~~~~~~~~~~~~~~~~~~~ + +Zone Manager is required when there is a fabric between the host and array. +This is necessary for larger configurations where pre-zoning would be too +complex and open-zoning would raise security concerns. + +iSCSI with VMAX +~~~~~~~~~~~~~~~ + +- Make sure the ``iscsi-initiator-utils`` package is installed on all Compute + nodes. + +.. note:: + + You can only ping the VMAX iSCSI target ports when there is a valid masking + view. An attach operation creates this masking view. + +VMAX masking view and group naming info +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Masking view names +------------------ + +Masking views are dynamically created by the VMAX FC and iSCSI drivers using +the following naming conventions. ``[protocol]`` is either ``I`` for volumes +attached over iSCSI or ``F`` for volumes attached over Fiber Channel. + +VMAX All Flash and Hybrid + +.. code-block:: text + + OS-[shortHostName]-[SRP]-[SLO]-[workload]-[protocol]-MV + +Initiator group names +--------------------- + +For each host that is attached to VMAX volumes using the drivers, an initiator +group is created or re-used (per attachment type). All initiators of the +appropriate type known for that host are included in the group. At each new +attach volume operation, the VMAX driver retrieves the initiators (either WWNNs +or IQNs) from OpenStack and adds or updates the contents of the Initiator Group +as required. Names are of the following format. ``[protocol]`` is either ``I`` +for volumes attached over iSCSI or ``F`` for volumes attached over Fiber +Channel. + +.. code-block:: text + + OS-[shortHostName]-[protocol]-IG + +.. note:: + + Hosts attaching to OpenStack managed VMAX storage cannot also attach to + storage on the same VMAX that are not managed by OpenStack. + +FA port groups +-------------- + +VMAX array FA ports to be used in a new masking view are chosen from the list +provided in the EMC configuration file. + +Storage group names +------------------- + +As volumes are attached to a host, they are either added to an existing storage +group (if it exists) or a new storage group is created and the volume is then +added. 
Storage groups contain volumes created from a pool (either single-pool +or FAST-controlled), attached to a single host, over a single connection type +(iSCSI or FC). ``[protocol]`` is either ``I`` for volumes attached over iSCSI +or ``F`` for volumes attached over Fiber Channel. + +VMAX All Flash and Hybrid + +.. code-block:: text + + OS-[shortHostName]-[SRP]-[SLO]-[Workload]-[protocol]-SG + + +Interval and Retries +-------------------- + +By default, ``Interval`` and ``Retries`` are ``10`` seconds and ``60`` +retries respectively. These determine how long (``Interval``) and how many +times (``Retries``) a user is willing to wait for a single SMIS call, +``10*60=300seconds``. Depending on usage, these may need to be overriden by +the user in the XML file. For example, if performance is a factor, then the +``Interval`` should be decreased to check the job status more frequently, +and if multiple concurrent provisioning requests are issued then ``Retries`` +should be increased so calls will not timeout prematurely. + +In the example below, the driver checks every 5 seconds for the status of the +job. It will continue checking for 120 retries before it times out. + +Add the following lines to the XML file: + + VMAX All Flash and Hybrid + + .. code-block:: xml + + + + 1.1.1.1 + 00 + user1 + password1 + + OS-PORTGROUP1-PG + OS-PORTGROUP2-PG + + 111111111111 + SRP_1 + 5 + 120 + + +SSL support +~~~~~~~~~~~ + +.. note:: + The ECOM component in Solutions Enabler enforces SSL in 8.3.0.1 or later. + By default, this port is 5989. + +#. Get the CA certificate of the ECOM server. This pulls the CA cert file and + saves it as .pem file. The ECOM server IP address or hostname is ``my_ecom_host``. + The sample name of the .pem file is ``ca_cert.pem``: + + .. code-block:: console + + # openssl s_client -showcerts -connect my_ecom_host:5989 /dev/null|openssl x509 -outform PEM >ca_cert.pem + +#. Copy the pem file to the system certificate directory: + + .. code-block:: console + + # cp ca_cert.pem /usr/share/ca-certificates/ca_cert.crt + +#. Update CA certificate database with the following commands: + + .. code-block:: console + + # sudo dpkg-reconfigure ca-certificates + + .. note:: + Check that the new ``ca_cert.crt`` will activate by selecting + :guilabel:`ask` on the dialog. If it is not enabled for activation, use the + down and up keys to select, and the space key to enable or disable. + + .. code-block:: console + + # sudo update-ca-certificates + +#. Update :file:`/etc/cinder/cinder.conf` to reflect SSL functionality by + adding the following to the back end block. ``my_location`` is the location + of the .pem file generated in step one: + + .. code-block:: ini + + driver_ssl_cert_verify = False + driver_use_ssl = True + + If you skip steps two and three, you must add the location of you .pem file. + + .. code-block:: ini + + driver_ssl_cert_verify = False + driver_use_ssl = True + driver_ssl_cert_path = /my_location/ca_cert.pem + +#. Update EcomServerIp to ECOM host name and EcomServerPort to secure port + (5989 by default) in :file:`/etc/cinder/cinder_emc_config_.xml`. + + +Oversubscription support +~~~~~~~~~~~~~~~~~~~~~~~~ + +Oversubscription support requires the ``/etc/cinder/cinder.conf`` to be +updated with two additional tags ``max_over_subscription_ratio`` and +``reserved_percentage``. In the sample below, the value of 2.0 for +``max_over_subscription_ratio`` means that the pools in oversubscribed by a +factor of 2, or 200% oversubscribed. 
The ``reserved_percentage`` is the high +water mark where by the physical remaining space cannot be exceeded. +For example, if there is only 4% of physical space left and the reserve +percentage is 5, the free space will equate to zero. This is a safety +mechanism to prevent a scenario where a provisioning request fails due to +insufficient raw space. + +The parameter ``max_over_subscription_ratio`` and ``reserved_percentage`` are +optional. + +To set these parameter go to the configuration group of the volume type in +:file:`/etc/cinder/cinder.conf`. + +.. code-block:: ini + + [VMAX_ISCSI_SILVER] + cinder_emc_config_file = /etc/cinder/cinder_emc_config_VMAX_ISCSI_SILVER.xml + volume_driver = cinder.volume.drivers.dell_emc.vmax.iscsi.VMAXISCSIDriver + volume_backend_name = VMAX_ISCSI_SILVER + max_over_subscription_ratio = 2.0 + reserved_percentage = 10 + +For the second iteration of over subscription, take into account the +EMCMaxSubscriptionPercent property on the pool. This value is the highest +that a pool can be oversubscribed. + +Scenario 1 +---------- + +``EMCMaxSubscriptionPercent`` is 200 and the user defined +``max_over_subscription_ratio`` is 2.5, the latter is ignored. +Oversubscription is 200%. + +Scenario 2 +---------- + +``EMCMaxSubscriptionPercent`` is 200 and the user defined +``max_over_subscription_ratio`` is 1.5, 1.5 equates to 150% and is less than +the value set on the pool. Oversubscription is 150%. + +Scenario 3 +---------- + +``EMCMaxSubscriptionPercent`` is 0. This means there is no upper limit on the +pool. The user defined ``max_over_subscription_ratio`` is 1.5. +Oversubscription is 150%. + +Scenario 4 +---------- + +``EMCMaxSubscriptionPercent`` is 0. ``max_over_subscription_ratio`` is not +set by the user. We recommend to default to upper limit, this is 150%. + +.. note:: + If FAST is set and multiple pools are associated with a FAST policy, + then the same rules apply. The difference is, the TotalManagedSpace and + EMCSubscribedCapacity for each pool associated with the FAST policy are + aggregated. + +Scenario 5 +---------- + +``EMCMaxSubscriptionPercent`` is 200 on one pool. It is 300 on another pool. +The user defined ``max_over_subscription_ratio`` is 2.5. Oversubscription is +200% on the first pool and 250% on the other. + +QoS (Quality of Service) support +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Quality of service(QoS) has traditionally been associated with network +bandwidth usage. Network administrators set limitations on certain networks +in terms of bandwidth usage for clients. This enables them to provide a +tiered level of service based on cost. The cinder QoS offers similar +functionality based on volume type setting limits on host storage bandwidth +per service offering. Each volume type is tied to specific QoS attributes +that are unique to each storage vendor. The VMAX plugin offers limits via +the following attributes: + +- By I/O limit per second (IOPS) +- By limiting throughput per second (MB/S) +- Dynamic distribution +- The VMAX offers modification of QoS at the Storage Group level + +USE CASE 1 - Default values +--------------------------- + +Prerequisites - VMAX + +- Host I/O Limit (MB/Sec) - No Limit +- Host I/O Limit (IO/Sec) - No Limit +- Set Dynamic Distribution - N/A + +.. 
table:: **Prerequisites - Block Storage (cinder) back end (storage group)** + + +-------------------+--------+ + | Key | Value | + +===================+========+ + | maxIOPS | 4000 | + +-------------------+--------+ + | maxMBPS | 4000 | + +-------------------+--------+ + | DistributionType | Always | + +-------------------+--------+ + +#. Create QoS Specs with the prerequisite values above: + + .. code-block:: console + + $ openstack volume qos create --property maxIOPS=4000 maxMBPS=4000 DistributionType=Always SILVER + +#. Associate QoS specs with specified volume type: + + .. code-block:: console + + $ openstack volume qos associate SILVER VOLUME_TYPE + +#. Create volume with the volume type indicated above: + + .. code-block:: console + + $ openstack volume create --size 1 --type VOLUME_TYPE TEST_VOLUME + +**Outcome - VMAX (storage group)** + +- Host I/O Limit (MB/Sec) - 4000 +- Host I/O Limit (IO/Sec) - 4000 +- Set Dynamic Distribution - Always + +**Outcome - Block Storage (cinder)** + +Volume is created against volume type and QoS is enforced with the parameters +above. + +USE CASE 2 - Preset limits +-------------------------- + +Prerequisites - VMAX + +- Host I/O Limit (MB/Sec) - 2000 +- Host I/O Limit (IO/Sec) - 2000 +- Set Dynamic Distribution - Never + +.. table:: **Prerequisites - Block Storage (cinder) back end (storage group)** + + +-------------------+--------+ + | Key | Value | + +===================+========+ + | maxIOPS | 4000 | + +-------------------+--------+ + | maxMBPS | 4000 | + +-------------------+--------+ + | DistributionType | Always | + +-------------------+--------+ + +#. Create QoS specifications with the prerequisite values above: + + .. code-block:: console + + $ openstack volume qos create --property maxIOPS=4000 maxMBPS=4000 DistributionType=Always SILVER + +#. Associate QoS specifications with specified volume type: + + .. code-block:: console + + $ openstack volume qos associate SILVER VOLUME_TYPE + +#. Create volume with the volume type indicated above: + + .. code-block:: console + + $ openstack volume create --size 1 --type VOLUME_TYPE TEST_VOLUME + +**Outcome - VMAX (storage group)** + +- Host I/O Limit (MB/Sec) - 4000 +- Host I/O Limit (IO/Sec) - 4000 +- Set Dynamic Distribution - Always + +**Outcome - Block Storage (cinder)** + +Volume is created against volume type and QoS is enforced with the parameters +above. + + +USE CASE 3 - Preset limits +-------------------------- + +Prerequisites - VMAX + +- Host I/O Limit (MB/Sec) - No Limit +- Host I/O Limit (IO/Sec) - No Limit +- Set Dynamic Distribution - N/A + +.. table:: **Prerequisites - Block Storage (cinder) back end (storage group)** + + +-------------------+--------+ + | Key | Value | + +===================+========+ + | DistributionType | Always | + +-------------------+--------+ + +#. Create QoS specifications with the prerequisite values above: + + .. code-block:: console + + $ openstack volume qos create --property DistributionType=Always SILVER + +#. Associate QoS specifications with specified volume type: + + .. code-block:: console + + $ openstack volume qos associate SILVER VOLUME_TYPE + +#. Create volume with the volume type indicated above: + + .. 
code-block:: console
+
+      $ openstack volume create --size 1 --type VOLUME_TYPE TEST_VOLUME
+
+**Outcome - VMAX (storage group)**
+
+- Host I/O Limit (MB/Sec) - No Limit
+- Host I/O Limit (IO/Sec) - No Limit
+- Set Dynamic Distribution - N/A
+
+**Outcome - Block Storage (cinder)**
+
+Volume is created against volume type and there is no QoS change.
+
+USE CASE 4 - Preset limits
+--------------------------
+
+Prerequisites - VMAX
+
+- Host I/O Limit (MB/Sec) - No Limit
+- Host I/O Limit (IO/Sec) - No Limit
+- Set Dynamic Distribution - N/A
+
+.. table:: **Prerequisites - Block Storage (cinder) back end (storage group)**
+
+   +-------------------+-----------+
+   | Key               | Value     |
+   +===================+===========+
+   | DistributionType  | OnFailure |
+   +-------------------+-----------+
+
+#. Create QoS specifications with the prerequisite values above:
+
+   .. code-block:: console
+
+      $ openstack volume qos create --property DistributionType=OnFailure SILVER
+
+#. Associate QoS specifications with specified volume type:
+
+   .. code-block:: console
+
+      $ openstack volume qos associate SILVER VOLUME_TYPE
+
+#. Create volume with the volume type indicated above:
+
+   .. code-block:: console
+
+      $ openstack volume create --size 1 --type VOLUME_TYPE TEST_VOLUME
+
+**Outcome - VMAX (storage group)**
+
+- Host I/O Limit (MB/Sec) - No Limit
+- Host I/O Limit (IO/Sec) - No Limit
+- Set Dynamic Distribution - N/A
+
+**Outcome - Block Storage (cinder)**
+
+Volume is created against volume type and there is no QoS change.
+
+iSCSI multipathing support
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+- Install open-iscsi on all nodes on your system
+- Do not install EMC PowerPath as it cannot co-exist with native multipath
+  software
+- Multipath tools must be installed on all nova compute nodes
+
+On Ubuntu:
+
+.. code-block:: console
+
+   # apt-get install open-iscsi           #ensure iSCSI is installed
+   # apt-get install multipath-tools      #multipath modules
+   # apt-get install sysfsutils sg3-utils #file system utilities
+   # apt-get install scsitools            #SCSI tools
+
+On openSUSE and SUSE Linux Enterprise Server:
+
+.. code-block:: console
+
+   # zypper install open-iscsi            #ensure iSCSI is installed
+   # zypper install multipath-tools       #multipath modules
+   # zypper install sysfsutils sg3-utils  #file system utilities
+   # zypper install scsitools             #SCSI tools
+
+On Red Hat Enterprise Linux and CentOS:
+
+.. code-block:: console
+
+   # yum install iscsi-initiator-utils    #ensure iSCSI is installed
+   # yum install device-mapper-multipath  #multipath modules
+   # yum install sysfsutils sg3-utils     #file system utilities
+   # yum install scsitools                #SCSI tools
+
+
+Multipath configuration file
+----------------------------
+
+The multipath configuration file may be edited for better management and
+performance. Log in as a privileged user and make the following changes to
+:file:`/etc/multipath.conf` on the Compute (nova) node(s).
+
+.. code-block:: vim
+
+   devices {
+       # Device attributes for EMC VMAX
+       device {
+               vendor "EMC"
+               product "SYMMETRIX"
+               path_grouping_policy multibus
+               getuid_callout "/lib/udev/scsi_id --page=pre-spc3-83 --whitelisted --device=/dev/%n"
+               path_selector "round-robin 0"
+               path_checker tur
+               features "0"
+               hardware_handler "0"
+               prio const
+               rr_weight uniform
+               no_path_retry 6
+               rr_min_io 1000
+               rr_min_io_rq 1
+               }
+   }
+
+You may need to reboot the host after installing the MPIO tools or restart
+iSCSI and multipath services.
+
+On Ubuntu:
+
+.. 
code-block:: console + + # service open-iscsi restart + # service multipath-tools restart + +On openSUSE, SUSE Linux Enterprise Server, Red Hat Enterprise Linux, and +CentOS: + +.. code-block:: console + + # systemctl restart open-iscsi + # systemctl restart multipath-tools + +.. code-block:: console + + $ lsblk + NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT + sda 8:0 0 1G 0 disk + ..360000970000196701868533030303235 (dm-6) 252:6 0 1G 0 mpath + sdb 8:16 0 1G 0 disk + ..360000970000196701868533030303235 (dm-6) 252:6 0 1G 0 mpath + vda 253:0 0 1T 0 disk + +OpenStack configurations +------------------------ + +On Compute (nova) node, add the following flag in the ``[libvirt]`` section of +:file:`/etc/nova/nova.conf`: + +.. code-block:: ini + + iscsi_use_multipath = True + +On cinder controller node, set the multipath flag to true in +:file:`/etc/cinder/cinder.conf`: + +.. code-block:: ini + + use_multipath_for_image_xfer = True + +Restart ``nova-compute`` and ``cinder-volume`` services after the change. + +Verify you have multiple initiators available on the compute node for I/O +------------------------------------------------------------------------- + +#. Create a 3GB VMAX volume. +#. Create an instance from image out of native LVM storage or from VMAX + storage, for example, from a bootable volume +#. Attach the 3GB volume to the new instance: + + .. code-block:: console + + $ multipath -ll + mpath102 (360000970000196700531533030383039) dm-3 EMC,SYMMETRIX + size=3G features='1 queue_if_no_path' hwhandler='0' wp=rw + '-+- policy='round-robin 0' prio=1 status=active + 33:0:0:1 sdb 8:16 active ready running + '- 34:0:0:1 sdc 8:32 active ready running + +#. Use the ``lsblk`` command to see the multipath device: + + .. code-block:: console + + $ lsblk + NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT + sdb 8:0 0 3G 0 disk + ..360000970000196700531533030383039 (dm-6) 252:6 0 3G 0 mpath + sdc 8:16 0 3G 0 disk + ..360000970000196700531533030383039 (dm-6) 252:6 0 3G 0 mpath + vda + +Consistency group support +~~~~~~~~~~~~~~~~~~~~~~~~~ + +Consistency Groups operations are performed through the CLI using v2 of +the cinder API. + +:file:`/etc/cinder/policy.json` may need to be updated to enable new API calls +for Consistency groups. + +.. note:: + Even though the terminology is 'Consistency Group' in OpenStack, a Storage + Group is created on the VMAX, and should not be confused with a VMAX + Consistency Group which is an SRDF feature. The Storage Group is not + associated with any Service Level. + +Operations +---------- + +* Create a Consistency Group: + + .. code-block:: console + + cinder --os-volume-api-version 2 consisgroup-create [--name ] + [--description ] [--availability-zone ] + + + .. code-block:: console + + $ cinder --os-volume-api-version 2 consisgroup-create --name bronzeCG2 volume_type_1 + +* List Consistency Groups: + + .. code-block:: console + + cinder consisgroup-list [--all-tenants [<0|1>]] + + .. code-block:: console + + $ cinder consisgroup-list + +* Show a Consistency Group: + + .. code-block:: console + + cinder consisgroup-show + + .. code-block:: console + + $ cinder consisgroup-show 38a604b7-06eb-4202-8651-dbf2610a0827 + +* Update a consistency Group: + + .. code-block:: console + + cinder consisgroup-update [--name ] [--description ] + [--add-volumes ] [--remove-volumes ] + + + Change name: + + .. code-block:: console + + $ cinder consisgroup-update --name updated_name 38a604b7-06eb-4202-8651-dbf2610a0827 + + Add volume(s) to a Consistency Group: + + .. 
code-block:: console + + $ cinder consisgroup-update --add-volumes af1ae89b-564b-4c7f-92d9-c54a2243a5fe 38a604b7-06eb-4202-8651-dbf2610a0827 + + Delete volume(s) from a Consistency Group: + + .. code-block:: console + + $ cinder consisgroup-update --remove-volumes af1ae89b-564b-4c7f-92d9-c54a2243a5fe 38a604b7-06eb-4202-8651-dbf2610a0827 + +* Create a snapshot of a Consistency Group: + + .. code-block:: console + + cinder cgsnapshot-create [--name ] [--description ] + + + .. code-block:: console + + $ cinder cgsnapshot-create 618d962d-2917-4cca-a3ee-9699373e6625 + +* Delete a snapshot of a Consistency Group: + + .. code-block:: console + + cinder cgsnapshot-delete [ ...] + + .. code-block:: console + + $ cinder cgsnapshot-delete 618d962d-2917-4cca-a3ee-9699373e6625 + +* Delete a Consistency Group: + + .. code-block:: console + + cinder consisgroup-delete [--force] [ ...] + + .. code-block:: console + + $ cinder consisgroup-delete --force 618d962d-2917-4cca-a3ee-9699373e6625 + +* Create a Consistency group from source: + + .. code-block:: console + + cinder consisgroup-create-from-src [--cgsnapshot ] + [--source-cg ] [--name ] [--description ] + + .. code-block:: console + + $ cinder consisgroup-create-from-src --source-cg 25dae184-1f25-412b-b8d7-9a25698fdb6d + + .. code-block:: console + + $ cinder consisgroup-create-from-src --cgsnapshot 618d962d-2917-4cca-a3ee-9699373e6625 + +* You can also create a volume in a consistency group in one step: + + .. code-block:: console + + $ openstack volume create [--consistency-group consistency-group>] + [--description ] [--type ] + [--availability-zone ] [--size ] + + .. code-block:: console + + $ openstack volume create --type volume_type_1 ----consistency-group \ + 1de80c27-3b2f-47a6-91a7-e867cbe36462 --size 1 cgBronzeVol + + +Workload Planner (WLP) +~~~~~~~~~~~~~~~~~~~~~~ + +VMAX Hybrid allows you to manage application storage by using Service Level +Objectives (SLO) using policy based automation rather than the tiering in the +VMAX2. The VMAX Hybrid comes with up to 6 SLO policies defined. Each has a +set of workload characteristics that determine the drive types and mixes +which will be used for the SLO. All storage in the VMAX Array is virtually +provisioned, and all of the pools are created in containers called Storage +Resource Pools (SRP). Typically there is only one SRP, however there can be +more. Therefore, it is the same pool we will provision to but we can provide +different SLO/Workload combinations. + +The SLO capacity is retrieved by interfacing with Unisphere Workload Planner +(WLP). If you do not set up this relationship then the capacity retrieved is +that of the entire SRP. This can cause issues as it can never be an accurate +representation of what storage is available for any given SLO and Workload +combination. + +Enabling WLP on Unisphere +------------------------- + +#. To enable WLP on Unisphere, click on the + :menuselection:`array-->Performance-->Settings`. +#. Set both the :guilabel:`Real Time` and the :guilabel:`Root Cause Analysis`. +#. Click :guilabel:`Register`. + +.. note:: + + This should be set up ahead of time (allowing for several hours of data + collection), so that the Unisphere for VMAX Performance Analyzer can + collect rated metrics for each of the supported element types. + +Using TestSmiProvider to add statistics access point +---------------------------------------------------- + +After enabling WLP you must then enable SMI-S to gain access to the WLP data: + +#. 
Connect to the SMI-S Provider using TestSmiProvider. +#. Navigate to the :guilabel:`Active` menu. +#. Type ``reg`` and enter the noted responses to the questions: + + .. code-block:: console + + (EMCProvider:5989) ? reg + Current list of statistics Access Points: ? + Note: The current list will be empty if there are no existing Access Points. + Add Statistics Access Point {y|n} [n]: y + HostID [l2se0060.lss.emc.com]: ? + Note: Enter the Unisphere for VMAX location using a fully qualified Host ID. + Port [8443]: ? + Note: The Port default is the Unisphere for VMAX default secure port. If the secure port + is different for your Unisphere for VMAX setup, adjust this value accordingly. + User [smc]: ? + Note: Enter the Unisphere for VMAX username. + Password [smc]: ? + Note: Enter the Unisphere for VMAX password. + +#. Type ``reg`` again to view the current list: + + .. code-block:: console + + (EMCProvider:5988) ? reg + Current list of statistics Access Points: + HostIDs: + l2se0060.lss.emc.com + PortNumbers: + 8443 + Users: + smc + Add Statistics Access Point {y|n} [n]: n + + +Attach and detach snapshots +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``Attach snapshot`` and ``Detach snapshot`` are used internally by +non-disruptive backup and backup snapshot. As of the Newton release, +it is possible to back up a volume, but not possible to directly back up +a snapshot. Volume back up functionality has been available ever since backups +were introduced into the Cinder service. The ability to back up a volume +directly is valuable because you can back up a volume in one step. Users can +take snapshots from the volumes as a way to protect their data. These snapshots +reside on the storage backend itself. Providing a way +to backup snapshots directly allows users to protect the snapshots +taken from the volumes on a backup device, separately from the storage +backend. + +There are users who have taken many snapshots and would like a way to protect +these snapshots. The functionality to backup snapshots provides another layer +of data protection. + +Please refer to `backup and restore volumes and +snapshots ` +for more more information. + +Enable attach and detach snapshot functionality +----------------------------------------------- + +#. Ensure that the ``cinder-backup`` service is running. +#. The backup driver for the swift back end performs a volume backup to an + object storage system. To enable the swift backup driver, include the + following option in the ``cinder.conf`` file: + + .. code-block:: yaml + + backup_driver = cinder.backup.drivers.swift + +#. In order to force the volume to run attach and detach on the snapshot + and not the volume you need to put the following key-value pair in the + ``[DEFAULT]`` section of the ``cinder.conf``: + + .. code-block:: console + + backup_use_same_host = True + +.. note:: + + You may need to increase the message queue timeout value which is 60 by + default in the ``[DEFAULT]`` section of the ``cinder.conf``. This is + necessary because the snapshot may take more than this time. + + .. code-block:: console + + rpc_response_timeout = 240 + +Use case 1 - Create a volume backup when the volume is in-use +------------------------------------------------------------- + +#. Create a bootable volume and launch it so the volume status is in-use. +#. Create a backup of the volume, where ``VOLUME`` + is the volume name or volume ``ID``. This will initiate a snapshot attach + and a snapshot detach on a temporary snapshot: + + .. 
code-block:: console + + openstack backup create --force VOLUME + +#. For example: + + .. code-block:: console + + openstack backup create --force cba1ca83-b857-421a-87c3-df81eb9ea8ab + +Use case 2 - Restore a backup of a volume +----------------------------------------- + +#. Restore the backup from Use case 1, where ``BACKUP_ID`` is the identifier of + the backup from Use case 1. + + .. code-block:: console + + openstack backup restore BACKUP_ID + +#. For example: + + .. code-block:: console + + openstack backup restore ec7e17ec-ae3c-4495-9ee6-7f45c9a89572 + +Once complete, launch the back up as an instance, and it should be a +bootable volume. + +Use case 3 - Create a backup of a snapshot +------------------------------------------ + +#. Create a volume. +#. Create a snapshot of the volume. +#. Create a backup of the snapshot, where ``VOLUME`` is the volume name or + volume ID, ``SNAPSHOT_ID`` is the ID of the volume's snapshot. This will + initiate a snapshot attach and a snapshot detach on the snapshot. + + .. code-block:: console + + openstack backup create [--snapshot SNAPSHOT_ID} VOLUME + +#. For example: + + .. code-block:: console + + openstack backup create --snapshot 6ab440c2-80ef-4f16-ac37-2d9db938732c 9fedfc4a-5f25-4fa1-8d8d-d5bec91f72e0 + +Use case 4 - Restore backup of a snapshot +----------------------------------------- + +#. Restore the backup where ``BACKUP_ID`` is the identifier of the backup from + Use case 3. + + .. code-block:: console + + openstack backup restore BACKUP_ID + +#. For example: + + .. code-block:: console + + openstack backup restore ec7e17ec-ae3c-4495-9ee6-7f45c9a89572 + + +All Flash compression support +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +On an All Flash array, the creation of any storage group has a compressed +attribute by default. Setting compression on a storage group does not mean +that all the devices will be immediately compressed. It means that for all +incoming writes compression will be considered. Setting compression ``off`` on +a storage group does not mean that all the devices will be uncompressed. +It means all the writes to compressed tracks will make these tracks +uncompressed. + +.. note:: + + This feature is only applicable for All Flash arrays, 250F, 450F or 850F. + +Use case 1 - Compression disabled create, attach, detach, and delete volume +--------------------------------------------------------------------------- + +#. Create a new volume type called ``VMAX_COMPRESSION_DISABLED``. +#. Set an extra spec ``volume_backend_name``. +#. Set a new extra spec ``storagetype:disablecompression = True``. +#. Create a new volume. +#. Check in Unisphere or symcli to see if the volume + exists in storage group ``OS----CD-SG``, and + compression is disabled on that storage group. +#. Attach the volume to an instance. Check in Unisphere or symcli to see if the + volume exists in storage group + ``OS-----CD-SG``, and + compression is disabled on that storage group. +#. Detach volume from instance. Check in Unisphere or symcli to see if the + volume exists in storage group ``OS----CD-SG``, + and compression is disabled on that storage group. +#. Delete the volume. If this was the last volume in the + ``OS----CD-SG`` storage group, + it should also be deleted. + + +Use case 2 - Compression disabled create, delete snapshot and delete volume +--------------------------------------------------------------------------- + +#. Repeat steps 1-5 of Use case 1. +#. Create a snapshot. The volume should now exist in + ``OS----CD-SG``. +#. Delete the snapshot. 
The volume should be removed from + ``OS----CD-SG``. +#. Delete the volume. If this volume is the last volume in + ``OS----CD-SG``, it should also be deleted. + +Use case 3 - Retype from compression disabled to compression enabled +-------------------------------------------------------------------- + +#. Repeat steps 1-4 of Use case 1. +#. Create a new volume type. For example ``VMAX_COMPRESSION_ENABLED``. +#. Set extra spec ``volume_backend_name`` as before. +#. Set the new extra spec's compression as + ``storagetype:disablecompression = False`` or DO NOT set this extra spec. +#. Retype from volume type ``VMAX_COMPRESSION_DISABLED`` to + ``VMAX_COMPRESSION_ENABLED``. +#. Check in Unisphere or symcli to see if the volume exists in storage group + ``OS----SG``, and compression is enabled on + that storage group. + +.. note:: + If extra spec ``storagetype:disablecompression`` is set on a hybrid, it is + ignored because compression is not a feature on a VMAX3 hybrid. + + +Volume replication support +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Configure the source and target arrays +-------------------------------------- + +#. Configure a synchronous SRDF group between the chosen source and target + arrays for the VMAX cinder driver to use. The source array must correspond + with the ```` entry in the VMAX XML file. +#. Select both the director and the ports for the SRDF emulation to use on + both sides. Bear in mind that network topology is important when choosing + director endpoints. Currently, the only supported mode is `Synchronous`. + + .. note:: + For full failover functionality, the source and target VMAX arrays must be + discovered and managed by the same SMI-S/ECOM server, locally connected + for example. This SMI-S/ ECOM server cannot be embedded - it can be + installed on a physical server or a VM hosted by an ESX server only. + + .. note:: + With both arrays being managed by the one SMI-S server, it is the cloud + storage administrators responsibility to account for a DR scenario where the + management (SMI-S) server goes down as well as the primary array. In that + event, the details and credentials of a back-up SMI-S server can be passed + in to the XML file, and the VMAX cinder driver can be rebooted. It would be + advisable to have the SMI-S server at a third location (separate from both + arrays) if possible. + + .. note:: + If the source and target arrays are not managed by the same management + server (that is, the target array is remotely connected to server), in the + event of a full disaster scenario (for example, the primary array is + completely lost and all connectivity to it is gone), the SMI-S server + would no longer be able to contact the target array. In this scenario, + the volumes would be automatically failed over to the target array, but + administrator intervention would be required to either; configure the + target (remote) array as local to the current SMI-S server, or enter + the details to the XML file of a second SMI-S server, which is locally + connected to the target array, and restart the cinder volume service. + +#. Enable replication in ``/etc/cinder/cinder.conf``. + To enable the replication functionality in VMAX cinder driver, it is + necessary to create a replication volume-type. The corresponding + back-end stanza in the ``cinder.conf`` for this volume-type must then + include a ``replication_device`` parameter. This parameter defines a + single replication target array and takes the form of a list of key + value pairs. + + .. 
code-block:: ini
+
+      enabled_backends = VMAX_FC_REPLICATION
+      [VMAX_FC_REPLICATION]
+      volume_driver = cinder.volume.drivers.emc.emc_vmax_FC.EMCVMAXFCDriver
+      cinder_emc_config_file = /etc/cinder/cinder_emc_config_VMAX_FC_REPLICATION.xml
+      volume_backend_name = VMAX_FC_REPLICATION
+      replication_device = target_device_id:000197811111, remote_port_group:os-failover-pg, remote_pool:SRP_1, rdf_group_label:28_11_07, allow_extend:False
+
+   * ``target_device_id`` is the unique VMAX array serial number of the target
+     array. For full failover functionality, the source and target VMAX arrays
+     must be discovered and managed by the same SMI-S/ECOM server, that is,
+     locally connected. Follow the instructions in the SMI-S release notes.
+
+   * ``remote_port_group`` is the name of a VMAX port group that has been
+     pre-configured to expose volumes managed by this backend in the event
+     of a failover. Make sure that this port group contains either all FC or
+     all iSCSI ports (for a given back end), as appropriate for the
+     configured driver (iSCSI or FC).
+   * ``remote_pool`` is the unique pool name for the given target array.
+   * ``rdf_group_label`` is the name of a VMAX SRDF group (Synchronous) that
+     has been pre-configured between the source and target arrays.
+   * ``allow_extend`` is a flag for allowing the extension of replicated
+     volumes. To extend a volume in an SRDF relationship, the relationship
+     must first be broken; the source and target volumes are then
+     independently extended, and the replication relationship is
+     re-established. As the SRDF link must be severed, due caution should be
+     exercised when performing this operation. If not explicitly set, this
+     flag defaults to ``False``.
+
+   .. note::
+      Service Level and Workload: An attempt will be made to create a storage
+      group on the target array with the same service level and workload
+      combination as the primary. However, if this combination is unavailable
+      on the target (for example, the source array is a Hybrid, the target
+      array is an All Flash, and an SLO that is incompatible with All Flash,
+      such as Bronze, is configured), no SLO will be applied.
+
+   .. note::
+      The VMAX cinder drivers support a single replication target per
+      back end; that is, Concurrent SRDF and Cascaded SRDF are not supported.
+      Ensure there is only a single ``replication_device`` entry per
+      back-end stanza.
+
+#. Create a ``replication-enabled`` volume type. Once the
+   ``replication_device`` parameter has been entered in the VMAX
+   back-end entry in the ``cinder.conf``, a corresponding volume type
+   needs to be created with the ``replication_enabled`` property set. See
+   ``Setup VMAX drivers`` above for details.
+
+   .. code-block:: console
+
+      $ openstack volume type set --property replication_enabled='<is> True' VMAX_FC_REPLICATION
+
+
+Volume replication interoperability with other features
+--------------------------------------------------------
+
+Most features are supported, except for the following:
+
+* There is no OpenStack Consistency Group or Generic Volume Group support
+  for replication-enabled VMAX volumes.
+
+* Storage-assisted retype operations on replication-enabled VMAX volumes
+  (for example, moving from a non-replicated type to a replicated type and
+  vice versa, or moving to another SLO/workload combination) are not
+  supported.
+ +* The image volume cache functionality is supported (enabled by setting + ``image_volume_cache_enabled = True``), but one of two actions must be taken + when creating the cached volume: + + * The first boot volume created on a backend (which will trigger the + cached volume to be created) should be the smallest necessary size. + For example, if the minimum size disk to hold an image is 5GB, create + the first boot volume as 5GB. + * Alternatively, ensure that the ``allow_extend`` option in the + ``replication_device parameter`` is set to ``True``. + + This is because the initial boot volume is created at the minimum required + size for the requested image, and then extended to the user specified size. + + +Failover host +------------- + +In the event of a disaster, or where there is required downtime, upgrade +of the primary array for example, the administrator can issue the failover +host command to failover to the configured target: + +.. code-block:: console + + $ cinder failover-host cinder_host@VMAX_FC_REPLICATION#Diamond+SRP_1+000192800111 + +If the primary array becomes available again, you can initiate a failback +using the same command and specifying ``--backend_id default``: + +.. code-block:: console + + $ cinder failover-host \ + cinder_host@VMAX_FC_REPLICATION#Diamond+SRP_1+000192800111 \ + --backend_id default + + +Volume retype - storage assisted volume migration +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Volume retype with storage assisted migration is supported now for +VMAX3 arrays. Cinder requires that for storage assisted migration, a +volume cannot be retyped across backends. For using storage assisted volume +retype, follow these steps: + +#. Add the parameter ``multi_pool_support`` to the configuration group in the + ``/etc/cinder/cinder.conf`` file and set it to ``True``. + + .. code-block:: console + + [CONF_GROUP_FC] + volume_driver = cinder.volume.drivers.dell_emc.vmax.fc.EMCVMAXFCDriver + cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_FC.xml + volume_backend_name = FC_backend + multi_pool_support = True + +#. Configure a single backend per SRP for the ``VMAX`` (Only VMAX3 arrays). + This is different from the regular configuration where one backend is + configured per service level. + +#. Create the ``/etc/cinder/cinder_emc_config_CONF_GROUP_FC.xml`` and add + the following lines to the XML for VMAX All Flash and Hybrid. + + .. code-block:: console + + + + 1.1.1.1 + 00 + user1 + password1 + + OS-PORTGROUP1-PG + OS-PORTGROUP2-PG + + 111111111111 + SRP_1 + + + .. note:: + There is no need to specify the Service Level and Workload in the XML + file. A single XML file corresponding to the backend is sufficient + instead of creating one each for the desired Service Level and Workload + combination. + +#. Once the backend is configured in the ``cinder.conf`` file and the VMAX + specific configuration XML created, restart the cinder volume service for + the changes to take place. + +#. Run the command ``cinder get-pools --detail`` to query for the pool + information. This should list all the available Service Level and Workload + combinations available for the SRP as pools belonging to the same backend. + +#. Use the following examples of OpenStack commands to create various volume + types. The below example demonstrates creating a volume type for Diamond + Service Level and OLTP workload. + + .. 
code-block:: console + + $ openstack volume type create VMAX_FC_DIAMOND_OLTP + $ openstack volume type set --property volume_backend_name=FC_backend VMAX_FC_DIAMOND_OLTP + $ openstack volume type set --property pool_name=Diamond+OLTP+SRP_1+111111111111 + + .. note:: + Create as many volume types as the number of Service Level and Workload + (available) combinations which you are going to use for provisioning + volumes. The ``pool_name`` is the additional property which has to be set + and is of the format: ``+++``. + This can be obtained from the output of the ``cinder get-pools --detail``. + +#. For migrating a volume from one Service Level or Workload combination to + another, use volume retype with the migration-policy to on-demand. The + target volume type should have the same ``volume_backend_name`` configured + and should have the desired ``pool_name`` to which you are trying to retype + to. + + .. code-block:: console + + $ cinder retype --migration-policy on-demand diff --git a/doc/source/config-reference/block-storage/drivers/emc-vnx-driver.rst b/doc/source/config-reference/block-storage/drivers/emc-vnx-driver.rst new file mode 100644 index 00000000000..855c836c863 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/emc-vnx-driver.rst @@ -0,0 +1,1121 @@ +=================== +Dell EMC VNX driver +=================== + +EMC VNX driver interacts with configured VNX array. It supports +both iSCSI and FC protocol. + +The VNX cinder driver performs the volume operations by +executing Navisphere CLI (NaviSecCLI) which is a command-line interface used +for management, diagnostics, and reporting functions for VNX. It also +supports both iSCSI and FC protocol. + + +System requirements +~~~~~~~~~~~~~~~~~~~ + +- VNX Operational Environment for Block version 5.32 or higher. +- VNX Snapshot and Thin Provisioning license should be activated for VNX. +- Python library ``storops`` to interact with VNX. +- Navisphere CLI v7.32 or higher is installed along with the driver. + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +- Create, delete, attach, and detach volumes. +- Create, list, and delete volume snapshots. +- Create a volume from a snapshot. +- Copy an image to a volume. +- Clone a volume. +- Extend a volume. +- Migrate a volume. +- Retype a volume. +- Get volume statistics. +- Create and delete consistency groups. +- Create, list, and delete consistency group snapshots. +- Modify consistency groups. +- Efficient non-disruptive volume backup. +- Create a cloned consistency group. +- Create a consistency group from consistency group snapshots. +- Replication v2.1 support. +- Generic Group support. + +Preparation +~~~~~~~~~~~ + +This section contains instructions to prepare the Block Storage nodes to +use the EMC VNX driver. You should install the Navisphere CLI and ensure you +have correct zoning configurations. + +Install Navisphere CLI +---------------------- + +Navisphere CLI needs to be installed on all Block Storage nodes within +an OpenStack deployment. You need to download different versions for +different platforms: + +- For Ubuntu x64, DEB is available at `EMC OpenStack + Github `_. + +- For all other variants of Linux, Navisphere CLI is available at + `Downloads for VNX2 + Series `_ or + `Downloads for VNX1 + Series `_. + +Install Python library storops +------------------------------ + +``storops`` is a Python library that interacts with VNX array through +Navisphere CLI. +Use the following command to install the ``storops`` library: + +.. 
code-block:: console + + $ pip install storops + + +Check array software +-------------------- + +Make sure your have the following software installed for certain features: + ++--------------------------------------------+---------------------+ +| Feature | Software Required | ++============================================+=====================+ +| All | ThinProvisioning | ++--------------------------------------------+---------------------+ +| All | VNXSnapshots | ++--------------------------------------------+---------------------+ +| FAST cache support | FASTCache | ++--------------------------------------------+---------------------+ +| Create volume with type ``compressed`` | Compression | ++--------------------------------------------+---------------------+ +| Create volume with type ``deduplicated`` | Deduplication | ++--------------------------------------------+---------------------+ + +**Required software** + +You can check the status of your array software in the :guilabel:`Software` +page of :guilabel:`Storage System Properties`. Here is how it looks like: + +.. figure:: ../../figures/emc-enabler.png + +Network configuration +--------------------- + +For the FC Driver, FC zoning is properly configured between the hosts and +the VNX. Check :ref:`register-fc-port-with-vnx` for reference. + +For the iSCSI Driver, make sure your VNX iSCSI port is accessible by +your hosts. Check :ref:`register-iscsi-port-with-vnx` for reference. + +You can use ``initiator_auto_registration = True`` configuration to avoid +registering the ports manually. Check the detail of the configuration in +:ref:`emc-vnx-conf` for reference. + +If you are trying to setup multipath, refer to :ref:`multipath-setup`. + + +.. _emc-vnx-conf: + +Back-end configuration +~~~~~~~~~~~~~~~~~~~~~~ + + +Make the following changes in the ``/etc/cinder/cinder.conf`` file. + +Minimum configuration +--------------------- + +Here is a sample of minimum back-end configuration. See the following sections +for the detail of each option. +Set ``storage_protocol = iscsi`` if iSCSI protocol is used. + +.. code-block:: ini + + [DEFAULT] + enabled_backends = vnx_array1 + + [vnx_array1] + san_ip = 10.10.72.41 + san_login = sysadmin + san_password = sysadmin + naviseccli_path = /opt/Navisphere/bin/naviseccli + volume_driver = cinder.volume.drivers.dell_emc.vnx.driver.VNXDriver + initiator_auto_registration = True + storage_protocol = fc + +Multiple back-end configuration +------------------------------- +Here is a sample of a minimum back-end configuration. See following sections +for the detail of each option. +Set ``storage_protocol = iscsi`` if iSCSI protocol is used. + +.. code-block:: ini + + [DEFAULT] + enabled_backends = backendA, backendB + + [backendA] + storage_vnx_pool_names = Pool_01_SAS, Pool_02_FLASH + san_ip = 10.10.72.41 + storage_vnx_security_file_dir = /etc/secfile/array1 + naviseccli_path = /opt/Navisphere/bin/naviseccli + volume_driver = cinder.volume.drivers.dell_emc.vnx.driver.VNXDriver + initiator_auto_registration = True + storage_protocol = fc + + [backendB] + storage_vnx_pool_names = Pool_02_SAS + san_ip = 10.10.26.101 + san_login = username + san_password = password + naviseccli_path = /opt/Navisphere/bin/naviseccli + volume_driver = cinder.volume.drivers.dell_emc.vnx.driver.VNXDriver + initiator_auto_registration = True + storage_protocol = fc + +The value of the option ``storage_protocol`` can be either ``fc`` or ``iscsi``, +which is case insensitive. 
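+
+After updating ``cinder.conf`` with the back-end stanzas, restart the
+``cinder-volume`` service so that the new configuration is loaded. As a quick
+sanity check (an example only, not specific to VNX), you can list the volume
+services and confirm that each configured back end is reported as ``up``:
+
+.. code-block:: console
+
+   $ openstack volume service list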
+ +For more details on multiple back ends, see `Configure multiple-storage +back ends `_ + +Required configurations +----------------------- + +**IP of the VNX Storage Processors** + +Specify SP A or SP B IP to connect: + +.. code-block:: ini + + san_ip = + +**VNX login credentials** + +There are two ways to specify the credentials. + +- Use plain text username and password. + + Supply for plain username and password: + + .. code-block:: ini + + san_login = + san_password = + storage_vnx_authentication_type = global + + Valid values for ``storage_vnx_authentication_type`` are: ``global`` + (default), ``local``, and ``ldap``. + +- Use Security file. + + This approach avoids the plain text password in your cinder + configuration file. Supply a security file as below: + + .. code-block:: ini + + storage_vnx_security_file_dir = + +Check Unisphere CLI user guide or :ref:`authenticate-by-security-file` +for how to create a security file. + +**Path to your Unisphere CLI** + +Specify the absolute path to your naviseccli: + +.. code-block:: ini + + naviseccli_path = /opt/Navisphere/bin/naviseccli + +**Driver's storage protocol** + +- For the FC Driver, add the following option: + + .. code-block:: ini + + volume_driver = cinder.volume.drivers.dell_emc.vnx.driver.VNXDriver + storage_protocol = fc + +- For iSCSI Driver, add the following option: + + .. code-block:: ini + + volume_driver = cinder.volume.drivers.dell_emc.vnx.driver.VNXDriver + storage_protocol = iscsi + +Optional configurations +~~~~~~~~~~~~~~~~~~~~~~~ + +VNX pool names +-------------- + +Specify the list of pools to be managed, separated by commas. They should +already exist in VNX. + +.. code-block:: ini + + storage_vnx_pool_names = pool 1, pool 2 + +If this value is not specified, all pools of the array will be used. + +**Initiator auto registration** + +When ``initiator_auto_registration`` is set to ``True``, the driver will +automatically register initiators to all working target ports of the VNX array +during volume attaching (The driver will skip those initiators that have +already been registered) if the option ``io_port_list`` is not specified in +the ``cinder.conf`` file. + +If the user wants to register the initiators with some specific ports but not +register with the other ports, this functionality should be disabled. + +When a comma-separated list is given to ``io_port_list``, the driver will only +register the initiator to the ports specified in the list and only return +target port(s) which belong to the target ports in the ``io_port_list`` instead +of all target ports. + +- Example for FC ports: + + .. code-block:: ini + + io_port_list = a-1,B-3 + + ``a`` or ``B`` is *Storage Processor*, number ``1`` and ``3`` are + *Port ID*. + +- Example for iSCSI ports: + + .. code-block:: ini + + io_port_list = a-1-0,B-3-0 + + ``a`` or ``B`` is *Storage Processor*, the first numbers ``1`` and ``3`` are + *Port ID* and the second number ``0`` is *Virtual Port ID* + +.. note:: + + - Rather than de-registered, the registered ports will be simply + bypassed whatever they are in ``io_port_list`` or not. + + - The driver will raise an exception if ports in ``io_port_list`` + do not exist in VNX during startup. + +Force delete volumes in storage group +------------------------------------- + +Some ``available`` volumes may remain in storage group on the VNX array due to +some OpenStack timeout issue. But the VNX array do not allow the user to delete +the volumes which are in storage group. 
Option +``force_delete_lun_in_storagegroup`` is introduced to allow the user to delete +the ``available`` volumes in this tricky situation. + +When ``force_delete_lun_in_storagegroup`` is set to ``True`` in the back-end +section, the driver will move the volumes out of the storage groups and then +delete them if the user tries to delete the volumes that remain in the storage +group on the VNX array. + +The default value of ``force_delete_lun_in_storagegroup`` is ``False``. + +Over subscription in thin provisioning +-------------------------------------- + +Over subscription allows that the sum of all volume's capacity (provisioned +capacity) to be larger than the pool's total capacity. + +``max_over_subscription_ratio`` in the back-end section is the ratio of +provisioned capacity over total capacity. + +The default value of ``max_over_subscription_ratio`` is 20.0, which means +the provisioned capacity can be 20 times of the total capacity. +If the value of this ratio is set larger than 1.0, the provisioned +capacity can exceed the total capacity. + +Storage group automatic deletion +-------------------------------- + +For volume attaching, the driver has a storage group on VNX for each compute +node hosting the vm instances which are going to consume VNX Block Storage +(using compute node's host name as storage group's name). All the volumes +attached to the VM instances in a compute node will be put into the storage +group. If ``destroy_empty_storage_group`` is set to ``True``, the driver will +remove the empty storage group after its last volume is detached. For data +safety, it does not suggest to set ``destroy_empty_storage_group=True`` unless +the VNX is exclusively managed by one Block Storage node because consistent +``lock_path`` is required for operation synchronization for this behavior. + +Initiator auto deregistration +----------------------------- + +Enabling storage group automatic deletion is the precondition of this function. +If ``initiator_auto_deregistration`` is set to ``True`` is set, the driver will +deregister all FC and iSCSI initiators of the host after its storage group is +deleted. + +FC SAN auto zoning +------------------ + +The EMC VNX driver supports FC SAN auto zoning when ``ZoneManager`` is +configured and ``zoning_mode`` is set to ``fabric`` in ``cinder.conf``. +For ZoneManager configuration, refer to :doc:`../fc-zoning`. + +Volume number threshold +----------------------- + +In VNX, there is a limitation on the number of pool volumes that can be created +in the system. When the limitation is reached, no more pool volumes can be +created even if there is remaining capacity in the storage pool. In other +words, if the scheduler dispatches a volume creation request to a back end that +has free capacity but reaches the volume limitation, the creation fails. + +The default value of ``check_max_pool_luns_threshold`` is ``False``. When +``check_max_pool_luns_threshold=True``, the pool-based back end will check the +limit and will report 0 free capacity to the scheduler if the limit is reached. +So the scheduler will be able to skip this kind of pool-based back end that +runs out of the pool volume number. + +iSCSI initiators +---------------- + +``iscsi_initiators`` is a dictionary of IP addresses of the iSCSI +initiator ports on OpenStack compute and block storage nodes which want to +connect to VNX via iSCSI. 
If this option is configured, the driver will +leverage this information to find an accessible iSCSI target portal for the +initiator when attaching volume. Otherwise, the iSCSI target portal will be +chosen in a relative random way. + +.. note:: + + This option is only valid for iSCSI driver. + +Here is an example. VNX will connect ``host1`` with ``10.0.0.1`` and +``10.0.0.2``. And it will connect ``host2`` with ``10.0.0.3``. + +The key name (``host1`` in the example) should be the output of +:command:`hostname` command. + +.. code-block:: ini + + iscsi_initiators = {"host1":["10.0.0.1", "10.0.0.2"],"host2":["10.0.0.3"]} + +Default timeout +--------------- + +Specify the timeout in minutes for operations like LUN migration, LUN creation, +etc. For example, LUN migration is a typical long running operation, which +depends on the LUN size and the load of the array. An upper bound in the +specific deployment can be set to avoid unnecessary long wait. + +The default value for this option is ``infinite``. + +.. code-block:: ini + + default_timeout = 60 + +Max LUNs per storage group +-------------------------- + +The ``max_luns_per_storage_group`` specify the maximum number of LUNs in a +storage group. Default value is 255. It is also the maximum value supported by +VNX. + +Ignore pool full threshold +-------------------------- + +If ``ignore_pool_full_threshold`` is set to ``True``, driver will force LUN +creation even if the full threshold of pool is reached. Default to ``False``. + +Extra spec options +~~~~~~~~~~~~~~~~~~ + +Extra specs are used in volume types created in Block Storage as the preferred +property of the volume. + +The Block Storage scheduler will use extra specs to find the suitable back end +for the volume and the Block Storage driver will create the volume based on the +properties specified by the extra spec. + +Use the following command to create a volume type: + +.. code-block:: console + + $ openstack volume type create demoVolumeType + +Use the following command to update the extra spec of a volume type: + +.. code-block:: console + + $ openstack volume type set --property provisioning:type=thin thick_provisioning_support=' True' demoVolumeType + +The following sections describe the VNX extra keys. + +Provisioning type +----------------- + +- Key: ``provisioning:type`` + +- Possible Values: + + - ``thick`` + + Volume is fully provisioned. + + Run the following commands to create a ``thick`` volume type: + + .. code-block:: console + + $ openstack volume type create ThickVolumeType + $ openstack volume type set --property provisioning:type=thick thick_provisioning_support=' True' ThickVolumeType + + - ``thin`` + + Volume is virtually provisioned. + + Run the following commands to create a ``thin`` volume type: + + .. code-block:: console + + $ openstack volume type create ThinVolumeType + $ openstack volume type set --property provisioning:type=thin thin_provisioning_support=' True' ThinVolumeType + + - ``deduplicated`` + + Volume is ``thin`` and deduplication is enabled. The administrator shall + go to VNX to configure the system level deduplication settings. To + create a deduplicated volume, the VNX Deduplication license must be + activated on VNX, and specify ``deduplication_support=True`` to let Block + Storage scheduler find the proper volume back end. + + Run the following commands to create a ``deduplicated`` volume type: + + .. 
code-block:: console + + $ openstack volume type create DeduplicatedVolumeType + $ openstack volume type set --property provisioning:type=deduplicated deduplicated_support=' True' DeduplicatedVolumeType + + - ``compressed`` + + Volume is ``thin`` and compression is enabled. The administrator shall go + to the VNX to configure the system level compression settings. To create + a compressed volume, the VNX Compression license must be activated on + VNX, and use ``compression_support=True`` to let Block Storage scheduler + find a volume back end. VNX does not support creating snapshots on a + compressed volume. + + Run the following commands to create a ``compressed`` volume type: + + .. code-block:: console + + $ openstack volume type create CompressedVolumeType + $ openstack volume type set --property provisioning:type=compressed compression_support=' True' CompressedVolumeType + +- Default: ``thick`` + +.. note:: + + ``provisioning:type`` replaces the old spec key ``storagetype:provisioning``. + The latter one is obsolete since the *Mitaka* release. + +Storage tiering support +----------------------- + +- Key: ``storagetype:tiering`` +- Possible values: + + - ``StartHighThenAuto`` + - ``Auto`` + - ``HighestAvailable`` + - ``LowestAvailable`` + - ``NoMovement`` + +- Default: ``StartHighThenAuto`` + +VNX supports fully automated storage tiering which requires the FAST license +activated on the VNX. The OpenStack administrator can use the extra spec key +``storagetype:tiering`` to set the tiering policy of a volume and use the key +``fast_support=' True'`` to let Block Storage scheduler find a volume back +end which manages a VNX with FAST license activated. Here are the five +supported values for the extra spec key ``storagetype:tiering``: + +Run the following commands to create a volume type with tiering policy: + +.. code-block:: console + + $ openstack volume type create ThinVolumeOnAutoTier + $ openstack volume type set --property provisioning:type=thin storagetype:tiering=Auto fast_support=' True' ThinVolumeOnAutoTier + +.. note:: + + The tiering policy cannot be applied to a deduplicated volume. Tiering + policy of the deduplicated LUN align with the settings of the pool. + +FAST cache support +------------------ + +- Key: ``fast_cache_enabled`` + +- Possible values: + + - ``True`` + + - ``False`` + +- Default: ``False`` + +VNX has FAST Cache feature which requires the FAST Cache license activated on +the VNX. Volume will be created on the backend with FAST cache enabled when +`` True`` is specified. + +Pool name +--------- + +- Key: ``pool_name`` + +- Possible values: name of the storage pool managed by cinder + +- Default: None + +If the user wants to create a volume on a certain storage pool in a back end +that manages multiple pools, a volume type with a extra spec specified storage +pool should be created first, then the user can use this volume type to create +the volume. + +Run the following commands to create the volume type: + +.. code-block:: console + + $ openstack volume type create HighPerf + $ openstack volume type set --property pool_name=Pool_02_SASFLASH volume_backend_name=vnx_41 HighPerf + +Obsolete extra specs +-------------------- + +.. 
note:: + + *DO NOT* use the following obsolete extra spec keys: + + - ``storagetype:provisioning`` + - ``storagetype:pool`` + + +Advanced features +~~~~~~~~~~~~~~~~~ + +Snap copy +--------- + +- Metadata Key: ``snapcopy`` +- Possible Values: + + - ``True`` or ``true`` + - ``False`` or ``false`` + +- Default: `False` + +VNX driver supports snap copy which accelerates the process for +creating a copied volume. + +By default, the driver will use `asynchronous migration support`_, which will +start a VNX migration session. When snap copy is used, driver creates a +snapshot and mounts it as a volume for the 2 kinds of operations which will be +instant even for large volumes. + +To enable this functionality, append ``--metadata snapcopy=True`` +when creating cloned volume or creating volume from snapshot. + +.. code-block:: console + + $ cinder create --source-volid --name "cloned_volume" --metadata snapcopy=True + +Or + +.. code-block:: console + + $ cinder create --snapshot-id --name "vol_from_snapshot" --metadata snapcopy=True + + +The newly created volume is a snap copy instead of +a full copy. If a full copy is needed, retype or migrate can be used +to convert the snap-copy volume to a full-copy volume which may be +time-consuming. + +You can determine whether the volume is a snap-copy volume or not by +showing its metadata. If the ``snapcopy`` in metadata is ``True`` or ``true``, +the volume is a snap-copy volume. Otherwise, it is a full-copy volume. + +.. code-block:: console + + $ cinder metadata-show + +**Constraints** + +- The number of snap-copy volumes created from a single source volume is + limited to 255 at one point in time. +- The source volume which has snap-copy volume can not be deleted or migrated. +- snapcopy volume will be change to full-copy volume after host-assisted or + storage-assisted migration. +- snapcopy volume can not be added to consisgroup because of VNX limitation. + +Efficient non-disruptive volume backup +-------------------------------------- + +The default implementation in Block Storage for non-disruptive volume backup is +not efficient since a cloned volume will be created during backup. + +The approach of efficient backup is to create a snapshot for the volume and +connect this snapshot (a mount point in VNX) to the Block Storage host for +volume backup. This eliminates migration time involved in volume clone. + +**Constraints** + +- Backup creation for a snap-copy volume is not allowed if the volume + status is ``in-use`` since snapshot cannot be taken from this volume. + +Configurable migration rate +--------------------------- + +VNX cinder driver is leveraging the LUN migration from the VNX. LUN migration +is involved in cloning, migrating, retyping, and creating volume from snapshot. +When admin set ``migrate_rate`` in volume's ``metadata``, VNX driver can start +migration with specified rate. The available values for the ``migrate_rate`` +are ``high``, ``asap``, ``low`` and ``medium``. + +The following is an example to set ``migrate_rate`` to ``asap``: + +.. code-block:: console + + $ cinder metadata set migrate_rate=asap + +After set, any cinder volume operations involving VNX LUN migration will +take the value as the migration rate. To restore the migration rate to +default, unset the metadata as following: + +.. code-block:: console + + $ cinder metadata unset migrate_rate + +.. note:: + + Do not use the ``asap`` migration rate when the system is in production, as the normal + host I/O may be interrupted. 
Use asap only when the system is offline + (free of any host-level I/O). + +Replication v2.1 support +------------------------ + +Cinder introduces Replication v2.1 support in Mitaka, it supports +fail-over and fail-back replication for specific back end. In VNX cinder +driver, **MirrorView** is used to set up replication for the volume. + +To enable this feature, you need to set configuration in ``cinder.conf`` as +below: + +.. code-block:: ini + + replication_device = backend_id:, + san_ip:192.168.1.2, + san_login:admin, + san_password:admin, + naviseccli_path:/opt/Navisphere/bin/naviseccli, + storage_vnx_authentication_type:global, + storage_vnx_security_file_dir: + +Currently, only synchronized mode **MirrorView** is supported, and one volume +can only have 1 secondary storage system. Therefore, you can have only one +``replication_device`` presented in driver configuration section. + +To create a replication enabled volume, you need to create a volume type: + +.. code-block:: console + + $ openstack volume type create replication-type + $ openstack volume type set --property replication_enabled=" True" replication-type + +And then create volume with above volume type: + +.. code-block:: console + + $ openstack volume create replication-volume --type replication-type --size 1 + +**Supported operations** + +- Create volume +- Create cloned volume +- Create volume from snapshot +- Fail-over volume: + + .. code-block:: console + + $ cinder failover-host --backend_id + +- Fail-back volume: + + .. code-block:: console + + $ cinder failover-host --backend_id default + +**Requirements** + +- 2 VNX systems must be in same domain. +- For iSCSI MirrorView, user needs to setup iSCSI connection before enable + replication in Cinder. +- For FC MirrorView, user needs to zone specific FC ports from 2 + VNX system together. +- MirrorView Sync enabler( **MirrorView/S** ) installed on both systems. +- Write intent log enabled on both VNX systems. + +For more information on how to configure, please refer to: `MirrorView-Knowledgebook:-Releases-30-–-33 `_ + +Asynchronous migration support +------------------------------ + +VNX Cinder driver now supports asynchronous migration during volume cloning. + +The driver now using asynchronous migration when creating a volume from source +as the default cloning method. The driver will return immediately after the +migration session starts on the VNX, which dramatically reduces the time before +a volume is available for use. + +To disable this feature, user can add ``--metadata async_migrate=False`` when +creating new volume from source. + + +Best practice +~~~~~~~~~~~~~ + +.. _multipath-setup: + +Multipath setup +--------------- + +Enabling multipath volume access is recommended for robust data access. +The major configuration includes: + +#. Install ``multipath-tools``, ``sysfsutils`` and ``sg3-utils`` on the + nodes hosting compute and ``cinder-volume`` services. Check + the operating system manual for the system distribution for specific + installation steps. For Red Hat based distributions, they should be + ``device-mapper-multipath``, ``sysfsutils`` and ``sg3_utils``. + +#. Specify ``use_multipath_for_image_xfer=true`` in the ``cinder.conf`` file + for each FC/iSCSI back end. + +#. Specify ``iscsi_use_multipath=True`` in ``libvirt`` section of the + ``nova.conf`` file. This option is valid for both iSCSI and FC driver. + +For multipath-tools, here is an EMC recommended sample of +``/etc/multipath.conf`` file. 
+
+``user_friendly_names`` is not specified in the configuration, so it takes
+the default value ``no``. It is not recommended to set it to ``yes``,
+because that may cause operations such as VM live migration to fail.
+
+.. code-block:: vim
+
+   blacklist {
+       # Skip the files under /dev that are definitely not FC/iSCSI devices
+       # Different systems may need different customization
+       devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
+       devnode "^hd[a-z][0-9]*"
+       devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
+
+       # Skip LUNZ device from VNX
+       device {
+           vendor "DGC"
+           product "LUNZ"
+       }
+   }
+
+   defaults {
+       user_friendly_names no
+       flush_on_last_del yes
+   }
+
+   devices {
+       # Device attributes for EMC CLARiiON and VNX series ALUA
+       device {
+           vendor "DGC"
+           product ".*"
+           product_blacklist "LUNZ"
+           path_grouping_policy group_by_prio
+           path_selector "round-robin 0"
+           path_checker emc_clariion
+           features "1 queue_if_no_path"
+           hardware_handler "1 alua"
+           prio alua
+           failback immediate
+       }
+   }
+
+.. note::
+
+   When multipath is used in OpenStack, multipath faulty devices may appear
+   on Nova compute nodes due to various issues (bug 1336683 is a typical
+   example).
+
+A solution to completely avoid faulty devices has not been found yet.
+``faulty_device_cleanup.py`` mitigates this issue when VNX iSCSI storage is
+used. Cloud administrators can deploy the script on all compute nodes and
+use a cron job to run it periodically on each node, so that faulty devices
+do not stay around for long. Refer to the VNX faulty device cleanup
+documentation for detailed usage and the script itself.
+
+Restrictions and limitations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+iSCSI port cache
+----------------
+
+The EMC VNX iSCSI driver caches the iSCSI port information. After changing
+the iSCSI port configuration, restart the ``cinder-volume`` service or wait
+for a period of time (as configured by ``periodic_interval`` in the
+``cinder.conf`` file) before performing any volume attachment operation.
+Otherwise, the attachment may fail because the old iSCSI port configuration
+is still used.
+
+No extending for volume with snapshots
+--------------------------------------
+
+VNX does not support extending a thick volume that has a snapshot. If the
+user tries to extend a volume that has a snapshot, the status of the volume
+changes to ``error_extending``.
+
+Limitations for deploying cinder on compute node
+--------------------------------------------------
+
+It is not recommended to deploy the driver on a compute node if ``cinder
+upload-to-image --force True`` is used against an in-use volume. Otherwise,
+``cinder upload-to-image --force True`` will terminate the VM instance's
+data access to the volume.
+
+Storage group with host names in VNX
+------------------------------------
+
+When the driver notices that there is no existing storage group that has the
+host name as the storage group name, it will create the storage group and
+also add the compute node's or Block Storage node's registered initiators
+into the storage group.
+
+If the driver notices that the storage group already exists, it will assume
+that the registered initiators have also been put into it and will skip the
+operations above for better performance.
+
+It is recommended that the storage administrator does not create the storage
+group manually and instead relies on the driver for the preparation.
If the +storage administrator needs to create the storage group manually for some +special requirements, the correct registered initiators should be put into the +storage group as well (otherwise the following volume attaching operations will +fail). + +EMC storage-assisted volume migration +------------------------------------- + +EMC VNX driver supports storage-assisted volume migration, when the user starts +migrating with ``cinder migrate --force-host-copy False `` or +``cinder migrate ``, cinder will try to leverage the VNX's +native volume migration functionality. + +In following scenarios, VNX storage-assisted volume migration will not be +triggered: + +- ``in-use`` volume migration between back ends with different storage + protocol, for example, FC and iSCSI. +- Volume is to be migrated across arrays. + +Appendix +~~~~~~~~ + +.. _authenticate-by-security-file: + +Authenticate by security file +----------------------------- + +VNX credentials are necessary when the driver connects to the VNX system. +Credentials in ``global``, ``local`` and ``ldap`` scopes are supported. There +are two approaches to provide the credentials. + +The recommended one is using the Navisphere CLI security file to provide the +credentials which can get rid of providing the plain text credentials in the +configuration file. Following is the instruction on how to do this. + +#. Find out the Linux user id of the ``cinder-volume`` processes. Assuming the + ``cinder-volume`` service is running by the account ``cinder``. + +#. Run ``su`` as root user. + +#. In ``/etc/passwd`` file, change + ``cinder:x:113:120::/var/lib/cinder:/bin/false`` + to ``cinder:x:113:120::/var/lib/cinder:/bin/bash`` (This temporary change is + to make step 4 work.) + +#. Save the credentials on behalf of ``cinder`` user to a security file + (assuming the array credentials are ``admin/admin`` in ``global`` scope). In + the command below, the ``-secfilepath`` switch is used to specify the + location to save the security file. + + .. code-block:: console + + # su -l cinder -c \ + '/opt/Navisphere/bin/naviseccli -AddUserSecurity -user admin -password admin -scope 0 -secfilepath ' + +#. Change ``cinder:x:113:120::/var/lib/cinder:/bin/bash`` back to + ``cinder:x:113:120::/var/lib/cinder:/bin/false`` in ``/etc/passwd`` file. + +#. Remove the credentials options ``san_login``, ``san_password`` and + ``storage_vnx_authentication_type`` from ``cinder.conf`` file. (normally + it is ``/etc/cinder/cinder.conf`` file). Add option + ``storage_vnx_security_file_dir`` and set its value to the directory path of + your security file generated in the above step. Omit this option if + ``-secfilepath`` is not used in the above step. + +#. Restart the ``cinder-volume`` service to validate the change. + + +.. _register-fc-port-with-vnx: + +Register FC port with VNX +------------------------- + +This configuration is only required when ``initiator_auto_registration=False``. + +To access VNX storage, the Compute nodes should be registered on VNX first if +initiator auto registration is not enabled. + +To perform ``Copy Image to Volume`` and ``Copy Volume to Image`` operations, +the nodes running the ``cinder-volume`` service (Block Storage nodes) must be +registered with the VNX as well. + +The steps mentioned below are for the compute nodes. Follow the same +steps for the Block Storage nodes also (The steps can be skipped if initiator +auto registration is enabled). + +#. 
Assume ``20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2`` is the WWN of a + FC initiator port name of the compute node whose host name and IP are + ``myhost1`` and ``10.10.61.1``. Register + ``20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2`` in Unisphere: + +#. Log in to :guilabel:`Unisphere`, go to + :menuselection:`FNM0000000000 > Hosts > Initiators`. + +#. Refresh and wait until the initiator + ``20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2`` with SP Port ``A-1`` + appears. + +#. Click the :guilabel:`Register` button, select :guilabel:`CLARiiON/VNX` + and enter the host name (which is the output of the :command:`hostname` + command) and IP address: + + - Hostname: ``myhost1`` + + - IP: ``10.10.61.1`` + + - Click :guilabel:`Register`. + +#. Then host ``10.10.61.1`` will appear under + :menuselection:`Hosts > Host List` as well. + +#. Register the ``wwn`` with more ports if needed. + +.. _register-iscsi-port-with-vnx: + +Register iSCSI port with VNX +---------------------------- + +This configuration is only required when ``initiator_auto_registration=False``. + +To access VNX storage, the compute nodes should be registered on VNX first if +initiator auto registration is not enabled. + +To perform ``Copy Image to Volume`` and ``Copy Volume to Image`` operations, +the nodes running the ``cinder-volume`` service (Block Storage nodes) must be +registered with the VNX as well. + +The steps mentioned below are for the compute nodes. Follow the +same steps for the Block Storage nodes also (The steps can be skipped if +initiator auto registration is enabled). + +#. On the compute node with IP address ``10.10.61.1`` and host name ``myhost1``, + execute the following commands (assuming ``10.10.61.35`` is the iSCSI + target): + + #. Start the iSCSI initiator service on the node: + + .. code-block:: console + + # /etc/init.d/open-iscsi start + + #. Discover the iSCSI target portals on VNX: + + .. code-block:: console + + # iscsiadm -m discovery -t st -p 10.10.61.35 + + #. Change directory to ``/etc/iscsi`` : + + .. code-block:: console + + # cd /etc/iscsi + + #. Find out the ``iqn`` of the node: + + .. code-block:: console + + # more initiatorname.iscsi + +#. Log in to :guilabel:`VNX` from the compute node using the target + corresponding to the SPA port: + + .. code-block:: console + + # iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l + +#. Assume ``iqn.1993-08.org.debian:01:1a2b3c4d5f6g`` is the initiator name of + the compute node. Register ``iqn.1993-08.org.debian:01:1a2b3c4d5f6g`` in + Unisphere: + + #. Log in to :guilabel:`Unisphere`, go to + :menuselection:`FNM0000000000 > Hosts > Initiators`. + + #. Refresh and wait until the initiator + ``iqn.1993-08.org.debian:01:1a2b3c4d5f6g`` with SP Port ``A-8v0`` + appears. + + #. Click the :guilabel:`Register` button, select :guilabel:`CLARiiON/VNX` + and enter the host name + (which is the output of the :command:`hostname` command) and IP address: + + - Hostname: ``myhost1`` + + - IP: ``10.10.61.1`` + + - Click :guilabel:`Register`. + + #. Then host ``10.10.61.1`` will appear under + :menuselection:`Hosts > Host List` as well. + +#. Log out :guilabel:`iSCSI` on the node: + + .. code-block:: console + + # iscsiadm -m node -u + +#. Log in to :guilabel:`VNX` from the compute node using the target + corresponding to the SPB port: + + .. code-block:: console + + # iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.61.36 -l + +#. In ``Unisphere``, register the initiator with the SPB port. + +#. 
Log out :guilabel:`iSCSI` on the node: + + .. code-block:: console + + # iscsiadm -m node -u + +#. Register the ``iqn`` with more ports if needed. diff --git a/doc/source/config-reference/block-storage/drivers/emc-xtremio-driver.rst b/doc/source/config-reference/block-storage/drivers/emc-xtremio-driver.rst new file mode 100644 index 00000000000..bb394f9d425 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/emc-xtremio-driver.rst @@ -0,0 +1,251 @@ +============================================== +EMC XtremIO Block Storage driver configuration +============================================== + +The high performance XtremIO All Flash Array (AFA) offers Block Storage +services to OpenStack. Using the driver, OpenStack Block Storage hosts +can connect to an XtremIO Storage cluster. + +This section explains how to configure and connect the block +storage nodes to an XtremIO storage cluster. + +Support matrix +~~~~~~~~~~~~~~ + +XtremIO version 4.x is supported. + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +- Create, delete, clone, attach, and detach volumes. + +- Create and delete volume snapshots. + +- Create a volume from a snapshot. + +- Copy an image to a volume. + +- Copy a volume to an image. + +- Extend a volume. + +- Manage and unmanage a volume. + +- Manage and unmanage a snapshot. + +- Get volume statistics. + +- Create, modify, delete, and list consistency groups. + +- Create, modify, delete, and list snapshots of consistency groups. + +- Create consistency group from consistency group or consistency group + snapshot. + +- Volume Migration (host assisted) + +XtremIO Block Storage driver configuration +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Edit the ``cinder.conf`` file by adding the configuration below under +the [DEFAULT] section of the file in case of a single back end or +under a separate section in case of multiple back ends (for example +[XTREMIO]). The configuration file is usually located under the +following path ``/etc/cinder/cinder.conf``. + +.. include:: ../../tables/cinder-emc_xtremio.rst + +For a configuration example, refer to the configuration +:ref:`emc_extremio_configuration_example`. + +XtremIO driver name +------------------- + +Configure the driver name by setting the following parameter in the +``cinder.conf`` file: + +- For iSCSI: + + .. code-block:: ini + + volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOISCSIDriver + +- For Fibre Channel: + + .. code-block:: ini + + volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver + +XtremIO management server (XMS) IP +---------------------------------- + +To retrieve the management IP, use the :command:`show-xms` CLI command. + +Configure the management IP by adding the following parameter: + +.. code-block:: ini + + san_ip = XMS Management IP + +XtremIO cluster name +-------------------- + +In XtremIO version 4.0, a single XMS can manage multiple cluster back ends. In +such setups, the administrator is required to specify the cluster name (in +addition to the XMS IP). Each cluster must be defined as a separate back end. + +To retrieve the cluster name, run the :command:`show-clusters` CLI command. + +Configure the cluster name by adding the following parameter: + +.. code-block:: ini + + xtremio_cluster_name = Cluster-Name + +.. note:: + + When a single cluster is managed in XtremIO version 4.0, the cluster name is + not required. 
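+
+For example, when one XMS manages two clusters, each cluster can be defined
+as its own back-end section, as roughly sketched below (the back-end section
+names, cluster names, and ``enabled_backends`` value are illustrative only):
+
+.. code-block:: ini
+
+   [DEFAULT]
+   enabled_backends = XTREMIO1,XTREMIO2
+
+   [XTREMIO1]
+   volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOISCSIDriver
+   san_ip = XMS Management IP
+   xtremio_cluster_name = Cluster01
+   san_login = XMS username
+   san_password = XMS username password
+   volume_backend_name = XtremIO1
+
+   [XTREMIO2]
+   volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOISCSIDriver
+   san_ip = XMS Management IP
+   xtremio_cluster_name = Cluster02
+   san_login = XMS username
+   san_password = XMS username password
+   volume_backend_name = XtremIO2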
+ +XtremIO user credentials +------------------------ + +OpenStack Block Storage requires an XtremIO XMS user with administrative +privileges. XtremIO recommends creating a dedicated OpenStack user account that +holds an administrative user role. + +Refer to the XtremIO User Guide for details on user account management. + +Create an XMS account using either the XMS GUI or the +:command:`add-user-account` CLI command. + +Configure the user credentials by adding the following parameters: + +.. code-block:: ini + + san_login = XMS username + san_password = XMS username password + +Multiple back ends +~~~~~~~~~~~~~~~~~~ + +Configuring multiple storage back ends enables you to create several back-end +storage solutions that serve the same OpenStack Compute resources. + +When a volume is created, the scheduler selects the appropriate back end to +handle the request, according to the specified volume type. + +Setting thin provisioning and multipathing parameters +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +To support thin provisioning and multipathing in the XtremIO Array, the +following parameters from the Nova and Cinder configuration files should be +modified as follows: + +- Thin Provisioning + + All XtremIO volumes are thin provisioned. The default value of 20 should be + maintained for the ``max_over_subscription_ratio`` parameter. + + The ``use_cow_images`` parameter in the ``nova.conf`` file should be set to + ``False`` as follows: + + .. code-block:: ini + + use_cow_images = False + +- Multipathing + + The ``use_multipath_for_image_xfer`` parameter in the ``cinder.conf`` file + should be set to ``True`` as follows: + + .. code-block:: ini + + use_multipath_for_image_xfer = True + + +Image service optimization +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Limit the number of copies (XtremIO snapshots) taken from each image cache. + +.. code-block:: ini + + xtremio_volumes_per_glance_cache = 100 + +The default value is ``100``. A value of ``0`` ignores the limit and defers to +the array maximum as the effective limit. + +SSL certification +~~~~~~~~~~~~~~~~~ + +To enable SSL certificate validation, modify the following option in the +``cinder.conf`` file: + +.. code-block:: ini + + driver_ssl_cert_verify = true + +By default, SSL certificate validation is disabled. + +To specify a non-default path to ``CA_Bundle`` file or directory with +certificates of trusted CAs: + + +.. code-block:: ini + + driver_ssl_cert_path = Certificate path + +Configuring CHAP +~~~~~~~~~~~~~~~~ + +The XtremIO Block Storage driver supports CHAP initiator authentication and +discovery. + +If CHAP initiator authentication is required, set the CHAP +Authentication mode to initiator. + +To set the CHAP initiator mode using CLI, run the following XMCLI command: + +.. code-block:: console + + $ modify-chap chap-authentication-mode=initiator + +If CHAP initiator discovery is required, set the CHAP discovery mode to +initiator. + +To set the CHAP initiator discovery mode using CLI, run the following XMCLI +command: + +.. code-block:: console + + $ modify-chap chap-discovery-mode=initiator + +The CHAP initiator modes can also be set via the XMS GUI. + +Refer to XtremIO User Guide for details on CHAP configuration via GUI and CLI. + +The CHAP initiator authentication and discovery credentials (username and +password) are generated automatically by the Block Storage driver. Therefore, +there is no need to configure the initial CHAP credentials manually in XMS. + +.. 
_emc_extremio_configuration_example: + +Configuration example +~~~~~~~~~~~~~~~~~~~~~ + +You can update the ``cinder.conf`` file by editing the necessary parameters as +follows: + +.. code-block:: ini + + [Default] + enabled_backends = XtremIO + + [XtremIO] + volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver + san_ip = XMS_IP + xtremio_cluster_name = Cluster01 + san_login = XMS_USER + san_password = XMS_PASSWD + volume_backend_name = XtremIOAFA diff --git a/doc/source/config-reference/block-storage/drivers/falconstor-fss-driver.rst b/doc/source/config-reference/block-storage/drivers/falconstor-fss-driver.rst new file mode 100644 index 00000000000..92aa9b524a7 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/falconstor-fss-driver.rst @@ -0,0 +1,117 @@ +======================================================= +FalconStor FSS Storage Fibre Channel and iSCSI drivers +======================================================= + +The ``FSSISCSIDriver`` and ``FSSFCDriver`` drivers run volume operations +by communicating with the FalconStor FSS storage system over HTTP. + +System requirements +~~~~~~~~~~~~~~~~~~~ + +To use the FalconStor FSS drivers, the following are required: + +- FalconStor FSS storage with: + + - iSCSI or FC host interfaces + + - FSS-8.00-8865 or later + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +The FalconStor volume driver provides the following Cinder +volume operations: + +* Create, delete, attach, and detach volumes. + +* Create, list, and delete volume snapshots. + +* Create a volume from a snapshot. + +* Clone a volume. + +* Extend a volume. + +* Get volume statistics. + +* Create and delete consistency group. + +* Create and delete consistency group snapshots. + +* Modify consistency groups. + +* Manage and unmanage a volume. + +iSCSI configuration +~~~~~~~~~~~~~~~~~~~ + +Use the following instructions to update the configuration file for iSCSI: + +.. code-block:: ini + + default_volume_type = FSS + enabled_backends = FSS + + [FSS] + + # IP address of FSS server + san_ip = 172.23.0.1 + # FSS server user name + san_login = Admin + # FSS server password + san_password = secret + # FSS server storage pool id list + fss_pools=P:2,O:3 + # Name to give this storage back-end + volume_backend_name = FSSISCSIDriver + # The iSCSI driver to load + volume_driver = cinder.volume.drivers.falconstor.iscsi.FSSISCSIDriver + + + # ==Optional settings== + + # Enable FSS log message + fss_debug = true + # Enable FSS thin provision + san_thin_provision=true + +Fibre Channel configuration +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Use the following instructions to update the configuration file for fibre +channel: + +.. code-block:: ini + + default_volume_type = FSSFC + enabled_backends = FSSFC + + [FSSFC] + # IP address of FSS server + san_ip = 172.23.0.2 + # FSS server user name + san_login = Admin + # FSS server password + san_password = secret + # FSS server storage pool id list + fss_pools=A:1 + # Name to give this storage back-end + volume_backend_name = FSSFCDriver + # The FC driver to load + volume_driver = cinder.volume.drivers.falconstor.fc.FSSFCDriver + + + # ==Optional settings== + + # Enable FSS log message + fss_debug = true + # Enable FSS thin provision + san_thin_provision=true + +Driver options +~~~~~~~~~~~~~~ + +The following table contains the configuration options specific to the +FalconStor FSS storage volume driver. + +.. 
include:: ../../tables/cinder-falconstor.rst diff --git a/doc/source/config-reference/block-storage/drivers/fujitsu-eternus-dx-driver.rst b/doc/source/config-reference/block-storage/drivers/fujitsu-eternus-dx-driver.rst new file mode 100644 index 00000000000..db1e6a46839 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/fujitsu-eternus-dx-driver.rst @@ -0,0 +1,225 @@ +========================= +Fujitsu ETERNUS DX driver +========================= + +Fujitsu ETERNUS DX driver provides FC and iSCSI support for +ETERNUS DX S3 series. + +The driver performs volume operations by communicating with +ETERNUS DX. It uses a CIM client in Python called PyWBEM +to perform CIM operations over HTTP. + +You can specify RAID Group and Thin Provisioning Pool (TPP) +in ETERNUS DX as a storage pool. + +System requirements +~~~~~~~~~~~~~~~~~~~ + +Supported storages: + +* ETERNUS DX60 S3 +* ETERNUS DX100 S3/DX200 S3 +* ETERNUS DX500 S3/DX600 S3 +* ETERNUS DX8700 S3/DX8900 S3 +* ETERNUS DX200F + +Requirements: + +* Firmware version V10L30 or later is required. +* The multipath environment with ETERNUS Multipath Driver is unsupported. +* An Advanced Copy Feature license is required + to create a snapshot and a clone. + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +* Create, delete, attach, and detach volumes. +* Create, list, and delete volume snapshots. +* Create a volume from a snapshot. +* Copy an image to a volume. +* Copy a volume to an image. +* Clone a volume. +* Extend a volume. (\*1) +* Get volume statistics. + +(\*1): It is executable only when you use TPP as a storage pool. + +Preparation +~~~~~~~~~~~ + +Package installation +-------------------- + +Install the ``python-pywbem`` package for your distribution. + +ETERNUS DX setup +---------------- + +Perform the following steps using ETERNUS Web GUI or ETERNUS CLI. + +.. note:: + * These following operations require an account that has the ``Admin`` role. + * For detailed operations, refer to ETERNUS Web GUI User's Guide or + ETERNUS CLI User's Guide for ETERNUS DX S3 series. + +#. Create an account for communication with cinder controller. + +#. Enable the SMI-S of ETERNUS DX. + +#. Register an Advanced Copy Feature license and configure copy table size. + +#. Create a storage pool for volumes. + +#. (Optional) If you want to create snapshots + on a different storage pool for volumes, + create a storage pool for snapshots. + +#. Create Snap Data Pool Volume (SDPV) to enable Snap Data Pool (SDP) for + ``create a snapshot``. + +#. Configure storage ports used for OpenStack. + + - Set those storage ports to CA mode. + - Enable the host-affinity settings of those storage ports. + + (ETERNUS CLI command for enabling host-affinity settings): + + .. code-block:: console + + CLI> set fc-parameters -host-affinity enable -port + CLI> set iscsi-parameters -host-affinity enable -port + +#. Ensure LAN connection between cinder controller and MNT port of ETERNUS DX + and SAN connection between Compute nodes and CA ports of ETERNUS DX. + +Configuration +~~~~~~~~~~~~~ + +#. Add the following entries to ``/etc/cinder/cinder.conf``: + + FC entries: + + .. code-block:: ini + + volume_driver = cinder.volume.drivers.fujitsu.eternus_dx_fc.FJDXFCDriver + cinder_eternus_config_file = /etc/cinder/eternus_dx.xml + + iSCSI entries: + + .. 
code-block:: ini + + volume_driver = cinder.volume.drivers.fujitsu.eternus_dx_iscsi.FJDXISCSIDriver + cinder_eternus_config_file = /etc/cinder/eternus_dx.xml + + If there is no description about ``cinder_eternus_config_file``, + then the parameter is set to default value + ``/etc/cinder/cinder_fujitsu_eternus_dx.xml``. + +#. Create a driver configuration file. + + Create a driver configuration file in the file path specified + as ``cinder_eternus_config_file`` in ``cinder.conf``, + and add parameters to the file as below: + + FC configuration: + + .. code-block:: xml + + + + 0.0.0.0 + 5988 + smisuser + smispassword + raid5_0001 + raid5_0001 + + + iSCSI configuration: + + .. code-block:: xml + + + + 0.0.0.0 + 5988 + smisuser + smispassword + raid5_0001 + raid5_0001 + 1.1.1.1 + 1.1.1.2 + 1.1.1.3 + 1.1.1.4 + + + Where: + + ``EternusIP`` + IP address for the SMI-S connection of the ETRENUS DX. + + Enter the IP address of MNT port of the ETERNUS DX. + + ``EternusPort`` + Port number for the SMI-S connection port of the ETERNUS DX. + + ``EternusUser`` + User name for the SMI-S connection of the ETERNUS DX. + + ``EternusPassword`` + Password for the SMI-S connection of the ETERNUS DX. + + ``EternusPool`` + Storage pool name for volumes. + + Enter RAID Group name or TPP name in the ETERNUS DX. + + ``EternusSnapPool`` + Storage pool name for snapshots. + + Enter RAID Group name in the ETERNUS DX. + + ``EternusISCSIIP`` (Multiple setting allowed) + iSCSI connection IP address of the ETERNUS DX. + + .. note:: + + * For ``EternusSnapPool``, you can specify only RAID Group name + and cannot specify TPP name. + * You can specify the same RAID Group name for ``EternusPool`` and ``EternusSnapPool`` + if you create volumes and snapshots on a same storage pool. + +Configuration example +~~~~~~~~~~~~~~~~~~~~~ + +#. Edit ``cinder.conf``: + + .. code-block:: ini + + [DEFAULT] + enabled_backends = DXFC, DXISCSI + + [DXFC] + volume_driver = cinder.volume.drivers.fujitsu.eternus_dx_fc.FJDXFCDriver + cinder_eternus_config_file = /etc/cinder/fc.xml + volume_backend_name = FC + + [DXISCSI] + volume_driver = cinder.volume.drivers.fujitsu.eternus_dx_iscsi.FJDXISCSIDriver + cinder_eternus_config_file = /etc/cinder/iscsi.xml + volume_backend_name = ISCSI + +#. Create the driver configuration files ``fc.xml`` and ``iscsi.xml``. + +#. Create a volume type and set extra specs to the type: + + .. code-block:: console + + $ openstack volume type create DX_FC + $ openstack volume type set --property volume_backend_name=FC DX_FX + $ openstack volume type create DX_ISCSI + $ openstack volume type set --property volume_backend_name=ISCSI DX_ISCSI + + By issuing these commands, + the volume type ``DX_FC`` is associated with the ``FC``, + and the type ``DX_ISCSI`` is associated with the ``ISCSI``. diff --git a/doc/source/config-reference/block-storage/drivers/hds-hnas-driver.rst b/doc/source/config-reference/block-storage/drivers/hds-hnas-driver.rst new file mode 100644 index 00000000000..fca6ff30a25 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/hds-hnas-driver.rst @@ -0,0 +1,548 @@ +========================================== +Hitachi NAS Platform NFS driver +========================================== + +This OpenStack Block Storage volume drivers provides NFS support +for `Hitachi NAS Platform (HNAS) `_ Models 3080, 3090, 4040, 4060, 4080, and 4100 +with NAS OS 12.2 or higher. 
+ +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +The NFS driver support these operations: + +* Create, delete, attach, and detach volumes. +* Create, list, and delete volume snapshots. +* Create a volume from a snapshot. +* Copy an image to a volume. +* Copy a volume to an image. +* Clone a volume. +* Extend a volume. +* Get volume statistics. +* Manage and unmanage a volume. +* Manage and unmanage snapshots (`HNAS NFS only`). +* List manageable volumes and snapshots (`HNAS NFS only`). + +HNAS storage requirements +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Before using NFS services, use the HNAS configuration and management +GUI (SMU) or SSC CLI to configure HNAS to work with the drivers. Additionally: + +1. General: + +* It is mandatory to have at least ``1 storage pool, 1 EVS and 1 file + system`` to be able to run any of the HNAS drivers. +* HNAS drivers consider the space allocated to the file systems to + provide the reports to cinder. So, when creating a file system, make sure + it has enough space to fit your needs. +* The file system used should not be created as a ``replication target`` and + should be mounted. +* It is possible to configure HNAS drivers to use distinct EVSs and file + systems, but ``all compute nodes and controllers`` in the cloud must have + access to the EVSs. + +2. For NFS: + +* Create NFS exports, choose a path for them (it must be different from + ``/``) and set the :guilabel: `Show snapshots` option to ``hide and + disable access``. +* For each export used, set the option ``norootsquash`` in the share + ``Access configuration`` so Block Storage services can change the + permissions of its volumes. For example, ``"* (rw, norootsquash)"``. +* Make sure that all computes and controllers have R/W access to the + shares used by cinder HNAS driver. +* In order to use the hardware accelerated features of HNAS NFS, we + recommend setting ``max-nfs-version`` to 3. Refer to Hitachi NAS Platform + command line reference to see how to configure this option. + +Block Storage host requirements +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The HNAS drivers are supported for Red Hat Enterprise Linux OpenStack +Platform, SUSE OpenStack Cloud, and Ubuntu OpenStack. +The following packages must be installed in all compute, controller and +storage (if any) nodes: + +* ``nfs-utils`` for Red Hat Enterprise Linux OpenStack Platform +* ``nfs-client`` for SUSE OpenStack Cloud +* ``nfs-common``, ``libc6-i386`` for Ubuntu OpenStack + +Package installation +-------------------- + +If you are installing the driver from an RPM or DEB package, +follow the steps below: + +#. Install the dependencies: + + In Red Hat: + + .. code-block:: console + + # yum install nfs-utils nfs-utils-lib + + Or in Ubuntu: + + .. code-block:: console + + # apt-get install nfs-common + + Or in SUSE: + + .. code-block:: console + + # zypper install nfs-client + + If you are using Ubuntu 12.04, you also need to install ``libc6-i386`` + + .. code-block:: console + + # apt-get install libc6-i386 + +#. Configure the driver as described in the :ref:`hnas-driver-configuration` + section. + +#. Restart all Block Storage services (volume, scheduler, and backup). + +.. _hnas-driver-configuration: + +Driver configuration +~~~~~~~~~~~~~~~~~~~~ + +HNAS supports a variety of storage options and file system capabilities, +which are selected through the definition of volume types combined with the +use of multiple back ends and multiple services. 
Each back end can configure +up to ``4 service pools``, which can be mapped to cinder volume types. + +The configuration for the driver is read from the back-end sections of the +``cinder.conf``. Each back-end section must have the appropriate configurations +to communicate with your HNAS back end, such as the IP address of the HNAS EVS +that is hosting your data, HNAS SSH access credentials, the configuration of +each of the services in that back end, and so on. You can find examples of such +configurations in the :ref:`configuration_example` section. + +.. note:: + HNAS cinder drivers still support the XML configuration the + same way it was in the older versions, but we recommend configuring the + HNAS cinder drivers only through the ``cinder.conf`` file, + since the XML configuration file from previous versions is being + deprecated as of Newton Release. + +.. note:: + We do not recommend the use of the same NFS export for different back ends. + If possible, configure each back end to + use a different NFS export/file system. + +The following is the definition of each configuration option that can be used +in a HNAS back-end section in the ``cinder.conf`` file: + +.. list-table:: **Configuration options in cinder.conf** + :header-rows: 1 + :widths: 25, 10, 15, 50 + + * - Option + - Type + - Default + - Description + * - ``volume_backend_name`` + - Optional + - N/A + - A name that identifies the back end and can be used as an extra-spec to + redirect the volumes to the referenced back end. + * - ``volume_driver`` + - Required + - N/A + - The python module path to the HNAS volume driver python class. When + installing through the rpm or deb packages, you should configure this + to `cinder.volume.drivers.hitachi.hnas_nfs.HNASNFSDriver`. + * - ``nfs_shares_config`` + - Required (only for NFS) + - /etc/cinder/nfs_shares + - Path to the ``nfs_shares`` file. This is required by the base cinder + generic NFS driver and therefore also required by the HNAS NFS driver. + This file should list, one per line, every NFS share being used by the + back end. For example, all the values found in the configuration keys + hnas_svcX_hdp in the HNAS NFS back-end sections. + * - ``hnas_mgmt_ip0`` + - Required + - N/A + - HNAS management IP address. Should be the IP address of the `Admin` + EVS. It is also the IP through which you access the web SMU + administration frontend of HNAS. + * - ``hnas_username`` + - Required + - N/A + - HNAS SSH username + * - ``hds_hnas_nfs_config_file`` + - Optional (deprecated) + - /opt/hds/hnas/cinder_nfs_conf.xml + - Path to the deprecated XML configuration file (only required if using + the XML file) + * - ``hnas_cluster_admin_ip0`` + - Optional (required only for HNAS multi-farm setups) + - N/A + - The IP of the HNAS farm admin. If your SMU controls more than one + system or cluster, this option must be set with the IP of the desired + node. This is different for HNAS multi-cluster setups, which + does not require this option to be set. + * - ``hnas_ssh_private_key`` + - Optional + - N/A + - Path to the SSH private key used to authenticate to the HNAS SMU. Only + required if you do not want to set `hnas_password`. + * - ``hnas_ssh_port`` + - Optional + - 22 + - Port on which HNAS is listening for SSH connections + * - ``hnas_password`` + - Required (unless hnas_ssh_private_key is provided) + - N/A + - HNAS password + * - ``hnas_svcX_hdp`` [1]_ + - Required (at least 1) + - N/A + - HDP (export) where the volumes will be created. Use + exports paths to configure this. 
+ * - ``hnas_svcX_pool_name`` + - Required + - N/A + - A `unique string` that is used to refer to this pool within the + context of cinder. You can tell cinder to put volumes of a specific + volume type into this back end, within this pool. See, + ``Service Labels`` and :ref:`configuration_example` sections + for more details. + +.. [1] + Replace X with a number from 0 to 3 (keep the sequence when configuring + the driver) + +Service labels +~~~~~~~~~~~~~~ + +HNAS driver supports differentiated types of service using the service labels. +It is possible to create up to 4 types of them for each back end. (For example +gold, platinum, silver, ssd, and so on). + +After creating the services in the ``cinder.conf`` configuration file, you +need to configure one cinder ``volume_type`` per service. Each ``volume_type`` +must have the metadata service_label with the same name configured in the +``hnas_svcX_pool_name option`` of that service. See the +:ref:`configuration_example` section for more details. If the ``volume_type`` +is not set, the cinder service pool with largest available free space or +other criteria configured in scheduler filters. + +.. code-block:: console + + $ openstack volume type create default + $ openstack volume type set --property service_label=default default + $ openstack volume type create platinum-tier + $ openstack volume type set --property service_label=platinum platinum + +Multi-backend configuration +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +You can deploy multiple OpenStack HNAS Driver instances (back ends) that each +controls a separate HNAS or a single HNAS. If you use multiple cinder +back ends, remember that each cinder back end can host up to 4 services. Each +back-end section must have the appropriate configurations to communicate with +your HNAS back end, such as the IP address of the HNAS EVS that is hosting +your data, HNAS SSH access credentials, the configuration of each of the +services in that back end, and so on. You can find examples of such +configurations in the :ref:`configuration_example` section. + +If you want the volumes from a volume_type to be casted into a specific +back end, you must configure an extra_spec in the ``volume_type`` with the +value of the ``volume_backend_name`` option from that back end. + +For multiple NFS back ends configuration, each back end should have a +separated ``nfs_shares_config`` and also a separated ``nfs_shares file`` +defined (For example, ``nfs_shares1``, ``nfs_shares2``) with the desired +shares listed in separated lines. + +SSH configuration +~~~~~~~~~~~~~~~~~ + +.. note:: + As of the Newton OpenStack release, the user can no longer run the + driver using a locally installed instance of the :command:`SSC` utility + package. Instead, all communications with the HNAS back end are handled + through :command:`SSH`. + +You can use your username and password to authenticate the Block Storage node +to the HNAS back end. In order to do that, simply configure ``hnas_username`` +and ``hnas_password`` in your back end section within the ``cinder.conf`` +file. + +For example: + +.. code-block:: ini + + [hnas-backend] + # ... + hnas_username = supervisor + hnas_password = supervisor + +Alternatively, the HNAS cinder driver also supports SSH authentication +through public key. To configure that: + +#. If you do not have a pair of public keys already generated, create it in + the Block Storage node (leave the pass-phrase empty): + + .. 
code-block:: console + + $ mkdir -p /opt/hitachi/ssh + $ ssh-keygen -f /opt/hds/ssh/hnaskey + +#. Change the owner of the key to cinder (or the user the volume service will + be run as): + + .. code-block:: console + + # chown -R cinder.cinder /opt/hitachi/ssh + +#. Create the directory ``ssh_keys`` in the SMU server: + + .. code-block:: console + + $ ssh [manager|supervisor]@ 'mkdir -p /var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/' + +#. Copy the public key to the ``ssh_keys`` directory: + + .. code-block:: console + + $ scp /opt/hitachi/ssh/hnaskey.pub [manager|supervisor]@:/var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/ + +#. Access the SMU server: + + .. code-block:: console + + $ ssh [manager|supervisor]@ + +#. Run the command to register the SSH keys: + + .. code-block:: console + + $ ssh-register-public-key -u [manager|supervisor] -f ssh_keys/hnaskey.pub + +#. Check the communication with HNAS in the Block Storage node: + + For multi-farm HNAS: + + .. code-block:: console + + $ ssh -i /opt/hitachi/ssh/hnaskey [manager|supervisor]@ 'ssc df -a' + + Or, for Single-node/Multi-Cluster: + + .. code-block:: console + + $ ssh -i /opt/hitachi/ssh/hnaskey [manager|supervisor]@ 'ssc localhost df -a' + +#. Configure your backend section in ``cinder.conf`` to use your public key: + + .. code-block:: ini + + [hnas-backend] + # ... + hnas_ssh_private_key = /opt/hitachi/ssh/hnaskey + +Managing volumes +~~~~~~~~~~~~~~~~ + +If there are some existing volumes on HNAS that you want to import to cinder, +it is possible to use the manage volume feature to do this. The manage action +on an existing volume is very similar to a volume creation. It creates a +volume entry on cinder database, but instead of creating a new volume in the +back end, it only adds a link to an existing volume. + +.. note:: + It is an admin only feature and you have to be logged as an user + with admin rights to be able to use this. + +#. Under the :menuselection:`System > Volumes` tab, + choose the option :guilabel:`Manage Volume`. + +#. Fill the fields :guilabel:`Identifier`, :guilabel:`Host`, + :guilabel:`Volume Name`, and :guilabel:`Volume Type` with volume + information to be managed: + + * :guilabel:`Identifier`: ip:/type/volume_name (*For example:* + 172.24.44.34:/silver/volume-test) + * :guilabel:`Host`: `host@backend-name#pool_name` (*For example:* + `ubuntu@hnas-nfs#test_silver`) + * :guilabel:`Volume Name`: volume_name (*For example:* volume-test) + * :guilabel:`Volume Type`: choose a type of volume (*For example:* silver) + +By CLI: + +.. code-block:: console + + $ cinder manage [--id-type ][--name ][--description ] + [--volume-type ][--availability-zone ] + [--metadata [ [ ...]]][--bootable] + +Example: + +.. code-block:: console + + $ cinder manage --name volume-test --volume-type silver + ubuntu@hnas-nfs#test_silver 172.24.44.34:/silver/volume-test + +Managing snapshots +~~~~~~~~~~~~~~~~~~ + +The manage snapshots feature works very similarly to the manage volumes +feature, currently supported on HNAS cinder drivers. So, if you have a volume +already managed by cinder which has snapshots that are not managed by cinder, +it is possible to use manage snapshots to import these snapshots and link them +with their original volume. + +.. note:: + For HNAS NFS cinder driver, the snapshots of volumes are clones of volumes + that were created using :command:`file-clone-create`, not the HNAS + :command:`snapshot-\*` feature. Check the HNAS users + documentation to have details about those 2 features. 
+ +Currently, the manage snapshots function does not support importing snapshots +(generally created by storage's :command:`file-clone` operation) +``without parent volumes`` or when the parent volume is ``in-use``. In this +case, the ``manage volumes`` should be used to import the snapshot as a normal +cinder volume. + +Also, it is an admin only feature and you have to be logged as a user with +admin rights to be able to use this. + +.. note:: + Although there is a verification to prevent importing snapshots using + non-related volumes as parents, it is possible to manage a snapshot using + any related cloned volume. So, when managing a snapshot, it is extremely + important to make sure that you are using the correct parent volume. + +.. code-block:: console + + $ cinder snapshot-manage + +* :guilabel:`Identifier`: evs_ip:/export_name/snapshot_name + (*For example:* 172.24.44.34:/export1/snapshot-test) + +* :guilabel:`Volume`: Parent volume ID (*For example:* + 061028c0-60cf-499f-99e2-2cd6afea081f) + +Example: + +.. code-block:: console + + $ cinder snapshot-manage 061028c0-60cf-499f-99e2-2cd6afea081f 172.24.44.34:/export1/snapshot-test + +.. note:: + This feature is currently available only for HNAS NFS Driver. + +.. _configuration_example: + +Configuration example +~~~~~~~~~~~~~~~~~~~~~ + +Below are configuration examples for NFS backend: + +#. HNAS NFS Driver + + #. For HNAS NFS driver, create this section in your ``cinder.conf`` file: + + .. code-block:: ini + + [hnas-nfs] + volume_driver = cinder.volume.drivers.hitachi.hnas_nfs.HNASNFSDriver + nfs_shares_config = /home/cinder/nfs_shares + volume_backend_name = hnas_nfs_backend + hnas_username = supervisor + hnas_password = supervisor + hnas_mgmt_ip0 = 172.24.44.15 + + hnas_svc0_pool_name = nfs_gold + hnas_svc0_hdp = 172.24.49.21:/gold_export + + hnas_svc1_pool_name = nfs_platinum + hnas_svc1_hdp = 172.24.49.21:/silver_platinum + + hnas_svc2_pool_name = nfs_silver + hnas_svc2_hdp = 172.24.49.22:/silver_export + + hnas_svc3_pool_name = nfs_bronze + hnas_svc3_hdp = 172.24.49.23:/bronze_export + + #. Add it to the ``enabled_backends`` list, under the ``DEFAULT`` section + of your ``cinder.conf`` file: + + .. code-block:: ini + + [DEFAULT] + enabled_backends = hnas-nfs + + #. Add the configured exports to the ``nfs_shares`` file: + + .. code-block:: vim + + 172.24.49.21:/gold_export + 172.24.49.21:/silver_platinum + 172.24.49.22:/silver_export + 172.24.49.23:/bronze_export + + #. Register a volume type with cinder and associate it with + this backend: + + .. code-block:: console + + $ openstack volume type create hnas_nfs_gold + $ openstack volume type set --property volume_backend_name=hnas_nfs_backend \ + service_label=nfs_gold hnas_nfs_gold + $ openstack volume type create hnas_nfs_platinum + $ openstack volume type set --property volume_backend_name=hnas_nfs_backend \ + service_label=nfs_platinum hnas_nfs_platinum + $ openstack volume type create hnas_nfs_silver + $ openstack volume type set --property volume_backend_name=hnas_nfs_backend \ + service_label=nfs_silver hnas_nfs_silver + $ openstack volume type create hnas_nfs_bronze + $ openstack volume type set --property volume_backend_name=hnas_nfs_backend \ + service_label=nfs_bronze hnas_nfs_bronze + +Additional notes and limitations +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +* The ``get_volume_stats()`` function always provides the available + capacity based on the combined sum of all the HDPs that are used in + these services labels. 
+ +* After changing the configuration on the storage node, the Block Storage + driver must be restarted. + +* On Red Hat, if the system is configured to use SELinux, you need to + set ``virt_use_nfs = on`` for NFS driver work properly. + + .. code-block:: console + + # setsebool -P virt_use_nfs on + +* It is not possible to manage a volume if there is a slash (``/``) or + a colon (``:``) in the volume name. + +* File system ``auto-expansion``: Although supported, we do not recommend using + file systems with auto-expansion setting enabled because the scheduler uses + the file system capacity reported by the driver to determine if new volumes + can be created. For instance, in a setup with a file system that can expand + to 200GB but is at 100GB capacity, with 10GB free, the scheduler will not + allow a 15GB volume to be created. In this case, manual expansion would + have to be triggered by an administrator. We recommend always creating the + file system at the ``maximum capacity`` or periodically expanding the file + system manually. + +* The ``hnas_svcX_pool_name`` option must be unique for a given back end. It + is still possible to use the deprecated form ``hnas_svcX_volume_type``, but + this support will be removed in a future release. + +* SSC simultaneous connections limit: In very busy environments, if 2 or + more volume hosts are configured to use the same storage, some requests + (create, delete and so on) can have some attempts failed and re-tried ( + ``5 attempts`` by default) due to an HNAS connection limitation ( + ``max of 5`` simultaneous connections). diff --git a/doc/source/config-reference/block-storage/drivers/hitachi-storage-volume-driver.rst b/doc/source/config-reference/block-storage/drivers/hitachi-storage-volume-driver.rst new file mode 100644 index 00000000000..5116ec15f57 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/hitachi-storage-volume-driver.rst @@ -0,0 +1,169 @@ +============================= +Hitachi storage volume driver +============================= + +Hitachi storage volume driver provides iSCSI and Fibre Channel +support for Hitachi storages. + +System requirements +~~~~~~~~~~~~~~~~~~~ + +Supported storages: + +* Hitachi Virtual Storage Platform G1000 (VSP G1000) +* Hitachi Virtual Storage Platform (VSP) +* Hitachi Unified Storage VM (HUS VM) +* Hitachi Unified Storage 100 Family (HUS 100 Family) + +Required software: + +* RAID Manager Ver 01-32-03/01 or later for VSP G1000/VSP/HUS VM +* Hitachi Storage Navigator Modular 2 (HSNM2) Ver 27.50 or later + for HUS 100 Family + + .. note:: + + HSNM2 needs to be installed under ``/usr/stonavm``. + +Required licenses: + +* Hitachi In-System Replication Software for VSP G1000/VSP/HUS VM +* (Mandatory) ShadowImage in-system replication for HUS 100 Family +* (Optional) Copy-on-Write Snapshot for HUS 100 Family + +Additionally, the ``pexpect`` package is required. + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +* Create, delete, attach, and detach volumes. +* Create, list, and delete volume snapshots. +* Manage and unmanage volume snapshots. +* Create a volume from a snapshot. +* Copy a volume to an image. +* Copy an image to a volume. +* Clone a volume. +* Extend a volume. +* Get volume statistics. + +Configuration +~~~~~~~~~~~~~ + +Set up Hitachi storage +---------------------- + +You need to specify settings as described below. For details about each step, +see the user's guide of the storage device. 
Use a storage administrative +software such as ``Storage Navigator`` to set up the storage device so that +LDEVs and host groups can be created and deleted, and LDEVs can be connected +to the server and can be asynchronously copied. + +#. Create a Dynamic Provisioning pool. + +#. Connect the ports at the storage to the controller node and compute nodes. + +#. For VSP G1000/VSP/HUS VM, set ``port security`` to ``enable`` for the + ports at the storage. + +#. For HUS 100 Family, set ``Host Group security`` or + ``iSCSI target security`` to ``ON`` for the ports at the storage. + +#. For the ports at the storage, create host groups (iSCSI targets) whose + names begin with HBSD- for the controller node and each compute node. + Then register a WWN (initiator IQN) for each of the controller node and + compute nodes. + +#. For VSP G1000/VSP/HUS VM, perform the following: + + * Create a storage device account belonging to the Administrator User + Group. (To use multiple storage devices, create the same account name + for all the target storage devices, and specify the same resource + group and permissions.) + * Create a command device (In-Band), and set user authentication to ``ON``. + * Register the created command device to the host group for the controller + node. + * To use the Thin Image function, create a pool for Thin Image. + +#. For HUS 100 Family, perform the following: + + * Use the :command:`auunitaddauto` command to register the + unit name and controller of the storage device to HSNM2. + * When connecting via iSCSI, if you are using CHAP certification, specify + the same user and password as that used for the storage port. + +Set up Hitachi Gigabit Fibre Channel adaptor +-------------------------------------------- + +Change a parameter of the hfcldd driver and update the ``initram`` file +if Hitachi Gigabit Fibre Channel adaptor is used: + +.. code-block:: console + + # /opt/hitachi/drivers/hba/hfcmgr -E hfc_rport_lu_scan 1 + # dracut -f initramfs-KERNEL_VERSION.img KERNEL_VERSION + # reboot + +Set up Hitachi storage volume driver +------------------------------------ + +#. Create a directory: + + .. code-block:: console + + # mkdir /var/lock/hbsd + # chown cinder:cinder /var/lock/hbsd + +#. Create ``volume type`` and ``volume key``. + + This example shows that HUS100_SAMPLE is created as ``volume type`` + and hus100_backend is registered as ``volume key``: + + .. code-block:: console + + $ openstack volume type create HUS100_SAMPLE + $ openstack volume type set --property volume_backend_name=hus100_backend HUS100_SAMPLE + +#. Specify any identical ``volume type`` name and ``volume key``. + + To confirm the created ``volume type``, please execute the following + command: + + .. code-block:: console + + $ openstack volume type list --long + +#. Edit the ``/etc/cinder/cinder.conf`` file as follows. + + If you use Fibre Channel: + + .. code-block:: ini + + volume_driver = cinder.volume.drivers.hitachi.hbsd_fc.HBSDFCDriver + + If you use iSCSI: + + .. code-block:: ini + + volume_driver = cinder.volume.drivers.hitachi.hbsd_iscsi.HBSDISCSIDriver + + Also, set ``volume_backend_name`` created by :command:`openstack volume type set` + command: + + .. code-block:: ini + + volume_backend_name = hus100_backend + + This table shows configuration options for Hitachi storage volume driver. + + .. include:: ../../tables/cinder-hitachi-hbsd.rst + +#. Restart the Block Storage service. + + When the startup is done, "MSGID0003-I: The storage backend can be used." 
+ is output into ``/var/log/cinder/volume.log`` as follows: + + .. code-block:: console + + 2014-09-01 10:34:14.169 28734 WARNING cinder.volume.drivers.hitachi. + hbsd_common [req-a0bb70b5-7c3f-422a-a29e-6a55d6508135 None None] + MSGID0003-I: The storage backend can be used. (config_group: hus100_backend) diff --git a/doc/source/config-reference/block-storage/drivers/hp-msa-driver.rst b/doc/source/config-reference/block-storage/drivers/hp-msa-driver.rst new file mode 100644 index 00000000000..bb348d2f983 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/hp-msa-driver.rst @@ -0,0 +1,165 @@ +====================================== +HP MSA Fibre Channel and iSCSI drivers +====================================== + +The ``HPMSAFCDriver`` and ``HPMSAISCSIDriver`` Cinder drivers allow HP MSA +2040 or 1040 arrays to be used for Block Storage in OpenStack deployments. + +System requirements +~~~~~~~~~~~~~~~~~~~ + +To use the HP MSA drivers, the following are required: + +- HP MSA 2040 or 1040 array with: + + - iSCSI or FC host interfaces + - G22x firmware or later + +- Network connectivity between the OpenStack host and the array management + interfaces + +- HTTPS or HTTP must be enabled on the array + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +- Create, delete, attach, and detach volumes. +- Create, list, and delete volume snapshots. +- Create a volume from a snapshot. +- Copy an image to a volume. +- Copy a volume to an image. +- Clone a volume. +- Extend a volume. +- Migrate a volume with back-end assistance. +- Retype a volume. +- Manage and unmanage a volume. + +Configuring the array +~~~~~~~~~~~~~~~~~~~~~ + +#. Verify that the array can be managed via an HTTPS connection. HTTP can also + be used if ``hpmsa_api_protocol=http`` is placed into the appropriate + sections of the ``cinder.conf`` file. + + Confirm that virtual pools A and B are present if you plan to use virtual + pools for OpenStack storage. + + If you plan to use vdisks instead of virtual pools, create or identify one + or more vdisks to be used for OpenStack storage; typically this will mean + creating or setting aside one disk group for each of the A and B + controllers. + +#. Edit the ``cinder.conf`` file to define a storage back end entry for each + storage pool on the array that will be managed by OpenStack. Each entry + consists of a unique section name, surrounded by square brackets, followed + by options specified in a ``key=value`` format. + + * The ``hpmsa_backend_name`` value specifies the name of the storage pool + or vdisk on the array. + + * The ``volume_backend_name`` option value can be a unique value, if you + wish to be able to assign volumes to a specific storage pool on the + array, or a name that is shared among multiple storage pools to let the + volume scheduler choose where new volumes are allocated. + + * The rest of the options will be repeated for each storage pool in a given + array: the appropriate Cinder driver name; IP address or host name of the + array management interface; the username and password of an array user + account with ``manage`` privileges; and the iSCSI IP addresses for the + array if using the iSCSI transport protocol. + + In the examples below, two back ends are defined, one for pool A and one for + pool B, and a common ``volume_backend_name`` is used so that a single + volume type definition can be used to allocate volumes from both pools. + + **iSCSI example back-end entries** + + .. 
code-block:: ini + + [pool-a] + hpmsa_backend_name = A + volume_backend_name = hpmsa-array + volume_driver = cinder.volume.drivers.san.hp.hpmsa_iscsi.HPMSAISCSIDriver + san_ip = 10.1.2.3 + san_login = manage + san_password = !manage + hpmsa_iscsi_ips = 10.2.3.4,10.2.3.5 + + [pool-b] + hpmsa_backend_name = B + volume_backend_name = hpmsa-array + volume_driver = cinder.volume.drivers.san.hp.hpmsa_iscsi.HPMSAISCSIDriver + san_ip = 10.1.2.3 + san_login = manage + san_password = !manage + hpmsa_iscsi_ips = 10.2.3.4,10.2.3.5 + + **Fibre Channel example back-end entries** + + .. code-block:: ini + + [pool-a] + hpmsa_backend_name = A + volume_backend_name = hpmsa-array + volume_driver = cinder.volume.drivers.san.hp.hpmsa_fc.HPMSAFCDriver + san_ip = 10.1.2.3 + san_login = manage + san_password = !manage + + [pool-b] + hpmsa_backend_name = B + volume_backend_name = hpmsa-array + volume_driver = cinder.volume.drivers.san.hp.hpmsa_fc.HPMSAFCDriver + san_ip = 10.1.2.3 + san_login = manage + san_password = !manage + +#. If any ``volume_backend_name`` value refers to a vdisk rather than a + virtual pool, add an additional statement ``hpmsa_backend_type = linear`` + to that back end entry. + +#. If HTTPS is not enabled in the array, include ``hpmsa_api_protocol = http`` + in each of the back-end definitions. + +#. If HTTPS is enabled, you can enable certificate verification with the option + ``hpmsa_verify_certificate=True``. You may also use the + ``hpmsa_verify_certificate_path`` parameter to specify the path to a + CA\_BUNDLE file containing CAs other than those in the default list. + +#. Modify the ``[DEFAULT]`` section of the ``cinder.conf`` file to add an + ``enabled_back-ends`` parameter specifying the backend entries you added, + and a ``default_volume_type`` parameter specifying the name of a volume type + that you will create in the next step. + + **Example of [DEFAULT] section changes** + + .. code-block:: ini + + [DEFAULT] + enabled_backends = pool-a,pool-b + default_volume_type = hpmsa + + +#. Create a new volume type for each distinct ``volume_backend_name`` value + that you added in the ``cinder.conf`` file. The example below assumes that + the same ``volume_backend_name=hpmsa-array`` option was specified in all + of the entries, and specifies that the volume type ``hpmsa`` can be used to + allocate volumes from any of them. + + **Example of creating a volume type** + + .. code-block:: console + + $ openstack volume type create hpmsa + $ openstack volume type set --property volume_backend_name=hpmsa-array hpmsa + +#. After modifying the ``cinder.conf`` file, restart the ``cinder-volume`` + service. + +Driver-specific options +~~~~~~~~~~~~~~~~~~~~~~~ + +The following table contains the configuration options that are specific to +the HP MSA drivers. + +.. include:: ../../tables/cinder-hpmsa.rst diff --git a/doc/source/config-reference/block-storage/drivers/hpe-3par-driver.rst b/doc/source/config-reference/block-storage/drivers/hpe-3par-driver.rst new file mode 100644 index 00000000000..5fb875f3711 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/hpe-3par-driver.rst @@ -0,0 +1,384 @@ +======================================== +HPE 3PAR Fibre Channel and iSCSI drivers +======================================== + +The ``HPE3PARFCDriver`` and ``HPE3PARISCSIDriver`` drivers, which are based on +the Block Storage service (Cinder) plug-in architecture, run volume operations +by communicating with the HPE 3PAR storage system over HTTP, HTTPS, and SSH +connections. 
The HTTP and HTTPS communications use ``python-3parclient``, +which is part of the Python standard library. + +For information about how to manage HPE 3PAR storage systems, see the HPE 3PAR +user documentation. + +System requirements +~~~~~~~~~~~~~~~~~~~ + +To use the HPE 3PAR drivers, install the following software and components on +the HPE 3PAR storage system: + +* HPE 3PAR Operating System software version 3.1.3 MU1 or higher. + + * Deduplication provisioning requires SSD disks and HPE 3PAR Operating + System software version 3.2.1 MU1 or higher. + + * Enabling Flash Cache Policy requires the following: + + * Array must contain SSD disks. + + * HPE 3PAR Operating System software version 3.2.1 MU2 or higher. + + * python-3parclient version 4.2.0 or newer. + + * Array must have the Adaptive Flash Cache license installed. + + * Flash Cache must be enabled on the array with the CLI command + :command:`createflashcache SIZE`, where size must be in 16 GB increments. + For example, :command:`createflashcache 128g` will create 128 GB of Flash + Cache for each node pair in the array. + + * The Dynamic Optimization license is required to support any feature that + results in a volume changing provisioning type or CPG. This may apply to + the volume :command:`migrate`, :command:`retype` and :command:`manage` + commands. + + * The Virtual Copy License is required to support any feature that involves + volume snapshots. This applies to the volume :command:`snapshot-*` + commands. + +* HPE 3PAR drivers will now check the licenses installed on the array and + disable driver capabilities based on available licenses. This will apply to + thin provisioning, QoS support and volume replication. + +* HPE 3PAR Web Services API Server must be enabled and running. + +* One Common Provisioning Group (CPG). + +* Additionally, you must install the ``python-3parclient`` version 4.2.0 or + newer from the Python standard library on the system with the enabled Block + Storage service volume drivers. + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +* Create, delete, attach, and detach volumes. + +* Create, list, and delete volume snapshots. + +* Create a volume from a snapshot. + +* Copy an image to a volume. + +* Copy a volume to an image. + +* Clone a volume. + +* Extend a volume. + +* Migrate a volume with back-end assistance. + +* Retype a volume. + +* Manage and unmanage a volume. + +* Manage and unmanage a snapshot. + +* Replicate host volumes. + +* Fail-over host volumes. + +* Fail-back host volumes. + +* Create, delete, update, snapshot, and clone consistency groups. + +* Create and delete consistency group snapshots. + +* Create a consistency group from a consistency group snapshot or another + group. + +Volume type support for both HPE 3PAR drivers includes the ability to set the +following capabilities in the OpenStack Block Storage API +``cinder.api.contrib.types_extra_specs`` volume type extra specs extension +module: + +* ``hpe3par:snap_cpg`` + +* ``hpe3par:provisioning`` + +* ``hpe3par:persona`` + +* ``hpe3par:vvs`` + +* ``hpe3par:flash_cache`` + +To work with the default filter scheduler, the key values are case sensitive +and scoped with ``hpe3par:``. For information about how to set the key-value +pairs and associate them with a volume type, run the following command: + +.. code-block:: console + + $ openstack help volume type + +.. note:: + + Volumes that are cloned only support the extra specs keys cpg, snap_cpg, + provisioning and vvs. The others are ignored. 
In addition the comments + section of the cloned volume in the HPE 3PAR StoreServ storage array is + not populated. + +If volume types are not used or a particular key is not set for a volume type, +the following defaults are used: + +* ``hpe3par:cpg`` - Defaults to the ``hpe3par_cpg`` setting in the + ``cinder.conf`` file. + +* ``hpe3par:snap_cpg`` - Defaults to the ``hpe3par_snap`` setting in + the ``cinder.conf`` file. If ``hpe3par_snap`` is not set, it defaults + to the ``hpe3par_cpg`` setting. + +* ``hpe3par:provisioning`` - Defaults to ``thin`` provisioning, the valid + values are ``thin``, ``full``, and ``dedup``. + +* ``hpe3par:persona`` - Defaults to the ``2 - Generic-ALUA`` persona. The + valid values are: + + * ``1 - Generic`` + * ``2 - Generic-ALUA`` + * ``3 - Generic-legacy`` + * ``4 - HPUX-legacy`` + * ``5 - AIX-legacy`` + * ``6 - EGENERA`` + * ``7 - ONTAP-legacy`` + * ``8 - VMware`` + * ``9 - OpenVMS`` + * ``10 - HPUX`` + * ``11 - WindowsServer`` + +* ``hpe3par:flash_cache`` - Defaults to ``false``, the valid values are + ``true`` and ``false``. + +QoS support for both HPE 3PAR drivers includes the ability to set the +following capabilities in the OpenStack Block Storage API +``cinder.api.contrib.qos_specs_manage`` qos specs extension module: + +* ``minBWS`` + +* ``maxBWS`` + +* ``minIOPS`` + +* ``maxIOPS`` + +* ``latency`` + +* ``priority`` + +The qos keys above no longer require to be scoped but must be created and +associated to a volume type. For information about how to set the key-value +pairs and associate them with a volume type, run the following commands: + +.. code-block:: console + + $ openstack help volume qos + +The following keys require that the HPE 3PAR StoreServ storage array has a +Priority Optimization license installed. + +``hpe3par:vvs`` + The virtual volume set name that has been predefined by the Administrator + with quality of service (QoS) rules associated to it. If you specify + extra_specs ``hpe3par:vvs``, the qos_specs ``minIOPS``, ``maxIOPS``, + ``minBWS``, and ``maxBWS`` settings are ignored. + +``minBWS`` + The QoS I/O issue bandwidth minimum goal in MBs. If not set, the I/O issue + bandwidth rate has no minimum goal. + +``maxBWS`` + The QoS I/O issue bandwidth rate limit in MBs. If not set, the I/O issue + bandwidth rate has no limit. + +``minIOPS`` + The QoS I/O issue count minimum goal. If not set, the I/O issue count has no + minimum goal. + +``maxIOPS`` + The QoS I/O issue count rate limit. If not set, the I/O issue count rate has + no limit. + +``latency`` + The latency goal in milliseconds. + +``priority`` + The priority of the QoS rule over other rules. If not set, the priority is + ``normal``, valid values are ``low``, ``normal`` and ``high``. + +.. note:: + + Since the Icehouse release, minIOPS and maxIOPS must be used together to + set I/O limits. Similarly, minBWS and maxBWS must be used together. If only + one is set the other will be set to the same value. + +The following key requires that the HPE 3PAR StoreServ storage array has an +Adaptive Flash Cache license installed. + +* ``hpe3par:flash_cache`` - The flash-cache policy, which can be turned on and + off by setting the value to ``true`` or ``false``. + +LDAP and AD authentication is now supported in the HPE 3PAR driver. + +The 3PAR back end must be properly configured for LDAP and AD authentication +prior to configuring the volume driver. For details on setting up LDAP with +3PAR, see the 3PAR user guide. 
+ +Once configured, ``hpe3par_username`` and ``hpe3par_password`` parameters in +``cinder.conf`` can be used with LDAP and AD credentials. + +Enable the HPE 3PAR Fibre Channel and iSCSI drivers +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The ``HPE3PARFCDriver`` and ``HPE3PARISCSIDriver`` are installed with the +OpenStack software. + +#. Install the ``python-3parclient`` Python package on the OpenStack Block + Storage system. + + .. code-block:: console + + $ pip install 'python-3parclient>=4.0,<5.0' + + +#. Verify that the HPE 3PAR Web Services API server is enabled and running on + the HPE 3PAR storage system. + + a. Log onto the HP 3PAR storage system with administrator access. + + .. code-block:: console + + $ ssh 3paradm@ + + b. View the current state of the Web Services API Server. + + .. code-block:: console + + $ showwsapi + -Service- -State- -HTTP_State- HTTP_Port -HTTPS_State- HTTPS_Port -Version- + Enabled Active Enabled 8008 Enabled 8080 1.1 + + c. If the Web Services API Server is disabled, start it. + + .. code-block:: console + + $ startwsapi + +#. If the HTTP or HTTPS state is disabled, enable one of them. + + .. code-block:: console + + $ setwsapi -http enable + + or + + .. code-block:: console + + $ setwsapi -https enable + + .. note:: + + To stop the Web Services API Server, use the :command:`stopwsapi` command. For + other options run the :command:`setwsapi –h` command. + +#. If you are not using an existing CPG, create a CPG on the HPE 3PAR storage + system to be used as the default location for creating volumes. + +#. Make the following changes in the ``/etc/cinder/cinder.conf`` file. + + .. code-block:: ini + + # 3PAR WS API Server URL + hpe3par_api_url=https://10.10.0.141:8080/api/v1 + + # 3PAR username with the 'edit' role + hpe3par_username=edit3par + + # 3PAR password for the user specified in hpe3par_username + hpe3par_password=3parpass + + # 3PAR CPG to use for volume creation + hpe3par_cpg=OpenStackCPG_RAID5_NL + + # IP address of SAN controller for SSH access to the array + san_ip=10.10.22.241 + + # Username for SAN controller for SSH access to the array + san_login=3paradm + + # Password for SAN controller for SSH access to the array + san_password=3parpass + + # FIBRE CHANNEL(uncomment the next line to enable the FC driver) + # volume_driver=cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver + + # iSCSI (uncomment the next line to enable the iSCSI driver and + # hpe3par_iscsi_ips or iscsi_ip_address) + #volume_driver=cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver + + # iSCSI multiple port configuration + # hpe3par_iscsi_ips=10.10.220.253:3261,10.10.222.234 + + # Still available for single port iSCSI configuration + #iscsi_ip_address=10.10.220.253 + + + # Enable HTTP debugging to 3PAR + hpe3par_debug=False + + # Enable CHAP authentication for iSCSI connections. + hpe3par_iscsi_chap_enabled=false + + # The CPG to use for Snapshots for volumes. If empty hpe3par_cpg will be + # used. + hpe3par_snap_cpg=OpenStackSNAP_CPG + + # Time in hours to retain a snapshot. You can't delete it before this + # expires. + hpe3par_snapshot_retention=48 + + # Time in hours when a snapshot expires and is deleted. This must be + # larger than retention. + hpe3par_snapshot_expiration=72 + + # The ratio of oversubscription when thin provisioned volumes are + # involved. Default ratio is 20.0, this means that a provisioned + # capacity can be 20 times of the total physical capacity. 
+ max_over_subscription_ratio=20.0 + + # This flag represents the percentage of reserved back-end capacity. + reserved_percentage=15 + + .. note:: + + You can enable only one driver on each cinder instance unless you enable + multiple back-end support. See the Cinder multiple back-end support + instructions to enable this feature. + + .. note:: + + You can configure one or more iSCSI addresses by using the + ``hpe3par_iscsi_ips`` option. Separate multiple IP addresses with a + comma (``,``). When you configure multiple addresses, the driver selects + the iSCSI port with the fewest active volumes at attach time. The 3PAR + array does not allow the default port 3260 to be changed, so IP ports + need not be specified. + +#. Save the changes to the ``cinder.conf`` file and restart the cinder-volume + service. + +The HPE 3PAR Fibre Channel and iSCSI drivers are now enabled on your +OpenStack system. If you experience problems, review the Block Storage +service log files for errors. + +The following table contains all the configuration options supported by +the HPE 3PAR Fibre Channel and iSCSI drivers. + +.. include:: ../../tables/cinder-hpe3par.rst diff --git a/doc/source/config-reference/block-storage/drivers/hpe-lefthand-driver.rst b/doc/source/config-reference/block-storage/drivers/hpe-lefthand-driver.rst new file mode 100644 index 00000000000..0026246b1b4 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/hpe-lefthand-driver.rst @@ -0,0 +1,216 @@ +================================ +HPE LeftHand/StoreVirtual driver +================================ + +The ``HPELeftHandISCSIDriver`` is based on the Block Storage service plug-in +architecture. Volume operations are run by communicating with the HPE +LeftHand/StoreVirtual system over HTTPS, or SSH connections. HTTPS +communications use the ``python-lefthandclient``, which is part of the Python +standard library. + +The ``HPELeftHandISCSIDriver`` can be configured to run using a REST client to +communicate with the array. For performance improvements and new functionality +the ``python-lefthandclient`` must be downloaded, and HP LeftHand/StoreVirtual +Operating System software version 11.5 or higher is required on the array. To +configure the driver in standard mode, see +`HPE LeftHand/StoreVirtual REST driver`_. + +For information about how to manage HPE LeftHand/StoreVirtual storage systems, +see the HPE LeftHand/StoreVirtual user documentation. + +HPE LeftHand/StoreVirtual REST driver +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +This section describes how to configure the HPE LeftHand/StoreVirtual Block +Storage driver. + +System requirements +------------------- + +To use the HPE LeftHand/StoreVirtual driver, do the following: + +* Install LeftHand/StoreVirtual Operating System software version 11.5 or + higher on the HPE LeftHand/StoreVirtual storage system. + +* Create a cluster group. + +* Install the ``python-lefthandclient`` version 2.1.0 from the Python Package + Index on the system with the enabled Block Storage service + volume drivers. + +Supported operations +-------------------- + +* Create, delete, attach, and detach volumes. + +* Create, list, and delete volume snapshots. + +* Create a volume from a snapshot. + +* Copy an image to a volume. + +* Copy a volume to an image. + +* Clone a volume. + +* Extend a volume. + +* Get volume statistics. + +* Migrate a volume with back-end assistance. + +* Retype a volume. + +* Manage and unmanage a volume. + +* Manage and unmanage a snapshot. + +* Replicate host volumes. 
+ +* Fail-over host volumes. + +* Fail-back host volumes. + +* Create, delete, update, and snapshot consistency groups. + +When you use back end assisted volume migration, both source and destination +clusters must be in the same HPE LeftHand/StoreVirtual management group. +The HPE LeftHand/StoreVirtual array will use native LeftHand APIs to migrate +the volume. The volume cannot be attached or have snapshots to migrate. + +Volume type support for the driver includes the ability to set the +following capabilities in the Block Storage API +``cinder.api.contrib.types_extra_specs`` volume type extra specs +extension module. + +* ``hpelh:provisioning`` + +* ``hpelh:ao`` + +* ``hpelh:data_pl`` + +To work with the default filter scheduler, the key-value pairs are +case-sensitive and scoped with ``hpelh:``. For information about how to set +the key-value pairs and associate them with a volume type, run the following +command: + +.. code-block:: console + + $ openstack help volume type + +* The following keys require the HPE LeftHand/StoreVirtual storage + array be configured for: + + ``hpelh:ao`` + The HPE LeftHand/StoreVirtual storage array must be configured for + Adaptive Optimization. + + ``hpelh:data_pl`` + The HPE LeftHand/StoreVirtual storage array must be able to support the + Data Protection level specified by the extra spec. + +* If volume types are not used or a particular key is not set for a volume + type, the following defaults are used: + + ``hpelh:provisioning`` + Defaults to ``thin`` provisioning, the valid values are, ``thin`` and + ``full`` + + ``hpelh:ao`` + Defaults to ``true``, the valid values are, ``true`` and ``false``. + + ``hpelh:data_pl`` + Defaults to ``r-0``, Network RAID-0 (None), the valid values are, + + * ``r-0``, Network RAID-0 (None) + + * ``r-5``, Network RAID-5 (Single Parity) + + * ``r-10-2``, Network RAID-10 (2-Way Mirror) + + * ``r-10-3``, Network RAID-10 (3-Way Mirror) + + * ``r-10-4``, Network RAID-10 (4-Way Mirror) + + * ``r-6``, Network RAID-6 (Dual Parity) + +Enable the HPE LeftHand/StoreVirtual iSCSI driver +------------------------------------------------- + +The ``HPELeftHandISCSIDriver`` is installed with the OpenStack software. + +#. Install the ``python-lefthandclient`` Python package on the OpenStack Block + Storage system. + + .. code-block:: console + + $ pip install 'python-lefthandclient>=2.1,<3.0' + +#. If you are not using an existing cluster, create a cluster on the HPE + LeftHand storage system to be used as the cluster for creating volumes. + +#. Make the following changes in the ``/etc/cinder/cinder.conf`` file: + + .. code-block:: ini + + # LeftHand WS API Server URL + hpelefthand_api_url=https://10.10.0.141:8081/lhos + + # LeftHand Super user username + hpelefthand_username=lhuser + + # LeftHand Super user password + hpelefthand_password=lhpass + + # LeftHand cluster to use for volume creation + hpelefthand_clustername=ClusterLefthand + + # LeftHand iSCSI driver + volume_driver=cinder.volume.drivers.hpe.hpe_lefthand_iscsi.HPELeftHandISCSIDriver + + # Should CHAPS authentication be used (default=false) + hpelefthand_iscsi_chap_enabled=false + + # Enable HTTP debugging to LeftHand (default=false) + hpelefthand_debug=false + + # The ratio of oversubscription when thin provisioned volumes are + # involved. Default ratio is 20.0, this means that a provisioned capacity + # can be 20 times of the total physical capacity. + max_over_subscription_ratio=20.0 + + # This flag represents the percentage of reserved back-end capacity. 
+   reserved_percentage=15
+
+   You can enable only one driver on each cinder instance unless you enable
+   multiple back-end support. See the Cinder multiple back-end support
+   instructions to enable this feature.
+
+   If ``hpelefthand_iscsi_chap_enabled`` is set to ``true``, the driver
+   will associate randomly-generated CHAP secrets with all hosts on the HPE
+   LeftHand/StoreVirtual system. OpenStack Compute nodes use these secrets
+   when creating iSCSI connections.
+
+   .. important::
+
+      CHAP secrets are passed from OpenStack Block Storage to Compute in clear
+      text. This communication should be secured to ensure that CHAP secrets
+      are not discovered.
+
+   .. note::
+
+      CHAP secrets are added to existing hosts as well as newly-created ones.
+      If the CHAP option is enabled, hosts will not be able to access the
+      storage without the generated secrets.
+
+#. Save the changes to the ``cinder.conf`` file and restart the
+   ``cinder-volume`` service.
+
+The HPE LeftHand/StoreVirtual driver is now enabled on your OpenStack system.
+If you experience problems, review the Block Storage service log files for
+errors.
+
+.. note::
+   Previous releases included an HPE LeftHand/StoreVirtual CLIQ driver that
+   enabled Block Storage service driver configuration in legacy mode. That
+   driver was removed in the Mitaka release.
diff --git a/doc/source/config-reference/block-storage/drivers/huawei-storage-driver.rst b/doc/source/config-reference/block-storage/drivers/huawei-storage-driver.rst
new file mode 100644
index 00000000000..53dd380c672
--- /dev/null
+++ b/doc/source/config-reference/block-storage/drivers/huawei-storage-driver.rst
@@ -0,0 +1,516 @@
+====================
+Huawei volume driver
+====================
+
+The Huawei volume driver provides logical volume and snapshot functions for
+virtual machines (VMs) through the OpenStack Block Storage service, and
+supports both the iSCSI and Fibre Channel protocols.
+
+Version mappings
+~~~~~~~~~~~~~~~~
+
+The following table describes the version mappings among the Block Storage
+driver, the Huawei storage system, and OpenStack:
+
+.. 
list-table:: **Version mappings among the Block Storage driver and Huawei + storage system** + :widths: 30 35 + :header-rows: 1 + + * - Description + - Storage System Version + * - Create, delete, expand, attach, detach, manage and unmanage volumes + + Create volumes with assigned storage pools + + Create volumes with assigned disk types + + Create, delete and update a consistency group + + Copy an image to a volume + + Copy a volume to an image + + Auto Zoning + + SmartThin + + Volume Migration + + Replication V2.1 + + Create, delete, manage, unmanage and backup snapshots + + Create and delete a cgsnapshot + - OceanStor T series V2R2 C00/C20/C30 + + OceanStor V3 V3R1C10/C20 V3R2C10 V3R3C00/C10/C20 + + OceanStor 2200V3 V300R005C00 + + OceanStor 2600V3 V300R005C00 + + OceanStor 18500/18800 V1R1C00/C20/C30 V3R3C00 + + OceanStor Dorado V300R001C00 + + OceanStor V3 V300R006C00 + + OceanStor 2200V3 V300R006C00 + + OceanStor 2600V3 V300R006C00 + * - Clone a volume + + Create volume from snapshot + + Retype + + SmartQoS + + SmartTier + + SmartCache + + Thick + - OceanStor T series V2R2 C00/C20/C30 + + OceanStor V3 V3R1C10/C20 V3R2C10 V3R3C00/C10/C20 + + OceanStor 2200V3 V300R005C00 + + OceanStor 2600V3 V300R005C00 + + OceanStor 18500/18800V1R1C00/C20/C30 + + OceanStor V3 V300R006C00 + + OceanStor 2200V3 V300R006C00 + + OceanStor 2600V3 V300R006C00 + * - SmartPartition + - OceanStor T series V2R2 C00/C20/C30 + + OceanStor V3 V3R1C10/C20 V3R2C10 V3R3C00/C10/C20 + + OceanStor 2600V3 V300R005C00 + + OceanStor 18500/18800V1R1C00/C20/C30 + + OceanStor V3 V300R006C00 + + OceanStor 2600V3 V300R006C00 + * - Hypermetro + + Hypermetro consistency group + - OceanStor V3 V3R3C00/C10/C20 + + OceanStor 2600V3 V3R5C00 + + OceanStor 18500/18800 V3R3C00 + + OceanStor Dorado V300R001C00 + + OceanStor V3 V300R006C00 + + OceanStor 2600V3 V300R006C00 + +Volume driver configuration +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +This section describes how to configure the Huawei volume driver for either +iSCSI storage or Fibre Channel storage. + +**Pre-requisites** + +When creating a volume from image, install the ``multipath`` tool and add the +following configuration keys in the ``[DEFAULT]`` configuration group of +the ``/etc/cinder/cinder.conf`` file: + +.. code-block:: ini + + use_multipath_for_image_xfer = True + enforce_multipath_for_image_xfer = True + +To configure the volume driver, follow the steps below: + +#. In ``/etc/cinder``, create a Huawei-customized driver configuration file. + The file format is XML. +#. Change the name of the driver configuration file based on the site + requirements, for example, ``cinder_huawei_conf.xml``. +#. Configure parameters in the driver configuration file. + + Each product has its own value for the ``Product`` parameter under the + ``Storage`` xml block. The full xml file with the appropriate ``Product`` + parameter is as below: + + .. code-block:: xml + + + + + PRODUCT + PROTOCOL + xxxxxxxx + xxxxxxxx + https://x.x.x.x:8088/deviceManager/rest/ + + + xxx + xxx + + xxx + + + x.x.x.x + + + + + + The corresponding ``Product`` values for each product are as below: + + + * **For T series V2** + + .. code-block:: xml + + TV2 + + * **For V3** + + .. code-block:: xml + + V3 + + * **For OceanStor 18000 series** + + .. code-block:: xml + + 18000 + + * **For OceanStor Dorado series** + + .. code-block:: xml + + Dorado + + The ``Protocol`` value to be used is ``iSCSI`` for iSCSI and ``FC`` for + Fibre Channel as shown below: + + .. 
code-block:: xml

+      # For iSCSI
+      <Protocol>iSCSI</Protocol>
+
+      # For Fibre Channel
+      <Protocol>FC</Protocol>
+
+   .. note::
+
+      For details about the parameters in the configuration file, see the
+      `Configuration file parameters`_ section.
+
+#. Configure the ``cinder.conf`` file.
+
+   In the ``[DEFAULT]`` block of ``/etc/cinder/cinder.conf``,
+   enable the ``VOLUME_BACKEND``:
+
+   .. code-block:: ini
+
+      enabled_backends = VOLUME_BACKEND
+
+   Add a new block ``[VOLUME_BACKEND]``, and add the following contents:
+
+   .. code-block:: ini
+
+      [VOLUME_BACKEND]
+      volume_driver = VOLUME_DRIVER
+      cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf.xml
+      volume_backend_name = Huawei_Storage
+
+   * ``volume_driver`` indicates the loaded driver.
+
+   * ``cinder_huawei_conf_file`` indicates the specified Huawei-customized
+     configuration file.
+
+   * ``volume_backend_name`` indicates the name of the back end.
+
+   Add information about remote devices in ``/etc/cinder/cinder.conf``
+   in the target back-end block for ``Hypermetro``:
+
+   .. code-block:: ini
+
+      [VOLUME_BACKEND]
+      volume_driver = VOLUME_DRIVER
+      cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf.xml
+      volume_backend_name = Huawei_Storage
+      metro_san_user = xxx
+      metro_san_password = xxx
+      metro_domain_name = xxx
+      metro_san_address = https://x.x.x.x:8088/deviceManager/rest/
+      metro_storage_pools = xxx
+
+   Add information about remote devices in ``/etc/cinder/cinder.conf``
+   in the target back-end block for ``Replication``:
+
+   .. code-block:: ini
+
+      [VOLUME_BACKEND]
+      volume_driver = VOLUME_DRIVER
+      cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf.xml
+      volume_backend_name = Huawei_Storage
+      replication_device =
+          backend_id: xxx,
+          storage_pool: xxx,
+          san_address: https://x.x.x.x:8088/deviceManager/rest/,
+          san_user: xxx,
+          san_password: xxx,
+          iscsi_default_target_ip: x.x.x.x
+
+   .. note::
+
+      By default, the value for ``Hypermetro`` and ``Replication`` is
+      ``None``. For details about the parameters in the configuration file,
+      see the `Configuration file parameters`_ section.
+
+   The ``volume_driver`` value for each product is as below:
+
+   .. code-block:: ini
+
+      # For iSCSI
+      volume_driver = cinder.volume.drivers.huawei.huawei_driver.HuaweiISCSIDriver
+
+      # For FC
+      volume_driver = cinder.volume.drivers.huawei.huawei_driver.HuaweiFCDriver
+
+#. Run the :command:`service cinder-volume restart` command to restart the
+   Block Storage service.
+
+Configuring iSCSI Multipathing
+------------------------------
+
+To configure iSCSI Multipathing, follow the steps below:
+
+#. Add the port group settings in the Huawei-customized driver configuration
+   file and configure the port group name needed by an initiator.
+
+   .. code-block:: xml
+
+      <iSCSI>
+          <DefaultTargetIP>x.x.x.x</DefaultTargetIP>
+          <Initiator Name="xxxxxx" TargetPortGroup="xxxx" />
+      </iSCSI>
+
+#. Enable the multipathing switch of the Compute service module.
+
+   Add ``volume_use_multipath = True`` in the ``[libvirt]`` section of
+   ``/etc/nova/nova.conf``.
+
+#. Run the :command:`service nova-compute restart` command to restart the
+   ``nova-compute`` service.
+
+Configuring FC Multipathing
+---------------------------
+
+To configure FC Multipathing, follow the steps below:
+
+#. Enable the multipathing switch of the Compute service module.
+
+   Add ``volume_use_multipath = True`` in the ``[libvirt]`` section of
+   ``/etc/nova/nova.conf``.
+
+#. Run the :command:`service nova-compute restart` command to restart the
+   ``nova-compute`` service. 
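+
+For reference, the Compute-side change described in the iSCSI and FC
+multipathing steps above is the same single setting in the ``[libvirt]``
+section of ``/etc/nova/nova.conf``; a minimal sketch looks like this:
+
+.. code-block:: ini
+
+   [libvirt]
+   volume_use_multipath = True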
+ +Configuring CHAP and ALUA +------------------------- + +On a public network, any application server whose IP address resides on the +same network segment as that of the storage systems iSCSI host port can access +the storage system and perform read and write operations in it. This poses +risks to the data security of the storage system. To ensure the storage +systems access security, you can configure ``CHAP`` authentication to control +application servers access to the storage system. + +Adjust the driver configuration file as follows: + +.. code-block:: xml + + + +``ALUA`` indicates a multipathing mode. 0 indicates that ``ALUA`` is disabled. +1 indicates that ``ALUA`` is enabled. ``CHAPinfo`` indicates the user name and +password authenticated by ``CHAP``. The format is ``mmuser; mm-user@storage``. +The user name and password are separated by semicolons (``;``). + +Configuring multiple storage +---------------------------- + +Multiple storage systems configuration example: + +.. code-block:: ini + + enabled_backends = v3_fc, 18000_fc + [v3_fc] + volume_driver = cinder.volume.drivers.huawei.huawei_driver.HuaweiFCDriver + cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf_v3_fc.xml + volume_backend_name = huawei_v3_fc + [18000_fc] + volume_driver = cinder.volume.drivers.huawei.huawei_driver.HuaweiFCDriver + cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf_18000_fc.xml + volume_backend_name = huawei_18000_fc + +Configuration file parameters +----------------------------- + +This section describes mandatory and optional configuration file parameters +of the Huawei volume driver. + +.. list-table:: **Mandatory parameters** + :widths: 10 10 50 10 + :header-rows: 1 + + * - Parameter + - Default value + - Description + - Applicable to + * - Product + - ``-`` + - Type of a storage product. Possible values are ``TV2``, ``18000`` and + ``V3``. + - All + * - Protocol + - ``-`` + - Type of a connection protocol. The possible value is either ``'iSCSI'`` + or ``'FC'``. + - All + * - RestURL + - ``-`` + - Access address of the REST interface, + ``https://x.x.x.x/devicemanager/rest/``. The value ``x.x.x.x`` indicates + the management IP address. OceanStor 18000 uses the preceding setting, + and V2 and V3 requires you to add port number ``8088``, for example, + ``https://x.x.x.x:8088/deviceManager/rest/``. If you need to configure + multiple RestURL, separate them by semicolons (;). + - All + * - UserName + - ``-`` + - User name of a storage administrator. + - All + * - UserPassword + - ``-`` + - Password of a storage administrator. + - All + * - StoragePool + - ``-`` + - Name of a storage pool to be used. If you need to configure multiple + storage pools, separate them by semicolons (``;``). + - All + +.. note:: + + The value of ``StoragePool`` cannot contain Chinese characters. + +.. list-table:: **Optional parameters** + :widths: 20 10 50 15 + :header-rows: 1 + + * - Parameter + - Default value + - Description + - Applicable to + * - LUNType + - Thick + - Type of the LUNs to be created. The value can be ``Thick`` or ``Thin``. Dorado series only support ``Thin`` LUNs. + - All + * - WriteType + - 1 + - Cache write type, possible values are: ``1`` (write back), ``2`` + (write through), and ``3`` (mandatory write back). + - All + * - LUNcopyWaitInterval + - 5 + - After LUN copy is enabled, the plug-in frequently queries the copy + progress. You can set a value to specify the query interval. 
+ - All + * - Timeout + - 432000 + - Timeout interval for waiting LUN copy of a storage device to complete. + The unit is second. + - All + * - Initiator Name + - ``-`` + - Name of a compute node initiator. + - All + * - Initiator TargetIP + - ``-`` + - IP address of the iSCSI port provided for compute nodes. + - All + * - Initiator TargetPortGroup + - ``-`` + - IP address of the iSCSI target port that is provided for compute + nodes. + - All + * - DefaultTargetIP + - ``-`` + - Default IP address of the iSCSI target port that is provided for + compute nodes. + - All + * - OSType + - Linux + - Operating system of the Nova compute node's host. + - All + * - HostIP + - ``-`` + - IP address of the Nova compute node's host. + - All + * - metro_san_user + - ``-`` + - User name of a storage administrator of hypermetro remote device. + - V3R3/2600 V3R5/18000 V3R3 + * - metro_san_password + - ``-`` + - Password of a storage administrator of hypermetro remote device. + - V3R3/2600 V3R5/18000 V3R3 + * - metro_domain_name + - ``-`` + - Hypermetro domain name configured on ISM. + - V3R3/2600 V3R5/18000 V3R3 + * - metro_san_address + - ``-`` + - Access address of the REST interface, https://x.x.x.x/devicemanager/rest/. The value x.x.x.x indicates the management IP address. + - V3R3/2600 V3R5/18000 V3R3 + * - metro_storage_pools + - ``-`` + - Remote storage pool for hypermetro. + - V3R3/2600 V3R5/18000 V3R3 + * - backend_id + - ``-`` + - Target device ID. + - All + * - storage_pool + - ``-`` + - Pool name of target backend when failover for replication. + - All + * - san_address + - ``-`` + - Access address of the REST interface, https://x.x.x.x/devicemanager/rest/. The value x.x.x.x indicates the management IP address. + - All + * - san_user + - ``-`` + - User name of a storage administrator of replication remote device. + - All + * - san_password + - ``-`` + - Password of a storage administrator of replication remote device. + - All + * - iscsi_default_target_ip + - ``-`` + - Remote transacton port IP. + - All +.. important:: + + The ``Initiator Name``, ``Initiator TargetIP``, and + ``Initiator TargetPortGroup`` are ``ISCSI`` parameters and therefore not + applicable to ``FC``. diff --git a/doc/source/config-reference/block-storage/drivers/ibm-flashsystem-volume-driver.rst b/doc/source/config-reference/block-storage/drivers/ibm-flashsystem-volume-driver.rst new file mode 100644 index 00000000000..a2c09686633 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/ibm-flashsystem-volume-driver.rst @@ -0,0 +1,242 @@ +============================= +IBM FlashSystem volume driver +============================= + +The volume driver for FlashSystem provides OpenStack Block Storage hosts +with access to IBM FlashSystems. + +Configure FlashSystem +~~~~~~~~~~~~~~~~~~~~~ + +Configure storage array +----------------------- + +The volume driver requires a pre-defined array. You must create an +array on the FlashSystem before using the volume driver. An existing array +can also be used and existing data will not be deleted. + +.. note:: + + FlashSystem can only create one array, so no configuration option is + needed for the IBM FlashSystem driver to assign it. + +Configure user authentication for the driver +-------------------------------------------- + +The driver requires access to the FlashSystem management interface using +SSH. It should be provided with the FlashSystem management IP using the +``san_ip`` flag, and the management port should be provided by the +``san_ssh_port`` flag. 
By default, the port value is configured to be +port 22 (SSH). + +.. note:: + + Make sure the compute node running the ``cinder-volume`` driver has SSH + network access to the storage system. + +Using password authentication, assign a password to the user on the +FlashSystem. For more detail, see the driver configuration flags +for the user and password here: :ref:`config_fc_flags` +or :ref:`config_iscsi_flags`. + +IBM FlashSystem FC driver +~~~~~~~~~~~~~~~~~~~~~~~~~ + +Data Path configuration +----------------------- + +Using Fiber Channel (FC), each FlashSystem node should have at least one +WWPN port configured. If the ``flashsystem_multipath_enabled`` flag is +set to ``True`` in the Block Storage service configuration file, the driver +uses all available WWPNs to attach the volume to the instance. If the flag is +not set, the driver uses the WWPN associated with the volume's preferred node +(if available). Otherwise, it uses the first available WWPN of the system. The +driver obtains the WWPNs directly from the storage system. You do not need to +provide these WWPNs to the driver. + +.. note:: + + Using FC, ensure that the block storage hosts have FC connectivity + to the FlashSystem. + +.. _config_fc_flags: + +Enable IBM FlashSystem FC driver +-------------------------------- + +Set the volume driver to the FlashSystem driver by setting the +``volume_driver`` option in the ``cinder.conf`` configuration file, +as follows: + +.. code-block:: ini + + volume_driver = cinder.volume.drivers.ibm.flashsystem_fc.FlashSystemFCDriver + +To enable the IBM FlashSystem FC driver, configure the following options in the +``cinder.conf`` configuration file: + +.. list-table:: List of configuration flags for IBM FlashSystem FC driver + :header-rows: 1 + + * - Flag name + - Type + - Default + - Description + * - ``san_ip`` + - Required + - + - Management IP or host name + * - ``san_ssh_port`` + - Optional + - 22 + - Management port + * - ``san_login`` + - Required + - + - Management login user name + * - ``san_password`` + - Required + - + - Management login password + * - ``flashsystem_connection_protocol`` + - Required + - + - Connection protocol should be set to ``FC`` + * - ``flashsystem_multipath_enabled`` + - Required + - + - Enable multipath for FC connections + * - ``flashsystem_multihost_enabled`` + - Optional + - ``True`` + - Enable mapping vdisks to multiple hosts [1]_ + +.. [1] + This option allows the driver to map a vdisk to more than one host at + a time. This scenario occurs during migration of a virtual machine + with an attached volume; the volume is simultaneously mapped to both + the source and destination compute hosts. If your deployment does not + require attaching vdisks to multiple hosts, setting this flag to + ``False`` will provide added safety. + +IBM FlashSystem iSCSI driver +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Network configuration +--------------------- + +Using iSCSI, each FlashSystem node should have at least one iSCSI port +configured. iSCSI IP addresses of IBM FlashSystem can be obtained by +FlashSystem GUI or CLI. For more information, see the +appropriate IBM Redbook for the FlashSystem. + +.. note:: + + Using iSCSI, ensure that the compute nodes have iSCSI network access + to the IBM FlashSystem. + +.. _config_iscsi_flags: + +Enable IBM FlashSystem iSCSI driver +----------------------------------- + +Set the volume driver to the FlashSystem driver by setting the +``volume_driver`` option in the ``cinder.conf`` configuration file, as +follows: + +.. 
code-block:: ini + + volume_driver = cinder.volume.drivers.ibm.flashsystem_iscsi.FlashSystemISCSIDriver + +To enable IBM FlashSystem iSCSI driver, configure the following options +in the ``cinder.conf`` configuration file: + + +.. list-table:: List of configuration flags for IBM FlashSystem iSCSI driver + :header-rows: 1 + + * - Flag name + - Type + - Default + - Description + * - ``san_ip`` + - Required + - + - Management IP or host name + * - ``san_ssh_port`` + - Optional + - 22 + - Management port + * - ``san_login`` + - Required + - + - Management login user name + * - ``san_password`` + - Required + - + - Management login password + * - ``flashsystem_connection_protocol`` + - Required + - + - Connection protocol should be set to ``iSCSI`` + * - ``flashsystem_multihost_enabled`` + - Optional + - ``True`` + - Enable mapping vdisks to multiple hosts [2]_ + * - ``iscsi_ip_address`` + - Required + - + - Set to one of the iSCSI IP addresses obtained by FlashSystem GUI or CLI [3]_ + * - ``flashsystem_iscsi_portid`` + - Required + - + - Set to the id of the ``iscsi_ip_address`` obtained by FlashSystem GUI or CLI [4]_ + +.. [2] + This option allows the driver to map a vdisk to more than one host at + a time. This scenario occurs during migration of a virtual machine + with an attached volume; the volume is simultaneously mapped to both + the source and destination compute hosts. If your deployment does not + require attaching vdisks to multiple hosts, setting this flag to + ``False`` will provide added safety. + +.. [3] + On the cluster of the FlashSystem, the ``iscsi_ip_address`` column is the + seventh column ``IP_address`` of the output of ``lsportip``. + +.. [4] + On the cluster of the FlashSystem, port ID column is the first + column ``id`` of the output of ``lsportip``, + not the sixth column ``port_id``. + +Limitations and known issues +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +IBM FlashSystem only works when: + +.. code-block:: ini + + open_access_enabled=off + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +These operations are supported: + +- Create, delete, attach, and detach volumes. + +- Create, list, and delete volume snapshots. + +- Create a volume from a snapshot. + +- Copy an image to a volume. + +- Copy a volume to an image. + +- Clone a volume. + +- Extend a volume. + +- Get volume statistics. + +- Manage and unmanage a volume. diff --git a/doc/source/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.rst b/doc/source/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.rst new file mode 100644 index 00000000000..a1708616a6d --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.rst @@ -0,0 +1,228 @@ +================================ +IBM Spectrum Scale volume driver +================================ +IBM Spectrum Scale is a flexible software-defined storage that can be +deployed as high performance file storage or a cost optimized +large-scale content repository. IBM Spectrum Scale, previously known as +IBM General Parallel File System (GPFS), is designed to scale performance +and capacity with no bottlenecks. IBM Spectrum Scale is a cluster file system +that provides concurrent access to file systems from multiple nodes. The +storage provided by these nodes can be direct attached, network attached, +SAN attached, or a combination of these methods. 
Spectrum Scale provides +many features beyond common data access, including data replication, +policy based storage management, and space efficient file snapshot and +clone operations. + +How the Spectrum Scale volume driver works +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The Spectrum Scale volume driver, named ``gpfs.py``, enables the use of +Spectrum Scale in a fashion similar to that of the NFS driver. With +the Spectrum Scale driver, instances do not actually access a storage +device at the block level. Instead, volume backing files are created +in a Spectrum Scale file system and mapped to instances, which emulate +a block device. + +.. note:: + + Spectrum Scale must be installed and cluster has to be created on the + storage nodes in the OpenStack environment. A file system must also be + created and mounted on these nodes before configuring the cinder service + to use Spectrum Scale storage.For more details, please refer to + `Spectrum Scale product documentation `_. + +Optionally, the Image service can be configured to store glance images +in a Spectrum Scale file system. When a Block Storage volume is created +from an image, if both image data and volume data reside in the same +Spectrum Scale file system, the data from image file is moved efficiently +to the volume file using copy-on-write optimization strategy. + +Supported operations +~~~~~~~~~~~~~~~~~~~~ +- Create, delete, attach, and detach volumes. +- Create, delete volume snapshots. +- Create a volume from a snapshot. +- Create cloned volumes. +- Extend a volume. +- Migrate a volume. +- Retype a volume. +- Create, delete consistency groups. +- Create, delete consistency group snapshots. +- Copy an image to a volume. +- Copy a volume to an image. +- Backup and restore volumes. + +Driver configurations +~~~~~~~~~~~~~~~~~~~~~ + +The Spectrum Scale volume driver supports three modes of deployment. + +Mode 1 – Pervasive Spectrum Scale Client +---------------------------------------- + +When Spectrum Scale is running on compute nodes as well as on the cinder node. +For example, Spectrum Scale filesystem is available to both Compute and +Block Storage services as a local filesystem. + +To use Spectrum Scale driver in this deployment mode, set the ``volume_driver`` +in the ``cinder.conf`` as: + +.. code-block:: ini + + volume_driver = cinder.volume.drivers.ibm.gpfs.GPFSDriver + +The following table contains the configuration options supported by the +Spectrum Scale driver in this deployment mode. + +.. include:: ../../tables/cinder-ibm_gpfs.rst + +.. note:: + + The ``gpfs_images_share_mode`` flag is only valid if the Image + Service is configured to use Spectrum Scale with the + ``gpfs_images_dir`` flag. When the value of this flag is + ``copy_on_write``, the paths specified by the ``gpfs_mount_point_base`` + and ``gpfs_images_dir`` flags must both reside in the same GPFS + file system and in the same GPFS file set. + +Mode 2 – Remote Spectrum Scale Driver with Local Compute Access +--------------------------------------------------------------- + +When Spectrum Scale is running on compute nodes, but not on the Block Storage +node. For example, Spectrum Scale filesystem is only available to Compute +service as Local filesystem where as Block Storage service accesses Spectrum +Scale remotely. In this case, ``cinder-volume`` service running Spectrum Scale +driver access storage system over SSH and creates volume backing files to make +them available on the compute nodes. 
This mode is typically deployed when the +cinder and glance services are running inside a Linux container. The container +host should have Spectrum Scale client running and GPFS filesystem mount path +should be bind mounted into the Linux containers. + +.. note:: + + Note that the user IDs present in the containers should match as that in the + host machines. For example, the containers running cinder and glance + services should be priviledged containers. + +To use Spectrum Scale driver in this deployment mode, set the ``volume_driver`` +in the ``cinder.conf`` as: + +.. code-block:: ini + + volume_driver = cinder.volume.drivers.ibm.gpfs.GPFSRemoteDriver + +The following table contains the configuration options supported by the +Spectrum Scale driver in this deployment mode. + +.. include:: ../../tables/cinder-ibm_gpfs_remote.rst + +.. note:: + + The ``gpfs_images_share_mode`` flag is only valid if the Image + Service is configured to use Spectrum Scale with the + ``gpfs_images_dir`` flag. When the value of this flag is + ``copy_on_write``, the paths specified by the ``gpfs_mount_point_base`` + and ``gpfs_images_dir`` flags must both reside in the same GPFS + file system and in the same GPFS file set. + +Mode 3 – Remote Spectrum Scale Access +------------------------------------- + +When both Compute and Block Storage nodes are not running Spectrum Scale +software and do not have access to Spectrum Scale file system directly as +local filesystem. In this case, we create an NFS export on the volume path +and make it available on the cinder node and on compute nodes. + +Optionally, if one wants to use the copy-on-write optimization to create +bootable volumes from glance images, one need to also export the glance +images path and mount it on the nodes where glance and cinder services +are running. The cinder and glance services will access the GPFS +filesystem through NFS. + +To use Spectrum Scale driver in this deployment mode, set the ``volume_driver`` +in the ``cinder.conf`` as: + +.. code-block:: ini + + volume_driver = cinder.volume.drivers.ibm.gpfs.GPFSNFSDriver + +The following table contains the configuration options supported by the +Spectrum Scale driver in this deployment mode. + +.. include:: ../../tables/cinder-ibm_gpfs_nfs.rst + +Additionally, all the options of the base NFS driver are applicable +for GPFSNFSDriver. The above table lists the basic configuration +options which are needed for initialization of the driver. + +.. note:: + + The ``gpfs_images_share_mode`` flag is only valid if the Image + Service is configured to use Spectrum Scale with the + ``gpfs_images_dir`` flag. When the value of this flag is + ``copy_on_write``, the paths specified by the ``gpfs_mount_point_base`` + and ``gpfs_images_dir`` flags must both reside in the same GPFS + file system and in the same GPFS file set. + + +Volume creation options +~~~~~~~~~~~~~~~~~~~~~~~ + +It is possible to specify additional volume configuration options on a +per-volume basis by specifying volume metadata. The volume is created +using the specified options. Changing the metadata after the volume is +created has no effect. The following table lists the volume creation +options supported by the GPFS volume driver. + +.. list-table:: **Volume Create Options for Spectrum Scale Volume Drivers** + :widths: 10 25 + :header-rows: 1 + + * - Metadata Item Name + - Description + * - fstype + - Specifies whether to create a file system or a swap area on the new volume. 
If fstype=swap is specified, the mkswap command is used to create a swap area. Otherwise the mkfs command is passed the specified file system type, for example ext3, ext4 or ntfs.
+   * - fslabel
+     - Sets the file system label for the file system specified by the fstype option. This value is only used if fstype is specified.
+   * - data_pool_name
+     - Specifies the GPFS storage pool to which the volume is to be assigned. Note: The GPFS storage pool must already have been created.
+   * - replicas
+     - Specifies how many copies of the volume file to create. Valid values are 1, 2, and, for Spectrum Scale V3.5.0.7 and later, 3. This value cannot be greater than the value of the MaxDataReplicas attribute of the file system.
+   * - dio
+     - Enables or disables the Direct I/O caching policy for the volume file. Valid values are yes and no.
+   * - write_affinity_depth
+     - Specifies the allocation policy to be used for the volume file. Note: This option only works if allow-write-affinity is set for the GPFS data pool.
+   * - block_group_factor
+     - Specifies how many blocks are laid out sequentially in the volume file to behave as a single large block. Note: This option only works if allow-write-affinity is set for the GPFS data pool.
+   * - write_affinity_failure_group
+     - Specifies the range of nodes (in GPFS shared nothing architecture) where replicas of blocks in the volume file are to be written. See Spectrum Scale documentation for more details about this option.
+
+This example shows the creation of a 50 GB volume with an ``ext4`` file
+system labeled ``newfs`` and direct I/O enabled:
+
+.. code-block:: console
+
+   $ openstack volume create --property fstype=ext4 --property fslabel=newfs \
+     --property dio=yes --size 50 VOLUME
+
+Note that if the volume metadata is changed later, the changes are not
+reflected on the back end. You must manually update the corresponding
+volume attributes on the Spectrum Scale file system.
+
+Operational notes for GPFS driver
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Volume snapshots are implemented using the GPFS file clone feature.
+Whenever a new snapshot is created, the snapshot file is efficiently
+created as a read-only clone parent of the volume, and the volume file
+uses copy-on-write optimization strategy to minimize data movement.
+
+Similarly when a new volume is created from a snapshot or from an
+existing volume, the same approach is taken. The same approach is also
+used when a new volume is created from an Image service image, if the
+source image is in raw format, and ``gpfs_images_share_mode`` is set to
+``copy_on_write``.
+
+The Spectrum Scale driver supports the encrypted volume back-end feature.
+To encrypt a volume at rest, specify the extra specification
+``gpfs_encryption_rest = True``.
diff --git a/doc/source/config-reference/block-storage/drivers/ibm-storage-volume-driver.rst b/doc/source/config-reference/block-storage/drivers/ibm-storage-volume-driver.rst
new file mode 100644
index 00000000000..fe188e84319
--- /dev/null
+++ b/doc/source/config-reference/block-storage/drivers/ibm-storage-volume-driver.rst
@@ -0,0 +1,172 @@
+================================
+IBM Storage Driver for OpenStack
+================================
+
+Introduction
+~~~~~~~~~~~~
+The IBM Storage Driver for OpenStack is a software component of the
+OpenStack cloud environment that enables utilization of storage
+resources provided by supported IBM storage systems. 
+ +The driver was validated on the following storage systems: + +* IBM DS8000 Family +* IBM FlashSystem A9000 +* IBM FlashSystem A9000R +* IBM Spectrum Accelerate +* IBM XIV Storage System + +After the driver is configured on the OpenStack cinder nodes, storage volumes +can be allocated by the cinder nodes to the nova nodes. Virtual machines on +the nova nodes can then utilize these storage resources. + +.. note:: + Unless stated otherwise, all references to XIV storage + system in this guide relate all members of the Spectrum Accelerate + Family (XIV, Spectrum Accelerate and FlashSystem A9000/A9000R). + +Concept diagram +--------------- +This figure illustrates how an IBM storage system is connected +to the OpenStack cloud environment and provides storage resources when the +IBM Storage Driver for OpenStack is configured on the OpenStack cinder nodes. +The OpenStack cloud is connected to the IBM storage system over Fibre +Channel or iSCSI (DS8000 systems support only Fibre Channel connections). +Remote cloud users can issue requests for storage resources from the +OpenStack cloud. These requests are transparently handled by the IBM Storage +Driver, which communicates with the IBM storage system and controls the +storage volumes on it. The IBM storage resources are then provided to the +nova nodes in the OpenStack cloud. + +.. figure:: ../../figures/ibm-storage-nova-concept.png + +Configuration +~~~~~~~~~~~~~ + +Configure the driver manually by changing the ``cinder.conf`` file as +follows: + + +.. code-block:: ini + + volume_driver = cinder.volume.drivers.ibm.ibm_storage.IBMStorageDriver + +.. include:: ../../tables/cinder-ibm_storage.rst + + + +Security +~~~~~~~~ + +The following information provides an overview of security for the IBM +Storage Driver for OpenStack. + +Avoiding man-in-the-middle attacks +---------------------------------- + +When using a Spectrum Accelerate Family product, you can prevent +man-in-the-middle (MITM) attacks by following these rules: + +* Upgrade to IBM XIV storage system version 11.3 or later. + +* If working in a secure mode, do not work insecurely against another storage + system in the same environment. + +* Validate the storage certificate. If you are using an XIV-provided + certificate, use the CA file that was provided with your storage system + (``XIV-CA.pem``). The certificate files should be copied to one of the + following directories: + + * ``/etc/ssl/certs`` + * ``/etc/ssl/certs/xiv`` + * ``/etc/pki`` + * ``/etc/pki/xiv`` + + If you are using your own certificates, copy them to the same directories + with the prefix ``XIV`` and in the ``.pem`` format. + For example: ``XIV-my_cert.pem``. + +* To prevent the CVE-2014-3566 MITM attack, follow the OpenStack + community + `directions `_. + +Troubleshooting +~~~~~~~~~~~~~~~ + +Refer to this information to troubleshoot technical problems that you +might encounter when using the IBM Storage Driver for OpenStack. + +Checking the cinder node log files +---------------------------------- + +The cinder log files record operation information that might be useful +for troubleshooting. + +To achieve optimal and clear logging of events, activate the verbose +logging level in the ``cinder.conf`` file, located in the ``/etc/cinder`` +folder. Add the following line in the file, save the file, and then +restart the ``cinder-volume`` service: + +.. 
code-block:: console + + verbose = True + debug = True + +To turn off the verbose logging level, change ``True`` to ``False``, +save the file, and then restart the ``cinder-volume`` service. + +Check the log files on a periodic basis to ensure that the IBM +Storage Driver is functioning properly: + +#. Log into the cinder node. +#. Go to the ``/var/log/cinder`` folder +#. Open the activity log file named ``cinder-volume.log`` or ``volume.log``. + The IBM Storage Driver writes to this log file using the + ``[IBM DS8K STORAGE]`` or ``[IBM XIV STORAGE]`` prefix (depending on + the relevant storage system) for each event that it records in the file. + + +Best practices +~~~~~~~~~~~~~~ + +This section contains the general guidance and best practices. + +Working with multi-tenancy +-------------------------- +The XIV storage systems, running microcode version 11.5 or later, Spectrum +Accelerate and FlashSystem A9000/A9000R can employ multi-tenancy. + +In order to use multi-tenancy with the IBM Storage Driver for OpenStack: + +* For each storage system, verify that all predefined storage pools are + in the same domain or, that all are not in a domain. + +* Use either storage administrator or domain administrator user's + credentials, as long as the credentials grant a full access to the relevant + pool. +* If the user is a domain administrator, the storage system domain + access policy can be ``CLOSED`` (``domain_policy: access=CLOSED``). + Otherwise, verify that the storage system domain access policy is + ``OPEN`` (``domain_policy: access=OPEN``). +* If the user is not a domain administrator, the host management policy + of the storage system domain can be ``BASIC`` (``domain_policy: + host_management=BASIC``). Otherwise, verify that the storage + system domain host management policy is ``EXTENDED`` + (``domain_policy: host_management=EXTENDED``). + +Working with IBM Real-time Compression™ +--------------------------------------- +XIV storage systems running microcode version 11.6 or later, +Spectrum Accelerate and FlashSystem A9000/A9000R can employ IBM +Real-time Compression™. + +Follow these guidelines when working with compressed storage +resources using the IBM Storage Driver for OpenStack: + +* Compression mode cannot be changed for storage volumes, using + the IBM Storage Driver for OpenStack. The volumes are created + according to the default compression mode of the pool. For example, + any volume created in a compressed pool will be compressed as well. + +* The minimum size for a compressed storage volume is 87 GB. + diff --git a/doc/source/config-reference/block-storage/drivers/ibm-storwize-svc-driver.rst b/doc/source/config-reference/block-storage/drivers/ibm-storwize-svc-driver.rst new file mode 100644 index 00000000000..8c65c67c6a2 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/ibm-storwize-svc-driver.rst @@ -0,0 +1,499 @@ +========================================= +IBM Storwize family and SVC volume driver +========================================= + +The volume management driver for Storwize family and SAN Volume +Controller (SVC) provides OpenStack Compute instances with access to IBM +Storwize family or SVC storage systems. + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +Storwize/SVC driver supports the following Block Storage service volume +operations: + +- Create, list, delete, attach (map), and detach (unmap) volumes. +- Create, list, and delete volume snapshots. +- Copy an image to a volume. +- Copy a volume to an image. +- Clone a volume. 
+- Extend a volume. +- Retype a volume. +- Create a volume from a snapshot. +- Create, list, and delete consistency group. +- Create, list, and delete consistency group snapshot. +- Modify consistency group (add or remove volumes). +- Create consistency group from source (source can be a CG or CG snapshot) +- Manage an existing volume. +- Failover-host for replicated back ends. +- Failback-host for replicated back ends. + +Configure the Storwize family and SVC system +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Network configuration +--------------------- + +The Storwize family or SVC system must be configured for iSCSI, Fibre +Channel, or both. + +If using iSCSI, each Storwize family or SVC node should have at least +one iSCSI IP address. The IBM Storwize/SVC driver uses an iSCSI IP +address associated with the volume's preferred node (if available) to +attach the volume to the instance, otherwise it uses the first available +iSCSI IP address of the system. The driver obtains the iSCSI IP address +directly from the storage system. You do not need to provide these iSCSI +IP addresses directly to the driver. + +.. note:: + + If using iSCSI, ensure that the compute nodes have iSCSI network + access to the Storwize family or SVC system. + +If using Fibre Channel (FC), each Storwize family or SVC node should +have at least one WWPN port configured. The driver uses all available +WWPNs to attach the volume to the instance. The driver obtains the +WWPNs directly from the storage system. You do not need to provide +these WWPNs directly to the driver. + +.. note:: + + If using FC, ensure that the compute nodes have FC connectivity to + the Storwize family or SVC system. + +iSCSI CHAP authentication +------------------------- + +If using iSCSI for data access and the +``storwize_svc_iscsi_chap_enabled`` is set to ``True``, the driver will +associate randomly-generated CHAP secrets with all hosts on the Storwize +family system. The compute nodes use these secrets when creating +iSCSI connections. + +.. warning:: + + CHAP secrets are added to existing hosts as well as newly-created + ones. If the CHAP option is enabled, hosts will not be able to + access the storage without the generated secrets. + +.. note:: + + Not all OpenStack Compute drivers support CHAP authentication. + Please check compatibility before using. + +.. note:: + + CHAP secrets are passed from OpenStack Block Storage to Compute in + clear text. This communication should be secured to ensure that CHAP + secrets are not discovered. + +Configure storage pools +----------------------- + +The IBM Storwize/SVC driver can allocate volumes in multiple pools. +The pools should be created in advance and be provided to the driver +using the ``storwize_svc_volpool_name`` configuration flag in the form +of a comma-separated list. +For the complete list of configuration flags, see :ref:`config_flags`. + +Configure user authentication for the driver +-------------------------------------------- + +The driver requires access to the Storwize family or SVC system +management interface. The driver communicates with the management using +SSH. The driver should be provided with the Storwize family or SVC +management IP using the ``san_ip`` flag, and the management port should +be provided by the ``san_ssh_port`` flag. By default, the port value is +configured to be port 22 (SSH). Also, you can set the secondary +management IP using the ``storwize_san_secondary_ip`` flag. + +.. 
note:: + + Make sure the compute node running the cinder-volume management + driver has SSH network access to the storage system. + +To allow the driver to communicate with the Storwize family or SVC +system, you must provide the driver with a user on the storage system. +The driver has two authentication methods: password-based authentication +and SSH key pair authentication. The user should have an Administrator +role. It is suggested to create a new user for the management driver. +Please consult with your storage and security administrator regarding +the preferred authentication method and how passwords or SSH keys should +be stored in a secure manner. + +.. note:: + + When creating a new user on the Storwize or SVC system, make sure + the user belongs to the Administrator group or to another group that + has an Administrator role. + +If using password authentication, assign a password to the user on the +Storwize or SVC system. The driver configuration flags for the user and +password are ``san_login`` and ``san_password``, respectively. + +If you are using the SSH key pair authentication, create SSH private and +public keys using the instructions below or by any other method. +Associate the public key with the user by uploading the public key: +select the :guilabel:`choose file` option in the Storwize family or SVC +management GUI under :guilabel:`SSH public key`. Alternatively, you may +associate the SSH public key using the command-line interface; details can +be found in the Storwize and SVC documentation. The private key should be +provided to the driver using the ``san_private_key`` configuration flag. + +Create a SSH key pair with OpenSSH +---------------------------------- + +You can create an SSH key pair using OpenSSH, by running: + +.. code-block:: console + + $ ssh-keygen -t rsa + +The command prompts for a file to save the key pair. For example, if you +select ``key`` as the filename, two files are created: ``key`` and +``key.pub``. The ``key`` file holds the private SSH key and ``key.pub`` +holds the public SSH key. + +The command also prompts for a pass phrase, which should be empty. + +The private key file should be provided to the driver using the +``san_private_key`` configuration flag. The public key should be +uploaded to the Storwize family or SVC system using the storage +management GUI or command-line interface. + +.. note:: + + Ensure that Cinder has read permissions on the private key file. + +Configure the Storwize family and SVC driver +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Enable the Storwize family and SVC driver +----------------------------------------- + +Set the volume driver to the Storwize family and SVC driver by setting +the ``volume_driver`` option in the ``cinder.conf`` file as follows: + +iSCSI: + +.. code-block:: ini + + [svc1234] + volume_driver = cinder.volume.drivers.ibm.storwize_svc.storwize_svc_iscsi.StorwizeSVCISCSIDriver + san_ip = 1.2.3.4 + san_login = superuser + san_password = passw0rd + storwize_svc_volpool_name = cinder_pool1 + volume_backend_name = svc1234 + +FC: + +.. code-block:: ini + + [svc1234] + volume_driver = cinder.volume.drivers.ibm.storwize_svc.storwize_svc_fc.StorwizeSVCFCDriver + san_ip = 1.2.3.4 + san_login = superuser + san_password = passw0rd + storwize_svc_volpool_name = cinder_pool1 + volume_backend_name = svc1234 + +Replication configuration +------------------------- + +Add the following to the back-end specification to specify another storage +to replicate to: + +.. 
code-block:: ini
+
+   replication_device = backend_id:rep_svc,
+                        san_ip:1.2.3.5,
+                        san_login:superuser,
+                        san_password:passw0rd,
+                        pool_name:cinder_pool1
+
+The ``backend_id`` is a unique name for the remote storage. The ``san_ip``,
+``san_login``, and ``san_password`` options provide the authentication
+information for the remote storage. The ``pool_name`` is the pool name for
+the replication target volume.
+
+.. note::
+
+   Only one ``replication_device`` can be configured per back end,
+   because only one replication target is currently supported.
+
+.. _config_flags:
+
+Storwize family and SVC driver options in cinder.conf
+------------------------------------------------------
+
+The following options specify default values for all volumes. Some can
+be overridden using volume types, which are described below.
+
+.. include:: ../../tables/cinder-storwize.rst
+
+Note the following:
+
+* The authentication requires either a password (``san_password``) or
+  SSH private key (``san_private_key``). One must be specified. If
+  both are specified, the driver uses only the SSH private key.
+
+* The driver creates thin-provisioned volumes by default. The
+  ``storwize_svc_vol_rsize`` flag defines the initial physical
+  allocation percentage for thin-provisioned volumes, or if set to
+  ``-1``, the driver creates fully allocated volumes. More details about
+  the available options are available in the Storwize family and SVC
+  documentation.
+
+
+Placement with volume types
+---------------------------
+
+The IBM Storwize/SVC driver exposes capabilities that can be added to
+the ``extra specs`` of volume types, and used by the filter
+scheduler to determine placement of new volumes. Make sure to prefix
+these keys with ``capabilities:`` to indicate that the scheduler should
+use them. The following ``extra specs`` are supported:
+
+- ``capabilities:volume_backend_name`` - Specify a specific back-end
+  where the volume should be created. The back-end name is a
+  concatenation of the name of the IBM Storwize/SVC storage system as
+  shown in ``lssystem``, an underscore, and the name of the pool (mdisk
+  group). For example:
+
+  .. code-block:: ini
+
+     capabilities:volume_backend_name=myV7000_openstackpool
+
+- ``capabilities:compression_support`` - Specify a back-end according to
+  compression support. A value of ``True`` should be used to request a
+  back-end that supports compression, and a value of ``False`` will
+  request a back-end that does not support compression. If you do not
+  have constraints on compression support, do not set this key. Note
+  that specifying ``True`` does not enable compression; it only
+  requests that the volume be placed on a back-end that supports
+  compression. Example syntax:
+
+  .. code-block:: ini
+
+     capabilities:compression_support='<is> True'
+
+- ``capabilities:easytier_support`` - Similar semantics as the
+  ``compression_support`` key, but for specifying according to support
+  of the Easy Tier feature. Example syntax:
+
+  .. code-block:: ini
+
+     capabilities:easytier_support='<is> True'
+
+- ``capabilities:storage_protocol`` - Specifies the connection protocol
+  used to attach volumes of this type to instances. Legal values are
+  ``iSCSI`` and ``FC``. This ``extra specs`` value is used for both placement
+  and setting the protocol used for this volume. In the example syntax,
+  note that ``<in>`` is used, as opposed to ``<is>`` which is used in the
+  previous examples.
+
+  .. code-block:: ini
+
+     capabilities:storage_protocol='<in> FC'
+
+Configure per-volume creation options
+-------------------------------------
+
+Volume types can also be used to pass options to the IBM Storwize/SVC
+driver, which override the default values set in the configuration
+file. In contrast to the previous examples, where the ``capabilities`` scope
+was used to pass parameters to the Cinder scheduler, options can be
+passed to the IBM Storwize/SVC driver with the ``drivers`` scope.
+
+The following ``extra specs`` keys are supported by the IBM Storwize/SVC
+driver:
+
+- rsize
+- warning
+- autoexpand
+- grainsize
+- compression
+- easytier
+- multipath
+- iogrp
+
+These keys have the same semantics as their counterparts in the
+configuration file. They are set similarly; for example, ``rsize=2`` or
+``compression=False``.
+
+Example: Volume types
+---------------------
+
+In the following example, we create a volume type to specify a
+controller that supports iSCSI and compression, to use iSCSI when
+attaching the volume, and to enable compression:
+
+.. code-block:: console
+
+   $ openstack volume type create compressed
+   $ openstack volume type set --property capabilities:storage_protocol='<in> iSCSI' \
+     --property capabilities:compression_support='<is> True' \
+     --property drivers:compression=True compressed
+
+We can then create a 50 GB volume using this type:
+
+.. code-block:: console
+
+   $ openstack volume create "compressed volume" --type compressed --size 50
+
+In the following example, we create a volume type that enables
+synchronous replication (metro mirror):
+
+.. code-block:: console
+
+   $ openstack volume type create ReplicationType
+   $ openstack volume type set --property replication_type='<in> metro' \
+     --property replication_enabled='<is> True' --property volume_backend_name=svc234 ReplicationType
+
+Volume types can be used, for example, to provide users with different
+
+- performance levels (such as allocating entirely on an HDD tier,
+  using Easy Tier for an HDD-SSD mix, or allocating entirely on an SSD
+  tier)
+
+- resiliency levels (such as allocating volumes in pools with
+  different RAID levels)
+
+- features (such as enabling or disabling Real-time Compression or
+  replication volume creation)
+
+QoS
+---
+
+The Storwize driver provides QoS support for storage volumes by
+controlling the I/O amount. QoS is enabled by editing the
+``/etc/cinder/cinder.conf`` file and setting the
+``storwize_svc_allow_tenant_qos`` option to ``True``.
+
+There are three ways to set the Storwize ``IOThrottling`` parameter for
+storage volumes:
+
+- Add the ``qos:IOThrottling`` key into a QoS specification and
+  associate it with a volume type.
+
+- Add the ``qos:IOThrottling`` key into an extra specification with a
+  volume type.
+
+- Add the ``qos:IOThrottling`` key to the storage volume metadata.
+
+.. note::
+
+   If you are changing a volume type with QoS to a new volume type
+   without QoS, the QoS configuration settings will be removed.
+
+Operational notes for the Storwize family and SVC driver
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Migrate volumes
+---------------
+
+In the context of OpenStack Block Storage's volume migration feature,
+the IBM Storwize/SVC driver leverages the storage system's virtualization
+technology. When migrating a volume from one pool to another, the volume
+will appear in the destination pool almost immediately, while the
+storage moves the data in the background.
+
+.. note::
+
+   To enable this feature, both pools involved in a given volume
+   migration must have the same values for ``extent_size``. If the
+   pools have different values for ``extent_size``, the data will still
+   be moved directly between the pools (not host-side copy), but the
+   operation will be synchronous.
+
+Extend volumes
+--------------
+
+The IBM Storwize/SVC driver allows for extending a volume's size, but
+only for volumes without snapshots.
+
+Snapshots and clones
+--------------------
+
+Snapshots are implemented using FlashCopy with no background copy
+(space-efficient). Volume clones (volumes created from existing volumes)
+are implemented with FlashCopy, but with background copy enabled. This
+means that volume clones are independent, full copies. While this
+background copy is taking place, attempting to delete or extend the
+source volume will result in that operation waiting for the copy to
+complete.
+
+Volume retype
+-------------
+
+The IBM Storwize/SVC driver enables you to modify volume types. When you
+modify volume types, you can also change these extra specs properties:
+
+- rsize
+
+- warning
+
+- autoexpand
+
+- grainsize
+
+- compression
+
+- easytier
+
+- iogrp
+
+- nofmtdisk
+
+.. note::
+
+   When you change the ``rsize``, ``grainsize`` or ``compression``
+   properties, volume copies are asynchronously synchronized on the
+   array.
+
+.. note::
+
+   To change the ``iogrp`` property, IBM Storwize/SVC firmware version
+   6.4.0 or later is required.
+
+Replication operation
+---------------------
+
+A volume is only replicated if it is created with a volume type
+that has the extra spec ``replication_enabled`` set to ``<is> True``. Two
+types of replication are currently supported: async (global mirror) and
+sync (metro mirror). The replication type can be specified by a volume type
+that has the extra spec ``replication_type`` set to ``<in> global`` or
+``<in> metro``. If no ``replication_type`` is
+specified, global mirror is created for replication.
+
+.. note::
+
+   It is better to establish the partnership relationship between
+   the replication source storage and the replication target
+   storage manually on the storage back end before replication
+   volume creation.
+
+The ``failover-host`` command is designed for the case where the primary
+storage is down.
+
+.. code-block:: console
+
+   $ cinder failover-host cinder@svciscsi --backend_id target_svc_id
+
+If a failover command has been executed and the primary storage has
+been restored, it is possible to do a failback by simply specifying
+``default`` as the ``backend_id``:
+
+.. code-block:: console
+
+   $ cinder failover-host cinder@svciscsi --backend_id default
+
+.. note::
+
+   Before you perform a failback operation, synchronize the data
+   from the replication target volume to the primary one on the
+   storage back end manually, and do the failback only after the
+   synchronization is done, since the synchronization may take a long time.
+   If the synchronization is not done manually, the Storwize Block Storage
+   service driver will perform the synchronization and do the failback
+   after the synchronization is finished.
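+
+To summarize the replication settings described in this section, the
+following is a minimal sketch of a volume type that requests asynchronous
+(global mirror) replication. The back-end name ``svc234`` is reused from
+the earlier metro mirror example and is only illustrative:
+
+.. code-block:: console
+
+   $ openstack volume type create AsyncReplicationType
+   $ openstack volume type set --property replication_enabled='<is> True' \
+     --property replication_type='<in> global' \
+     --property volume_backend_name=svc234 AsyncReplicationType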
diff --git a/doc/source/config-reference/block-storage/drivers/infinidat-volume-driver.rst b/doc/source/config-reference/block-storage/drivers/infinidat-volume-driver.rst new file mode 100644 index 00000000000..6051b920b28 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/infinidat-volume-driver.rst @@ -0,0 +1,182 @@ +======================================== +INFINIDAT InfiniBox Block Storage driver +======================================== + +The INFINIDAT Block Storage volume driver provides iSCSI and Fibre Channel +support for INFINIDAT InfiniBox storage systems. + +This section explains how to configure the INFINIDAT driver. + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +* Create, delete, attach, and detach volumes. +* Create, list, and delete volume snapshots. +* Create a volume from a snapshot. +* Copy a volume to an image. +* Copy an image to a volume. +* Clone a volume. +* Extend a volume. +* Get volume statistics. +* Create, modify, delete, and list consistency groups. +* Create, modify, delete, and list snapshots of consistency groups. +* Create consistency group from consistency group or consistency group + snapshot. + +External package installation +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The driver requires the ``infinisdk`` package for communicating with +InfiniBox systems. Install the package from PyPI using the following command: + +.. code-block:: console + + $ pip install infinisdk + +Setting up the storage array +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Create a storage pool object on the InfiniBox array in advance. +The storage pool will contain volumes managed by OpenStack. +Refer to the InfiniBox manuals for details on pool management. + +Driver configuration +~~~~~~~~~~~~~~~~~~~~ + +Edit the ``cinder.conf`` file, which is usually located under the following +path ``/etc/cinder/cinder.conf``. + +* Add a section for the INFINIDAT driver back end. + +* Under the ``[DEFAULT]`` section, set the ``enabled_backends`` parameter with + the name of the new back-end section. + +Configure the driver back-end section with the parameters below. + +* Configure the driver name by setting the following parameter: + + .. code-block:: ini + + volume_driver = cinder.volume.drivers.infinidat.InfiniboxVolumeDriver + +* Configure the management IP of the InfiniBox array by adding the following + parameter: + + .. code-block:: ini + + san_ip = InfiniBox management IP + +* Configure user credentials. + + The driver requires an InfiniBox user with administrative privileges. + We recommend creating a dedicated OpenStack user account + that holds an administrative user role. + Refer to the InfiniBox manuals for details on user account management. + Configure the user credentials by adding the following parameters: + + .. code-block:: ini + + san_login = infinibox_username + san_password = infinibox_password + +* Configure the name of the InfiniBox pool by adding the following parameter: + + .. code-block:: ini + + infinidat_pool_name = Pool defined in InfiniBox + +* The back-end name is an identifier for the back end. + We recommend using the same name as the name of the section. + Configure the back-end name by adding the following parameter: + + .. code-block:: ini + + volume_backend_name = back-end name + +* Thin provisioning. + + The INFINIDAT driver supports creating thin or thick provisioned volumes. + Configure thin or thick provisioning by adding the following parameter: + + .. code-block:: ini + + san_thin_provision = true/false + + This parameter defaults to ``true``. 
+ +* Configure the connectivity protocol. + + The InfiniBox driver supports connection to the InfiniBox system in both + the fibre channel and iSCSI protocols. + Configure the desired protocol by adding the following parameter: + + .. code-block:: ini + + infinidat_storage_protocol = iscsi/fc + + This parameter defaults to ``fc``. + +* Configure iSCSI netspaces. + + When using the iSCSI protocol to connect to InfiniBox systems, you must + configure one or more iSCSI network spaces in the InfiniBox storage array. + Refer to the InfiniBox manuals for details on network space management. + Configure the names of the iSCSI network spaces to connect to by adding + the following parameter: + + .. code-block:: ini + + infinidat_iscsi_netspaces = iscsi_netspace + + Multiple network spaces can be specified by a comma separated string. + + This parameter is ignored when using the FC protocol. + +* Configure CHAP + + InfiniBox supports CHAP authentication when using the iSCSI protocol. To + enable CHAP authentication, add the following parameter: + + .. code-block:: ini + + use_chap_auth = true + + To manually define the username and password, add the following parameters: + + .. code-block:: ini + + chap_username = username + chap_password = password + + If the CHAP username or password are not defined, they will be + auto-generated by the driver. + + The CHAP parameters are ignored when using the FC protocol. + + +Configuration example +~~~~~~~~~~~~~~~~~~~~~ + +.. code-block:: ini + + [DEFAULT] + enabled_backends = infinidat-pool-a + + [infinidat-pool-a] + volume_driver = cinder.volume.drivers.infinidat.InfiniboxVolumeDriver + volume_backend_name = infinidat-pool-a + san_ip = 10.1.2.3 + san_login = openstackuser + san_password = openstackpass + san_thin_provision = true + infinidat_pool_name = pool-a + infinidat_storage_protocol = iscsi + infinidat_iscsi_netspaces = default_iscsi_space + +Driver-specific options +~~~~~~~~~~~~~~~~~~~~~~~ + +The following table contains the configuration options that are specific +to the INFINIDAT driver. + +.. include:: ../../tables/cinder-infinidat.rst diff --git a/doc/source/config-reference/block-storage/drivers/infortrend-volume-driver.rst b/doc/source/config-reference/block-storage/drivers/infortrend-volume-driver.rst new file mode 100644 index 00000000000..5a5a66b3bf6 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/infortrend-volume-driver.rst @@ -0,0 +1,130 @@ +======================== +Infortrend volume driver +======================== + +The `Infortrend `__ volume driver is a Block Storage driver +providing iSCSI and Fibre Channel support for Infortrend storages. + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +The Infortrend volume driver supports the following volume operations: + +* Create, delete, attach, and detach volumes. +* Create and delete a snapshot. +* Create a volume from a snapshot. +* Copy an image to a volume. +* Copy a volume to an image. +* Clone a volume. +* Extend a volume +* Retype a volume. +* Manage and unmanage a volume. +* Migrate a volume with back-end assistance. +* Live migrate an instance with volumes hosted on an Infortrend backend. + +System requirements +~~~~~~~~~~~~~~~~~~~ + +To use the Infortrend volume driver, the following settings are required: + +Set up Infortrend storage +------------------------- + +* Create logical volumes in advance. +* Host side setting ``Peripheral device type`` should be + ``No Device Present (Type=0x7f)``. 
+ +Set up cinder-volume node +------------------------- + +* Install Oracle Java 7 or later. + +* Download the Infortrend storage CLI from the + `release page `__, + and assign it to the default path ``/opt/bin/Infortrend/``. + +Driver configuration +~~~~~~~~~~~~~~~~~~~~ + +On ``cinder-volume`` nodes, set the following in your +``/etc/cinder/cinder.conf``, and use the following options to configure it: + +Driver options +-------------- + +.. include:: ../../tables/cinder-infortrend.rst + +iSCSI configuration example +--------------------------- + +.. code-block:: ini + + [DEFAULT] + default_volume_type = IFT-ISCSI + enabled_backends = IFT-ISCSI + + [IFT-ISCSI] + volume_driver = cinder.volume.drivers.infortrend.infortrend_iscsi_cli.InfortrendCLIISCSIDriver + volume_backend_name = IFT-ISCSI + infortrend_pools_name = POOL-1,POOL-2 + san_ip = MANAGEMENT_PORT_IP + infortrend_slots_a_channels_id = 0,1,2,3 + infortrend_slots_b_channels_id = 0,1,2,3 + +Fibre Channel configuration example +----------------------------------- + +.. code-block:: ini + + [DEFAULT] + default_volume_type = IFT-FC + enabled_backends = IFT-FC + + [IFT-FC] + volume_driver = cinder.volume.drivers.infortrend.infortrend_fc_cli.InfortrendCLIFCDriver + volume_backend_name = IFT-FC + infortrend_pools_name = POOL-1,POOL-2,POOL-3 + san_ip = MANAGEMENT_PORT_IP + infortrend_slots_a_channels_id = 4,5 + +Multipath configuration +----------------------- + +* Enable multipath for image transfer in ``/etc/cinder/cinder.conf``. + + .. code-block:: ini + + use_multipath_for_image_xfer = True + + Restart the ``cinder-volume`` service. + +* Enable multipath for volume attach and detach in ``/etc/nova/nova.conf``. + + .. code-block:: ini + + [libvirt] + ... + volume_use_multipath = True + ... + + Restart the ``nova-compute`` service. + +Extra spec usage +---------------- + +* ``infortrend:provisioning`` - Defaults to ``full`` provisioning, + the valid values are thin and full. + +* ``infortrend:tiering`` - Defaults to use ``all`` tiering, + the valid values are subsets of 0, 1, 2, 3. + + If multi-pools are configured in ``cinder.conf``, + it can be specified for each pool, separated by semicolon. + + For example: + + ``infortrend:provisioning``: ``POOL-1:thin; POOL-2:full`` + + ``infortrend:tiering``: ``POOL-1:all; POOL-2:0; POOL-3:0,1,3`` + +For more details, see `Infortrend documents `_. diff --git a/doc/source/config-reference/block-storage/drivers/itri-disco-driver.rst b/doc/source/config-reference/block-storage/drivers/itri-disco-driver.rst new file mode 100644 index 00000000000..f3fe66b533c --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/itri-disco-driver.rst @@ -0,0 +1,24 @@ +======================== +ITRI DISCO volume driver +======================== + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +The DISCO driver supports the following features: + +* Volume create and delete +* Volume attach and detach +* Snapshot create and delete +* Create volume from snapshot +* Get volume stats +* Copy image to volume +* Copy volume to image +* Clone volume +* Extend volume +* Manage and unmanage volume + +Configuration options +~~~~~~~~~~~~~~~~~~~~~ + +.. 
include:: ../../tables/cinder-disco.rst diff --git a/doc/source/config-reference/block-storage/drivers/kaminario-driver.rst b/doc/source/config-reference/block-storage/drivers/kaminario-driver.rst new file mode 100644 index 00000000000..160436db186 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/kaminario-driver.rst @@ -0,0 +1,273 @@ +======================================================== +Kaminario K2 all-flash array iSCSI and FC volume drivers +======================================================== + +Kaminario's K2 all-flash array leverages a unique software-defined +architecture that delivers highly valued predictable performance, scalability +and cost-efficiency. + +Kaminario's K2 all-flash iSCSI and FC arrays can be used in +OpenStack Block Storage for providing block storage using +``KaminarioISCSIDriver`` class and ``KaminarioFCDriver`` class respectively. + +This documentation explains how to configure and connect the block storage +nodes to one or more K2 all-flash arrays. + +Driver requirements +~~~~~~~~~~~~~~~~~~~ + +- Kaminario's K2 all-flash iSCSI and/or FC array + +- K2 REST API version >= 2.2.0 + +- K2 version 5.8 or later are supported + +- ``krest`` python library(version 1.3.1 or later) should be installed on the + Block Storage node using :command:`sudo pip install krest` + +- The Block Storage Node should also have a data path to the K2 array + for the following operations: + + - Create a volume from snapshot + - Clone a volume + - Copy volume to image + - Copy image to volume + - Retype 'dedup without replication'<->'nodedup without replication' + +Supported operations +~~~~~~~~~~~~~~~~~~~~~ + +- Create, delete, attach, and detach volumes. +- Create and delete volume snapshots. +- Create a volume from a snapshot. +- Copy an image to a volume. +- Copy a volume to an image. +- Clone a volume. +- Extend a volume. +- Retype a volume. +- Manage and unmanage a volume. +- Replicate volume with failover and failback support to K2 array. + +Limitations and known issues +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +If your OpenStack deployment is not setup to use multipath, the network +connectivity of the K2 all-flash array will use a single physical port. + +This may significantly limit the following benefits provided by K2: + +- available bandwidth +- high-availability +- non disruptive-upgrade + +The following steps are required to setup multipath access on the +Compute and the Block Storage nodes + +#. Install multipath software on both Compute and Block Storage nodes. + + For example: + + .. code-block:: console + + # apt-get install sg3-utils multipath-tools + +#. In the ``[libvirt]`` section of the ``nova.conf`` configuration file, + specify ``iscsi_use_multipath=True``. This option is valid for both iSCSI + and FC drivers. + + Additional resources: Kaminario Host Configuration Guide + for Linux (for configuring multipath) + +#. Restart the compute service for the changes to take effect. + + .. code-block:: console + + # service nova-compute restart + + +Configure single Kaminario iSCSI/FC back end +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +This section details the steps required to configure the Kaminario +Cinder Driver for single FC or iSCSI backend. + +#. In the ``cinder.conf`` configuration file under the ``[DEFAULT]`` + section, set the ``scheduler_default_filters`` parameter: + + .. code-block:: ini + + [DEFAULT] + scheduler_default_filters = DriverFilter,CapabilitiesFilter + + See following links for more information: + ``_ + ``_ + +#. 
Under the ``[DEFAULT]`` section, set the enabled_backends parameter + with the iSCSI or FC back-end group + + .. code-block:: ini + + [DEFAULT] + # For iSCSI + enabled_backends = kaminario-iscsi-1 + + # For FC + # enabled_backends = kaminario-fc-1 + +#. Add a back-end group section for back-end group specified + in the enabled_backends parameter + +#. In the newly created back-end group section, set the + following configuration options: + + .. code-block:: ini + + [kaminario-iscsi-1] + # Management IP of Kaminario K2 All-Flash iSCSI/FC array + san_ip = 10.0.0.10 + # Management username of Kaminario K2 All-Flash iSCSI/FC array + san_login = username + # Management password of Kaminario K2 All-Flash iSCSI/FC array + san_password = password + # Enable Kaminario K2 iSCSI/FC driver + volume_driver = cinder.volume.drivers.kaminario.kaminario_iscsi.KaminarioISCSIDriver + # volume_driver = cinder.volume.drivers.kaminario.kaminario_fc.KaminarioFCDriver + + # Backend name + # volume_backend_name = kaminario_fc_1 + volume_backend_name = kaminario_iscsi_1 + + # K2 driver calculates max_oversubscription_ratio on setting below + # option as True. Default value is False + # auto_calc_max_oversubscription_ratio = False + + # Set a limit on total number of volumes to be created on K2 array, for example: + # filter_function = "capabilities.total_volumes < 250" + + # For replication, replication_device must be set and the replication peer must be configured + # on the primary and the secondary K2 arrays + # Syntax: + # replication_device = backend_id:,login:,password:,rpo: + # where: + # s-array-ip is the secondary K2 array IP + # rpo must be either 60(1 min) or multiple of 300(5 min) + # Example: + # replication_device = backend_id:10.0.0.50,login:kaminario,password:kaminario,rpo:300 + + # Suppress requests library SSL certificate warnings on setting this option as True + # Default value is 'False' + # suppress_requests_ssl_warnings = False + +#. Restart the Block Storage services for the changes to take effect: + + .. code-block:: console + + # service cinder-api restart + # service cinder-scheduler restart + # service cinder-volume restart + +Setting multiple Kaminario iSCSI/FC back ends +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The following steps are required to configure multiple K2 iSCSI/FC backends: + +#. In the :file:`cinder.conf` file under the [DEFAULT] section, + set the enabled_backends parameter with the comma-separated + iSCSI/FC back-end groups. + + .. code-block:: ini + + [DEFAULT] + enabled_backends = kaminario-iscsi-1, kaminario-iscsi-2, kaminario-iscsi-3 + +#. Add a back-end group section for each back-end group specified + in the enabled_backends parameter + +#. For each back-end group section, enter the configuration options as + described in the above section + ``Configure single Kaminario iSCSI/FC back end`` + + See `Configure multiple-storage back ends + `__ + for additional information. + +#. Restart the cinder volume service for the changes to take effect. + + .. code-block:: console + + # service cinder-volume restart + +Creating volume types +~~~~~~~~~~~~~~~~~~~~~ + +Create volume types for supporting volume creation on +the multiple K2 iSCSI/FC backends. +Set following extras-specs in the volume types: + +- volume_backend_name : Set value of this spec according to the + value of ``volume_backend_name`` in the back-end group sections. + If only this spec is set, then dedup Kaminario cinder volumes will be + created without replication support + + .. 
code-block:: console + + $ openstack volume type create kaminario_iscsi_dedup_noreplication + $ openstack volume type set --property volume_backend_name=kaminario_iscsi_1 \ + kaminario_iscsi_dedup_noreplication + +- kaminario:thin_prov_type : Set this spec in the volume type for creating + nodedup Kaminario cinder volumes. If this spec is not set, dedup Kaminario + cinder volumes will be created. + +- kaminario:replication : Set this spec in the volume type for creating + replication supported Kaminario cinder volumes. If this spec is not set, + then Kaminario cinder volumes will be created without replication support. + + .. code-block:: console + + $ openstack volume type create kaminario_iscsi_dedup_replication + $ openstack volume type set --property volume_backend_name=kaminario_iscsi_1 \ + kaminario:replication=enabled kaminario_iscsi_dedup_replication + + $ openstack volume type create kaminario_iscsi_nodedup_replication + $ openstack volume type set --property volume_backend_name=kaminario_iscsi_1 \ + kaminario:replication=enabled kaminario:thin_prov_type=nodedup \ + kaminario_iscsi_nodedup_replication + + $ openstack volume type create kaminario_iscsi_nodedup_noreplication + $ openstack volume type set --property volume_backend_name=kaminario_iscsi_1 \ + kaminario:thin_prov_type=nodedup kaminario_iscsi_nodedup_noreplication + +Supported retype cases +~~~~~~~~~~~~~~~~~~~~~~ +The following are the supported retypes for Kaminario cinder volumes: + +- Nodedup-noreplication <--> Nodedup-replication + + .. code-block:: console + + $ cinder retype volume-id new-type + +- Dedup-noreplication <--> Dedup-replication + + .. code-block:: console + + $ cinder retype volume-id new-type + +- Dedup-noreplication <--> Nodedup-noreplication + + .. code-block:: console + + $ cinder retype --migration-policy on-demand volume-id new-type + +For non-supported cases, try combinations of the +:command:`cinder retype` command. + +Driver options +~~~~~~~~~~~~~~ + +The following table contains the configuration options that are specific +to the Kaminario K2 FC and iSCSI Block Storage drivers. + +.. include:: ../../tables/cinder-kaminario.rst diff --git a/doc/source/config-reference/block-storage/drivers/lenovo-driver.rst b/doc/source/config-reference/block-storage/drivers/lenovo-driver.rst new file mode 100644 index 00000000000..8bfe3fbf98f --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/lenovo-driver.rst @@ -0,0 +1,159 @@ +====================================== +Lenovo Fibre Channel and iSCSI drivers +====================================== + +The ``LenovoFCDriver`` and ``LenovoISCSIDriver`` Cinder drivers allow +Lenovo S3200 or S2200 arrays to be used for block storage in OpenStack +deployments. + +System requirements +~~~~~~~~~~~~~~~~~~~ + +To use the Lenovo drivers, the following are required: + +- Lenovo S3200 or S2200 array with: + + - iSCSI or FC host interfaces + - G22x firmware or later + +- Network connectivity between the OpenStack host and the array + management interfaces + +- HTTPS or HTTP must be enabled on the array + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +- Create, delete, attach, and detach volumes. +- Create, list, and delete volume snapshots. +- Create a volume from a snapshot. +- Copy an image to a volume. +- Copy a volume to an image. +- Clone a volume. +- Extend a volume. +- Migrate a volume with back-end assistance. +- Retype a volume. +- Manage and unmanage a volume. + +Configuring the array +~~~~~~~~~~~~~~~~~~~~~ + +#. 
Verify that the array can be managed using an HTTPS connection. HTTP can + also be used if ``lenovo_api_protocol=http`` is placed into the + appropriate sections of the ``cinder.conf`` file. + + Confirm that virtual pools A and B are present if you plan to use + virtual pools for OpenStack storage. + +#. Edit the ``cinder.conf`` file to define a storage back-end entry for + each storage pool on the array that will be managed by OpenStack. Each + entry consists of a unique section name, surrounded by square brackets, + followed by options specified in ``key=value`` format. + + - The ``lenovo_backend_name`` value specifies the name of the storage + pool on the array. + + - The ``volume_backend_name`` option value can be a unique value, if + you wish to be able to assign volumes to a specific storage pool on + the array, or a name that's shared among multiple storage pools to + let the volume scheduler choose where new volumes are allocated. + + - The rest of the options will be repeated for each storage pool in a + given array: the appropriate Cinder driver name; IP address or + host name of the array management interface; the username and password + of an array user account with ``manage`` privileges; and the iSCSI IP + addresses for the array if using the iSCSI transport protocol. + + In the examples below, two back ends are defined, one for pool A and one + for pool B, and a common ``volume_backend_name`` is used so that a + single volume type definition can be used to allocate volumes from both + pools. + + **Example: iSCSI example back-end entries** + + .. code-block:: ini + + [pool-a] + lenovo_backend_name = A + volume_backend_name = lenovo-array + volume_driver = cinder.volume.drivers.lenovo.lenovo_iscsi.LenovoISCSIDriver + san_ip = 10.1.2.3 + san_login = manage + san_password = !manage + lenovo_iscsi_ips = 10.2.3.4,10.2.3.5 + + [pool-b] + lenovo_backend_name = B + volume_backend_name = lenovo-array + volume_driver = cinder.volume.drivers.lenovo.lenovo_iscsi.LenovoISCSIDriver + san_ip = 10.1.2.3 + san_login = manage + san_password = !manage + lenovo_iscsi_ips = 10.2.3.4,10.2.3.5 + + **Example: Fibre Channel example back-end entries** + + .. code-block:: ini + + [pool-a] + lenovo_backend_name = A + volume_backend_name = lenovo-array + volume_driver = cinder.volume.drivers.lenovo.lenovo_fc.LenovoFCDriver + san_ip = 10.1.2.3 + san_login = manage + san_password = !manage + + [pool-b] + lenovo_backend_name = B + volume_backend_name = lenovo-array + volume_driver = cinder.volume.drivers.lenovo.lenovo_fc.LenovoFCDriver + san_ip = 10.1.2.3 + san_login = manage + san_password = !manage + +#. If HTTPS is not enabled in the array, include + ``lenovo_api_protocol = http`` in each of the back-end definitions. + +#. If HTTPS is enabled, you can enable certificate verification with the + option ``lenovo_verify_certificate=True``. You may also use the + ``lenovo_verify_certificate_path`` parameter to specify the path to a + CA_BUNDLE file containing CAs other than those in the default list. + +#. Modify the ``[DEFAULT]`` section of the ``cinder.conf`` file to add an + ``enabled_backends`` parameter specifying the back-end entries you added, + and a ``default_volume_type`` parameter specifying the name of a volume + type that you will create in the next step. + + **Example: [DEFAULT] section changes** + + .. code-block:: ini + + [DEFAULT] + # ... + enabled_backends = pool-a,pool-b + default_volume_type = lenovo + +#. 
Create a new volume type for each distinct ``volume_backend_name`` value + that you added to the ``cinder.conf`` file. The example below + assumes that the same ``volume_backend_name=lenovo-array`` + option was specified in all of the + entries, and specifies that the volume type ``lenovo`` can be used to + allocate volumes from any of them. + + **Example: Creating a volume type** + + .. code-block:: console + + $ openstack volume type create lenovo + $ openstack volume type set --property volume_backend_name=lenovo-array lenovo + +#. After modifying the ``cinder.conf`` file, + restart the ``cinder-volume`` service. + +Driver-specific options +~~~~~~~~~~~~~~~~~~~~~~~ + +The following table contains the configuration options that are specific +to the Lenovo drivers. + +.. include:: ../../tables/cinder-lenovo.rst diff --git a/doc/source/config-reference/block-storage/drivers/lvm-volume-driver.rst b/doc/source/config-reference/block-storage/drivers/lvm-volume-driver.rst new file mode 100644 index 00000000000..31a655f963e --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/lvm-volume-driver.rst @@ -0,0 +1,43 @@ +=== +LVM +=== + +The default volume back end uses local volumes managed by LVM. + +This driver supports different transport protocols to attach volumes, +currently iSCSI and iSER. + +Set the following in your ``cinder.conf`` configuration file, and use +the following options to configure for iSCSI transport: + +.. code-block:: ini + + volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver + iscsi_protocol = iscsi + +Use the following options to configure for the iSER transport: + +.. code-block:: ini + + volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver + iscsi_protocol = iser + +.. include:: ../../tables/cinder-lvm.rst + +.. caution:: + + When extending an existing volume which has a linked snapshot, the related + logical volume is deactivated. This logical volume is automatically + reactivated unless ``auto_activation_volume_list`` is defined in LVM + configuration file ``lvm.conf``. See the ``lvm.conf`` file for more + information. + + If auto activated volumes are restricted, then include the cinder volume + group into this list: + + .. code-block:: ini + + auto_activation_volume_list = [ "existingVG", "cinder-volumes" ] + + This note does not apply for thinly provisioned volumes + because they do not need to be deactivated. diff --git a/doc/source/config-reference/block-storage/drivers/nec-storage-m-series-driver.rst b/doc/source/config-reference/block-storage/drivers/nec-storage-m-series-driver.rst new file mode 100644 index 00000000000..d38eb2cadb3 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/nec-storage-m-series-driver.rst @@ -0,0 +1,293 @@ +=========================== +NEC Storage M series driver +=========================== + +NEC Storage M series are dual-controller disk arrays which support +online maintenance. +This driver supports both iSCSI and Fibre Channel. + +System requirements +~~~~~~~~~~~~~~~~~~~ +Supported models: + +- NEC Storage M110, M310, M510 and M710 (SSD/HDD hybrid) +- NEC Storage M310F and M710F (all flash) + +Requirements: + +- Storage control software (firmware) revision 0950 or later +- NEC Storage DynamicDataReplication license +- (Optional) NEC Storage IO Load Manager license for QoS + + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + + +- Create, delete, attach, and detach volumes. +- Create, list, and delete volume snapshots. +- Create a volume from a snapshot. +- Copy an image to a volume. 
+- Clone a volume. +- Extend a volume. +- Get volume statistics. + + +Preparation +~~~~~~~~~~~ + +Below is minimum preparation to a disk array. +For details of each command, see the NEC Storage Manager Command Reference +(IS052). + +- Common (iSCSI and Fibre Channel) + + #. Initial setup + + * Set IP addresses for management and BMC with the network configuration + tool. + * Enter license keys. (iSMcfg licenserelease) + #. Create pools + + * Create pools for volumes. (iSMcfg poolbind) + * Create pools for snapshots. (iSMcfg poolbind) + #. Create system volumes + + * Create a Replication Reserved Volume (RSV) in one of pools. + (iSMcfg ldbind) + * Create Snapshot Reserve Areas (SRAs) in each snapshot pool. + (iSMcfg srabind) + #. (Optional) Register SSH public key + + +- iSCSI only + + #. Set IP addresses of each iSCSI port. (iSMcfg setiscsiport) + #. Create a LD Set with setting multi-target mode on. (iSMcfg addldset) + #. Register initiator names of each node. (iSMcfg addldsetinitiator) + + +- Fibre Channel only + + #. Start access control. (iSMcfg startacc) + #. Create a LD Set. (iSMcfg addldset) + #. Register WWPNs of each node. (iSMcfg addldsetpath) + + +Configuration +~~~~~~~~~~~~~ + + +Set the following in your ``cinder.conf``, and use the following options +to configure it. + +If you use Fibre Channel: + +.. code-block:: ini + + [Storage1] + volume_driver = cinder.volume.drivers.nec.volume.MStorageFCDriver + +.. end + + +If you use iSCSI: + +.. code-block:: ini + + [Storage1] + volume_driver = cinder.volume.drivers.nec.volume.MStorageISCSIDriver + +.. end + +Also, set ``volume_backend_name``. + +.. code-block:: ini + + [DEFAULT] + volume_backend_name = Storage1 + +.. end + + +This table shows configuration options for NEC Storage M series driver. + +.. include:: ../../tables/cinder-nec_m.rst + + + +Required options +---------------- + + +- ``nec_ismcli_fip`` + FIP address of M-Series Storage. + +- ``nec_ismcli_user`` + User name for M-Series Storage iSMCLI. + +- ``nec_ismcli_password`` + Password for M-Series Storage iSMCLI. + +- ``nec_ismcli_privkey`` + RSA secret key file name for iSMCLI (for public key authentication only). + Encrypted RSA secret key file cannot be specified. + +- ``nec_diskarray_name`` + Diskarray name of M-Series Storage. + This parameter must be specified to configure multiple groups + (multi back end) by using the same storage device (storage + device that has the same ``nec_ismcli_fip``). Specify the disk + array name targeted by the relevant config-group for this + parameter. + +- ``nec_backup_pools`` + Specify a pool number where snapshots are created. + + +Timeout configuration +--------------------- + + +- ``rpc_response_timeout`` + Set the timeout value in seconds. If three or more volumes can be created + at the same time, the reference value is 30 seconds multiplied by the + number of volumes created at the same time. + Also, Specify nova parameters below in ``nova.conf`` file. + + .. code-block:: ini + + [DEFAULT] + block_device_allocate_retries = 120 + block_device_allocate_retries_interval = 10 + + .. end + + +- ``timeout server (HAProxy configuration)`` + In addition, you need to edit the following value in the HAProxy + configuration file (``/etc/haproxy/haproxy.cfg``) in an environment where + HAProxy is used. + + .. code-block:: ini + + timeout server = 600 #Specify a value greater than rpc_response_timeout. + + .. end + + Run the :command:`service haproxy reload` command after editing the + value to reload the HAProxy settings. + + .. 
note:: + + The OpenStack environment set up using Red Hat OpenStack Platform + Director may be set to use HAProxy. + + +Configuration example for /etc/cinder/cinder.conf +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +When using one config-group +--------------------------- + +- When using ``nec_ismcli_password`` to authenticate iSMCLI + (Password authentication): + + .. code-block:: ini + + [DEFAULT] + enabled_backends = Storage1 + + [Storage1] + volume_driver = cinder.volume.drivers.nec.volume.MStorageISCSIDriver + volume_backend_name = Storage1 + nec_ismcli_fip = 192.168.1.10 + nec_ismcli_user = sysadmin + nec_ismcli_password = sys123 + nec_pools = 0 + nec_backup_pools = 1 + + .. end + + +- When using ``nec_ismcli_privkey`` to authenticate iSMCLI + (Public key authentication): + + .. code-block:: ini + + [DEFAULT] + enabled_backends = Storage1 + + [Storage1] + volume_driver = cinder.volume.drivers.nec.volume.MStorageISCSIDriver + volume_backend_name = Storage1 + nec_ismcli_fip = 192.168.1.10 + nec_ismcli_user = sysadmin + nec_ismcli_privkey = /etc/cinder/id_rsa + nec_pools = 0 + nec_backup_pools = 1 + + .. end + + +When using multi config-group (multi-backend) +--------------------------------------------- + +- Four config-groups (backends) + + Storage1, Storage2, Storage3, Storage4 + +- Two disk arrays + + 200000255C3A21CC(192.168.1.10) + Example for using config-group, Storage1 and Storage2 + + 2000000991000316(192.168.1.20) + Example for using config-group, Storage3 and Storage4 + + .. code-block:: ini + + [DEFAULT] + enabled_backends = Storage1,Storage2,Storage3,Storage4 + + [Storage1] + volume_driver = cinder.volume.drivers.nec.volume.MStorageISCSIDriver + volume_backend_name = Gold + nec_ismcli_fip = 192.168.1.10 + nec_ismcli_user = sysadmin + nec_ismcli_password = sys123 + nec_pools = 0 + nec_backup_pools = 2 + nec_diskarray_name = 200000255C3A21CC + + [Storage2] + volume_driver = cinder.volume.drivers.nec.volume.MStorageISCSIDriver + volume_backend_name = Silver + nec_ismcli_fip = 192.168.1.10 + nec_ismcli_user = sysadmin + nec_ismcli_password = sys123 + nec_pools = 1 + nec_backup_pools = 3 + nec_diskarray_name = 200000255C3A21CC + + [Storage3] + volume_driver = cinder.volume.drivers.nec.volume.MStorageISCSIDriver + volume_backend_name = Gold + nec_ismcli_fip = 192.168.1.20 + nec_ismcli_user = sysadmin + nec_ismcli_password = sys123 + nec_pools = 0 + nec_backup_pools = 2 + nec_diskarray_name = 2000000991000316 + + [Storage4] + volume_driver = cinder.volume.drivers.nec.volume.MStorageISCSIDriver + volume_backend_name = Silver + nec_ismcli_fip = 192.168.1.20 + nec_ismcli_user = sysadmin + nec_ismcli_password = sys123 + nec_pools = 1 + nec_backup_pools = 3 + nec_diskarray_name = 2000000991000316 + + .. end diff --git a/doc/source/config-reference/block-storage/drivers/netapp-volume-driver.rst b/doc/source/config-reference/block-storage/drivers/netapp-volume-driver.rst new file mode 100644 index 00000000000..8c9313783dd --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/netapp-volume-driver.rst @@ -0,0 +1,592 @@ +===================== +NetApp unified driver +===================== + +The NetApp unified driver is a Block Storage driver that supports +multiple storage families and protocols. A storage family corresponds to +storage systems built on different NetApp technologies such as clustered +Data ONTAP, Data ONTAP operating in 7-Mode, and E-Series. 
The storage +protocol refers to the protocol used to initiate data storage and access +operations on those storage systems like iSCSI and NFS. The NetApp +unified driver can be configured to provision and manage OpenStack +volumes on a given storage family using a specified storage protocol. +Also, the NetApp unified driver supports over subscription or over +provisioning when thin provisioned Block Storage volumes are in use +on an E-Series backend. The OpenStack volumes can then be used for +accessing and storing data using the storage protocol on the storage +family system. The NetApp unified driver is an extensible interface +that can support new storage families and protocols. + +.. important:: + + The NetApp unified driver in cinder currently provides integration for + two major generations of the ONTAP operating system: the current + clustered ONTAP and the legacy 7-mode. NetApp’s full support for + 7-mode ended in August of 2015 and the current limited support period + will end in February of 2017. + + The 7-mode components of the cinder NetApp unified driver have now been + marked deprecated and will be removed in the Queens release. This will + apply to all three protocols currently supported in this driver: iSCSI, + FC and NFS. + +.. note:: + + With the Juno release of OpenStack, Block Storage has + introduced the concept of storage pools, in which a single + Block Storage back end may present one or more logical + storage resource pools from which Block Storage will + select a storage location when provisioning volumes. + + In releases prior to Juno, the NetApp unified driver contained some + scheduling logic that determined which NetApp storage container + (namely, a FlexVol volume for Data ONTAP, or a dynamic disk pool for + E-Series) that a new Block Storage volume would be placed into. + + With the introduction of pools, all scheduling logic is performed + completely within the Block Storage scheduler, as each + NetApp storage container is directly exposed to the Block + Storage scheduler as a storage pool. Previously, the NetApp + unified driver presented an aggregated view to the scheduler and + made a final placement decision as to which NetApp storage container + the Block Storage volume would be provisioned into. + +NetApp clustered Data ONTAP storage family +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The NetApp clustered Data ONTAP storage family represents a +configuration group which provides Compute instances access to +clustered Data ONTAP storage systems. At present it can be configured in +Block Storage to work with iSCSI and NFS storage protocols. + +NetApp iSCSI configuration for clustered Data ONTAP +--------------------------------------------------- + +The NetApp iSCSI configuration for clustered Data ONTAP is an interface +from OpenStack to clustered Data ONTAP storage systems. It provisions +and manages the SAN block storage entity, which is a NetApp LUN that +can be accessed using the iSCSI protocol. + +The iSCSI configuration for clustered Data ONTAP is a direct interface +from Block Storage to the clustered Data ONTAP instance and as +such does not require additional management software to achieve the +desired functionality. It uses NetApp APIs to interact with the +clustered Data ONTAP instance. 
+ +**Configuration options** + +Configure the volume driver, storage family, and storage protocol to the +NetApp unified driver, clustered Data ONTAP, and iSCSI respectively by +setting the ``volume_driver``, ``netapp_storage_family`` and +``netapp_storage_protocol`` options in the ``cinder.conf`` file as follows: + +.. code-block:: ini + + volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver + netapp_storage_family = ontap_cluster + netapp_storage_protocol = iscsi + netapp_vserver = openstack-vserver + netapp_server_hostname = myhostname + netapp_server_port = port + netapp_login = username + netapp_password = password + +.. note:: + + To use the iSCSI protocol, you must override the default value of + ``netapp_storage_protocol`` with ``iscsi``. + +.. include:: ../../tables/cinder-netapp_cdot_iscsi.rst + +.. note:: + + If you specify an account in the ``netapp_login`` that only has + virtual storage server (Vserver) administration privileges (rather + than cluster-wide administration privileges), some advanced features + of the NetApp unified driver will not work and you may see warnings + in the Block Storage logs. + +.. note:: + + The driver supports iSCSI CHAP uni-directional authentication. + To enable it, set the ``use_chap_auth`` option to ``True``. + +.. tip:: + + For more information on these options and other deployment and + operational scenarios, visit the `NetApp OpenStack Deployment and + Operations + Guide `__. + +NetApp NFS configuration for clustered Data ONTAP +------------------------------------------------- + +The NetApp NFS configuration for clustered Data ONTAP is an interface from +OpenStack to a clustered Data ONTAP system for provisioning and managing +OpenStack volumes on NFS exports provided by the clustered Data ONTAP system +that are accessed using the NFS protocol. + +The NFS configuration for clustered Data ONTAP is a direct interface from +Block Storage to the clustered Data ONTAP instance and as such does +not require any additional management software to achieve the desired +functionality. It uses NetApp APIs to interact with the clustered Data ONTAP +instance. + +**Configuration options** + +Configure the volume driver, storage family, and storage protocol to NetApp +unified driver, clustered Data ONTAP, and NFS respectively by setting the +``volume_driver``, ``netapp_storage_family``, and ``netapp_storage_protocol`` +options in the ``cinder.conf`` file as follows: + +.. code-block:: ini + + volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver + netapp_storage_family = ontap_cluster + netapp_storage_protocol = nfs + netapp_vserver = openstack-vserver + netapp_server_hostname = myhostname + netapp_server_port = port + netapp_login = username + netapp_password = password + nfs_shares_config = /etc/cinder/nfs_shares + +.. include:: ../../tables/cinder-netapp_cdot_nfs.rst + +.. note:: + + Additional NetApp NFS configuration options are shared with the + generic NFS driver. These options can be found here: + :ref:`cinder-storage_nfs`. + +.. note:: + + If you specify an account in the ``netapp_login`` that only has + virtual storage server (Vserver) administration privileges (rather + than cluster-wide administration privileges), some advanced features + of the NetApp unified driver will not work and you may see warnings + in the Block Storage logs. 
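+
+The ``nfs_shares_config`` file referenced in the example above is a plain
+text file that lists one NFS export per line in ``host:/export`` format.
+A minimal sketch, using hypothetical data LIF addresses and export paths:
+
+.. code-block:: text
+
+   192.168.10.10:/vol_openstack_1
+   192.168.10.11:/vol_openstack_2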
+ +NetApp NFS Copy Offload client +------------------------------ + +A feature was added in the Icehouse release of the NetApp unified driver that +enables Image service images to be efficiently copied to a destination Block +Storage volume. When the Block Storage and Image service are configured to use +the NetApp NFS Copy Offload client, a controller-side copy will be attempted +before reverting to downloading the image from the Image service. This improves +image provisioning times while reducing the consumption of bandwidth and CPU +cycles on the host(s) running the Image and Block Storage services. This is due +to the copy operation being performed completely within the storage cluster. + +The NetApp NFS Copy Offload client can be used in either of the following +scenarios: + +- The Image service is configured to store images in an NFS share that is + exported from a NetApp FlexVol volume *and* the destination for the new Block + Storage volume will be on an NFS share exported from a different FlexVol + volume than the one used by the Image service. Both FlexVols must be located + within the same cluster. + +- The source image from the Image service has already been cached in an NFS + image cache within a Block Storage back end. The cached image resides on a + different FlexVol volume than the destination for the new Block Storage + volume. Both FlexVols must be located within the same cluster. + +To use this feature, you must configure the Image service, as follows: + +- Set the ``default_store`` configuration option to ``file``. + +- Set the ``filesystem_store_datadir`` configuration option to the path + to the Image service NFS export. + +- Set the ``show_image_direct_url`` configuration option to ``True``. + +- Set the ``show_multiple_locations`` configuration option to ``True``. + +- Set the ``filesystem_store_metadata_file`` configuration option to a metadata + file. The metadata file should contain a JSON object that contains the + correct information about the NFS export used by the Image service. + +To use this feature, you must configure the Block Storage service, as follows: + +- Set the ``netapp_copyoffload_tool_path`` configuration option to the path to + the NetApp Copy Offload binary. + +- Set the ``glance_api_version`` configuration option to ``2``. + + .. important:: + + This feature requires that: + + - The storage system must have Data ONTAP v8.2 or greater installed. + + - The vStorage feature must be enabled on each storage virtual machine + (SVM, also known as a Vserver) that is permitted to interact with the + copy offload client. + + - To configure the copy offload workflow, enable NFS v4.0 or greater and + export it from the SVM. + +.. tip:: + + To download the NetApp copy offload binary to be utilized in conjunction + with the ``netapp_copyoffload_tool_path`` configuration option, please visit + the Utility Toolchest page at the `NetApp Support portal + `__ + (login is required). + +.. tip:: + + For more information on these options and other deployment and operational + scenarios, visit the `NetApp OpenStack Deployment and Operations Guide + `__. + +NetApp-supported extra specs for clustered Data ONTAP +----------------------------------------------------- + +Extra specs enable vendors to specify extra filter criteria. +The Block Storage scheduler uses the specs when the scheduler determines +which volume node should fulfill a volume provisioning request. 
+When you use the NetApp unified driver with a clustered Data ONTAP +storage system, you can leverage extra specs with Block Storage +volume types to ensure that Block Storage volumes are created +on storage back ends that have certain properties. +An example of this is when you configure QoS, mirroring, +or compression for a storage back end. + +Extra specs are associated with Block Storage volume types. +When users request volumes of a particular volume type, the volumes +are created on storage back ends that meet the list of requirements. +An example of this is the back ends that have the available space or +extra specs. Use the specs in the following table to configure volumes. +Define Block Storage volume types by using the :command:`openstack volume +type set` command. + +.. include:: ../../tables/manual/cinder-netapp_cdot_extraspecs.rst + + +NetApp Data ONTAP operating in 7-Mode storage family +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The NetApp Data ONTAP operating in 7-Mode storage family represents a +configuration group which provides Compute instances access to 7-Mode +storage systems. At present it can be configured in Block Storage to +work with iSCSI and NFS storage protocols. + +NetApp iSCSI configuration for Data ONTAP operating in 7-Mode +------------------------------------------------------------- + +The NetApp iSCSI configuration for Data ONTAP operating in 7-Mode is an +interface from OpenStack to Data ONTAP operating in 7-Mode storage systems for +provisioning and managing the SAN block storage entity, that is, a LUN which +can be accessed using iSCSI protocol. + +The iSCSI configuration for Data ONTAP operating in 7-Mode is a direct +interface from OpenStack to Data ONTAP operating in 7-Mode storage system and +it does not require additional management software to achieve the desired +functionality. It uses NetApp ONTAPI to interact with the Data ONTAP operating +in 7-Mode storage system. + +**Configuration options** + +Configure the volume driver, storage family and storage protocol to the NetApp +unified driver, Data ONTAP operating in 7-Mode, and iSCSI respectively by +setting the ``volume_driver``, ``netapp_storage_family`` and +``netapp_storage_protocol`` options in the ``cinder.conf`` file as follows: + +.. code-block:: ini + + volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver + netapp_storage_family = ontap_7mode + netapp_storage_protocol = iscsi + netapp_server_hostname = myhostname + netapp_server_port = 80 + netapp_login = username + netapp_password = password + +.. note:: + + To use the iSCSI protocol, you must override the default value of + ``netapp_storage_protocol`` with ``iscsi``. + +.. include:: ../../tables/cinder-netapp_7mode_iscsi.rst + +.. note:: + + The driver supports iSCSI CHAP uni-directional authentication. + To enable it, set the ``use_chap_auth`` option to ``True``. + +.. tip:: + + For more information on these options and other deployment and + operational scenarios, visit the `NetApp OpenStack Deployment and + Operations + Guide `__. + +NetApp NFS configuration for Data ONTAP operating in 7-Mode +----------------------------------------------------------- + +The NetApp NFS configuration for Data ONTAP operating in 7-Mode is an interface +from OpenStack to Data ONTAP operating in 7-Mode storage system for +provisioning and managing OpenStack volumes on NFS exports provided by the Data +ONTAP operating in 7-Mode storage system which can then be accessed using NFS +protocol. 
+ +The NFS configuration for Data ONTAP operating in 7-Mode is a direct interface +from Block Storage to the Data ONTAP operating in 7-Mode instance and +as such does not require any additional management software to achieve the +desired functionality. It uses NetApp ONTAPI to interact with the Data ONTAP +operating in 7-Mode storage system. + + +.. important:: + Support for 7-mode configuration has been deprecated in the Ocata release + and will be removed in the Queens release of OpenStack. + +**Configuration options** + +Configure the volume driver, storage family, and storage protocol to the NetApp +unified driver, Data ONTAP operating in 7-Mode, and NFS respectively by setting +the ``volume_driver``, ``netapp_storage_family`` and +``netapp_storage_protocol`` options in the ``cinder.conf`` file as follows: + +.. code-block:: ini + + volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver + netapp_storage_family = ontap_7mode + netapp_storage_protocol = nfs + netapp_server_hostname = myhostname + netapp_server_port = 80 + netapp_login = username + netapp_password = password + nfs_shares_config = /etc/cinder/nfs_shares + +.. include:: ../../tables/cinder-netapp_7mode_nfs.rst + +.. note:: + + Additional NetApp NFS configuration options are shared with the + generic NFS driver. For a description of these, see + :ref:`cinder-storage_nfs`. + +.. tip:: + + For more information on these options and other deployment and + operational scenarios, visit the `NetApp OpenStack Deployment and + Operations + Guide `__. + +NetApp E-Series storage family +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The NetApp E-Series storage family represents a configuration group which +provides OpenStack compute instances access to E-Series storage systems. At +present it can be configured in Block Storage to work with the iSCSI +storage protocol. + +NetApp iSCSI configuration for E-Series +--------------------------------------- + +The NetApp iSCSI configuration for E-Series is an interface from OpenStack to +E-Series storage systems. It provisions and manages the SAN block storage +entity, which is a NetApp LUN which can be accessed using the iSCSI protocol. + +The iSCSI configuration for E-Series is an interface from Block +Storage to the E-Series proxy instance and as such requires the deployment of +the proxy instance in order to achieve the desired functionality. The driver +uses REST APIs to interact with the E-Series proxy instance, which in turn +interacts directly with the E-Series controllers. + +The use of multipath and DM-MP are required when using the Block +Storage driver for E-Series. In order for Block Storage and OpenStack +Compute to take advantage of multiple paths, the following configuration +options must be correctly configured: + +- The ``use_multipath_for_image_xfer`` option should be set to ``True`` in the + ``cinder.conf`` file within the driver-specific stanza (for example, + ``[myDriver]``). + +- The ``iscsi_use_multipath`` option should be set to ``True`` in the + ``nova.conf`` file within the ``[libvirt]`` stanza. + +**Configuration options** + +Configure the volume driver, storage family, and storage protocol to the +NetApp unified driver, E-Series, and iSCSI respectively by setting the +``volume_driver``, ``netapp_storage_family`` and +``netapp_storage_protocol`` options in the ``cinder.conf`` file as follows: + +.. 
code-block:: ini + + volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver + netapp_storage_family = eseries + netapp_storage_protocol = iscsi + netapp_server_hostname = myhostname + netapp_server_port = 80 + netapp_login = username + netapp_password = password + netapp_controller_ips = 1.2.3.4,5.6.7.8 + netapp_sa_password = arrayPassword + netapp_storage_pools = pool1,pool2 + use_multipath_for_image_xfer = True + +.. note:: + + To use the E-Series driver, you must override the default value of + ``netapp_storage_family`` with ``eseries``. + + To use the iSCSI protocol, you must override the default value of + ``netapp_storage_protocol`` with ``iscsi``. + +.. include:: ../../tables/cinder-netapp_eseries_iscsi.rst + +.. tip:: + + For more information on these options and other deployment and + operational scenarios, visit the `NetApp OpenStack Deployment and + Operations + Guide `__. + +NetApp-supported extra specs for E-Series +----------------------------------------- + +Extra specs enable vendors to specify extra filter criteria. +The Block Storage scheduler uses the specs when the scheduler determines +which volume node should fulfill a volume provisioning request. +When you use the NetApp unified driver with an E-Series storage system, +you can leverage extra specs with Block Storage volume types to ensure +that Block Storage volumes are created on storage back ends that have +certain properties. An example of this is when you configure thin +provisioning for a storage back end. + +Extra specs are associated with Block Storage volume types. +When users request volumes of a particular volume type, the volumes are +created on storage back ends that meet the list of requirements. +An example of this is the back ends that have the available space or +extra specs. Use the specs in the following table to configure volumes. +Define Block Storage volume types by using the :command:`openstack volume +type set` command. + +.. list-table:: Description of extra specs options for NetApp Unified Driver with E-Series + :header-rows: 1 + + * - Extra spec + - Type + - Description + * - ``netapp_thin_provisioned`` + - Boolean + - Limit the candidate volume list to only the ones that support thin + provisioning on the storage controller. + +Upgrading prior NetApp drivers to the NetApp unified driver +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +NetApp introduced a new unified block storage driver in Havana for configuring +different storage families and storage protocols. This requires defining an +upgrade path for NetApp drivers which existed in releases prior to Havana. This +section covers the upgrade configuration for NetApp drivers to the new unified +configuration and a list of deprecated NetApp drivers. + +Upgraded NetApp drivers +----------------------- + +This section describes how to update Block Storage configuration from +a pre-Havana release to the unified driver format. + +- NetApp iSCSI direct driver for Clustered Data ONTAP in Grizzly (or earlier): + + .. code-block:: ini + + volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppDirectCmodeISCSIDriver + + NetApp unified driver configuration: + + .. code-block:: ini + + volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver + netapp_storage_family = ontap_cluster + netapp_storage_protocol = iscsi + +- NetApp NFS direct driver for Clustered Data ONTAP in Grizzly (or + earlier): + + .. 
code-block:: ini + + volume_driver = cinder.volume.drivers.netapp.nfs.NetAppDirectCmodeNfsDriver + + NetApp unified driver configuration: + + .. code-block:: ini + + volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver + netapp_storage_family = ontap_cluster + netapp_storage_protocol = nfs + +- NetApp iSCSI direct driver for Data ONTAP operating in 7-Mode storage + controller in Grizzly (or earlier): + + .. code-block:: ini + + volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppDirect7modeISCSIDriver + + NetApp unified driver configuration: + + .. code-block:: ini + + volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver + netapp_storage_family = ontap_7mode + netapp_storage_protocol = iscsi + +- NetApp NFS direct driver for Data ONTAP operating in 7-Mode storage + controller in Grizzly (or earlier): + + .. code-block:: ini + + volume_driver = cinder.volume.drivers.netapp.nfs.NetAppDirect7modeNfsDriver + + NetApp unified driver configuration: + + .. code-block:: ini + + volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver + netapp_storage_family = ontap_7mode + netapp_storage_protocol = nfs + +Deprecated NetApp drivers +------------------------- + +This section lists the NetApp drivers in earlier releases that are +deprecated in Havana. + +- NetApp iSCSI driver for clustered Data ONTAP: + + .. code-block:: ini + + volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppCmodeISCSIDriver + +- NetApp NFS driver for clustered Data ONTAP: + + .. code-block:: ini + + volume_driver = cinder.volume.drivers.netapp.nfs.NetAppCmodeNfsDriver + +- NetApp iSCSI driver for Data ONTAP operating in 7-Mode storage + controller: + + .. code-block:: ini + + volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppISCSIDriver + +- NetApp NFS driver for Data ONTAP operating in 7-Mode storage + controller: + + .. code-block:: ini + + volume_driver = cinder.volume.drivers.netapp.nfs.NetAppNFSDriver + +.. note:: + + For support information on deprecated NetApp drivers in the Havana + release, visit the `NetApp OpenStack Deployment and Operations + Guide `__. diff --git a/doc/source/config-reference/block-storage/drivers/nexentaedge-driver.rst b/doc/source/config-reference/block-storage/drivers/nexentaedge-driver.rst new file mode 100644 index 00000000000..8bd72217d09 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/nexentaedge-driver.rst @@ -0,0 +1,159 @@ +=============================== +NexentaEdge NBD & iSCSI drivers +=============================== + +NexentaEdge is designed from the ground-up to deliver high performance Block +and Object storage services and limitless scalability to next generation +OpenStack clouds, petabyte scale active archives and Big Data applications. +NexentaEdge runs on shared nothing clusters of industry standard Linux +servers, and builds on Nexenta IP and patent pending Cloud Copy On Write (CCOW) +technology to break new ground in terms of reliability, functionality and cost +efficiency. + +For user documentation, see the +`Nexenta Documentation Center `_. + + +iSCSI driver +~~~~~~~~~~~~ + +The NexentaEdge cluster must be installed and configured according to the +relevant Nexenta documentation. A cluster, tenant, bucket must be pre-created, +as well as an iSCSI service on the NexentaEdge gateway node. + +The NexentaEdge iSCSI driver is selected using the normal procedures for one +or multiple back-end volume drivers. 
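+
+As a hedged illustration of those procedures, a minimal multi-back-end
+skeleton might look like the following; the stanza name
+``nexentaedge-iscsi-1`` is purely illustrative, and the driver-specific
+options shown in the next step still need to be added to the stanza:
+
+.. code-block:: ini
+
+   [DEFAULT]
+   enabled_backends = nexentaedge-iscsi-1
+
+   [nexentaedge-iscsi-1]
+   volume_backend_name = nexentaedge-iscsi-1
+   volume_driver = cinder.volume.drivers.nexenta.nexentaedge.iscsi.NexentaEdgeISCSIDriver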
+
+You must configure these items for each NexentaEdge cluster that the iSCSI
+volume driver controls:
+
+#. Make the following changes on the storage node ``/etc/cinder/cinder.conf``
+   file.
+
+   .. code-block:: ini
+
+      # Enable Nexenta iSCSI driver
+      volume_driver = cinder.volume.drivers.nexenta.nexentaedge.iscsi.NexentaEdgeISCSIDriver
+
+      # Specify the ip address for Rest API (string value)
+      nexenta_rest_address = MANAGEMENT-NODE-IP
+
+      # Port for Rest API (integer value)
+      nexenta_rest_port=8080
+
+      # Protocol used for Rest calls (string value, default=http)
+      nexenta_rest_protocol = http
+
+      # Username for NexentaEdge Rest (string value)
+      nexenta_user=USERNAME
+
+      # Password for NexentaEdge Rest (string value)
+      nexenta_password=PASSWORD
+
+      # Path to bucket containing iSCSI LUNs (string value)
+      nexenta_lun_container = CLUSTER/TENANT/BUCKET
+
+      # Name of pre-created iSCSI service (string value)
+      nexenta_iscsi_service = SERVICE-NAME
+
+      # IP address of the gateway node attached to iSCSI service above or
+      # virtual IP address if an iSCSI Storage Service Group is configured in
+      # HA mode (string value)
+      nexenta_client_address = GATEWAY-NODE-IP
+
+
+#. Save the changes to the ``/etc/cinder/cinder.conf`` file and
+   restart the ``cinder-volume`` service.
+
+Supported operations
+--------------------
+
+* Create, delete, attach, and detach volumes.
+
+* Create, list, and delete volume snapshots.
+
+* Create a volume from a snapshot.
+
+* Copy an image to a volume.
+
+* Copy a volume to an image.
+
+* Clone a volume.
+
+* Extend a volume.
+
+
+NBD driver
+~~~~~~~~~~
+
+As an alternative to using iSCSI, Amazon S3, or OpenStack Swift protocols,
+NexentaEdge can provide access to cluster storage via a Network Block Device
+(NBD) interface.
+
+The NexentaEdge cluster must be installed and configured according to the
+relevant Nexenta documentation. A cluster, tenant, and bucket must be
+pre-created. The driver requires the NexentaEdge service to run on the
+hypervisor (Nova) node. The node must sit on the Replicast network; it runs
+only the NexentaEdge service and does not require physical disks.
+
+You must configure these items for each NexentaEdge cluster that the NBD
+volume driver controls:
+
+#. Make the following changes on the storage node ``/etc/cinder/cinder.conf``
+   file.
+
+   .. code-block:: ini
+
+      # Enable Nexenta NBD driver
+      volume_driver = cinder.volume.drivers.nexenta.nexentaedge.nbd.NexentaEdgeNBDDriver
+
+      # Specify the ip address for Rest API (string value)
+      nexenta_rest_address = MANAGEMENT-NODE-IP
+
+      # Port for Rest API (integer value)
+      nexenta_rest_port = 8080
+
+      # Protocol used for Rest calls (string value, default=http)
+      nexenta_rest_protocol = http
+
+      # Username for NexentaEdge Rest (string value)
+      nexenta_rest_user = USERNAME
+
+      # Password for NexentaEdge Rest (string value)
+      nexenta_rest_password = PASSWORD
+
+      # Path to bucket containing LUNs (string value)
+      nexenta_lun_container = CLUSTER/TENANT/BUCKET
+
+      # Path to directory to store symbolic links to block devices
+      # (string value, default=/dev/disk/by-path)
+      nexenta_nbd_symlinks_dir = /PATH/TO/SYMBOLIC/LINKS
+
+
+#. Save the changes to the ``/etc/cinder/cinder.conf`` file and
+   restart the ``cinder-volume`` service.
+
+Supported operations
+--------------------
+
+* Create, delete, attach, and detach volumes.
+
+* Create, list, and delete volume snapshots.
+
+* Create a volume from a snapshot.
+
+* Copy an image to a volume.
+
+* Copy a volume to an image.
+
+* Clone a volume.
+
+* Extend a volume.
+ + +Driver options +~~~~~~~~~~~~~~ + +Nexenta Driver supports these options: + +.. include:: ../../tables/cinder-nexenta_edge.rst diff --git a/doc/source/config-reference/block-storage/drivers/nexentastor4-driver.rst b/doc/source/config-reference/block-storage/drivers/nexentastor4-driver.rst new file mode 100644 index 00000000000..ccd7cf5e234 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/nexentastor4-driver.rst @@ -0,0 +1,141 @@ +===================================== +NexentaStor 4.x NFS and iSCSI drivers +===================================== + +NexentaStor is an Open Source-driven Software-Defined Storage (OpenSDS) +platform delivering unified file (NFS and SMB) and block (FC and iSCSI) +storage services, runs on industry standard hardware, scales from tens of +terabytes to petabyte configurations, and includes all data management +functionality by default. + +For NexentaStor 4.x user documentation, visit +https://nexenta.com/products/downloads/nexentastor. + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +* Create, delete, attach, and detach volumes. + +* Create, list, and delete volume snapshots. + +* Create a volume from a snapshot. + +* Copy an image to a volume. + +* Copy a volume to an image. + +* Clone a volume. + +* Extend a volume. + +* Migrate a volume. + +* Change volume type. + +Nexenta iSCSI driver +~~~~~~~~~~~~~~~~~~~~ + +The Nexenta iSCSI driver allows you to use a NexentaStor appliance to store +Compute volumes. Every Compute volume is represented by a single zvol in a +predefined Nexenta namespace. The Nexenta iSCSI volume driver should work with +all versions of NexentaStor. + +The NexentaStor appliance must be installed and configured according to the +relevant Nexenta documentation. A volume and an enclosing namespace must be +created for all iSCSI volumes to be accessed through the volume driver. This +should be done as specified in the release-specific NexentaStor documentation. + +The NexentaStor Appliance iSCSI driver is selected using the normal procedures +for one or multiple backend volume drivers. + +You must configure these items for each NexentaStor appliance that the iSCSI +volume driver controls: + +#. Make the following changes on the volume node ``/etc/cinder/cinder.conf`` + file. + + .. code-block:: ini + + # Enable Nexenta iSCSI driver + volume_driver=cinder.volume.drivers.nexenta.iscsi.NexentaISCSIDriver + + # IP address of NexentaStor host (string value) + nexenta_host=HOST-IP + + # Username for NexentaStor REST (string value) + nexenta_user=USERNAME + + # Port for Rest API (integer value) + nexenta_rest_port=8457 + + # Password for NexentaStor REST (string value) + nexenta_password=PASSWORD + + # Volume on NexentaStor appliance (string value) + nexenta_volume=volume_name + + +.. note:: + + nexenta_volume represents a zpool which is called volume on NS appliance. It must be pre-created before enabling the driver. + + +#. Save the changes to the ``/etc/cinder/cinder.conf`` file and + restart the ``cinder-volume`` service. + + + +Nexenta NFS driver +~~~~~~~~~~~~~~~~~~ +The Nexenta NFS driver allows you to use NexentaStor appliance to store +Compute volumes via NFS. Every Compute volume is represented by a single +NFS file within a shared directory. + +While the NFS protocols standardize file access for users, they do not +standardize administrative actions such as taking snapshots or replicating +file systems. The OpenStack Volume Drivers bring a common interface to these +operations. 
The Nexenta NFS driver implements these standard actions using +the ZFS management plane that is already deployed on NexentaStor appliances. + +The Nexenta NFS volume driver should work with all versions of NexentaStor. +The NexentaStor appliance must be installed and configured according to the +relevant Nexenta documentation. A single-parent file system must be created +for all virtual disk directories supported for OpenStack. This directory must +be created and exported on each NexentaStor appliance. This should be done as +specified in the release- specific NexentaStor documentation. + +You must configure these items for each NexentaStor appliance that the NFS +volume driver controls: + +#. Make the following changes on the volume node ``/etc/cinder/cinder.conf`` + file. + + .. code-block:: ini + + # Enable Nexenta NFS driver + volume_driver=cinder.volume.drivers.nexenta.nfs.NexentaNfsDriver + + # Path to shares config file + nexenta_shares_config=/home/ubuntu/shares.cfg + + .. note:: + + Add your list of Nexenta NFS servers to the file you specified with the + ``nexenta_shares_config`` option. For example, this is how this file should look: + + .. code-block:: bash + + 192.168.1.200:/volumes/VOLUME_NAME/NFS_SHARE http://USER:PASSWORD@192.168.1.200:8457 + 192.168.1.201:/volumes/VOLUME_NAME/NFS_SHARE http://USER:PASSWORD@192.168.1.201:8457 + 192.168.1.202:/volumes/VOLUME_NAME/NFS_SHARE http://USER:PASSWORD@192.168.1.202:8457 + +Each line in this file represents an NFS share. The first part of the line is +the NFS share URL, the second line is the connection URL to the NexentaStor +Appliance. + +Driver options +~~~~~~~~~~~~~~ + +Nexenta Driver supports these options: + +.. include:: ../../tables/cinder-nexenta.rst diff --git a/doc/source/config-reference/block-storage/drivers/nexentastor5-driver.rst b/doc/source/config-reference/block-storage/drivers/nexentastor5-driver.rst new file mode 100644 index 00000000000..30802aab86c --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/nexentastor5-driver.rst @@ -0,0 +1,153 @@ +===================================== +NexentaStor 5.x NFS and iSCSI drivers +===================================== + +NexentaStor is an Open Source-driven Software-Defined Storage (OpenSDS) +platform delivering unified file (NFS and SMB) and block (FC and iSCSI) +storage services. NexentaStor runs on industry standard hardware, scales from +tens of terabytes to petabyte configurations, and includes all data management +functionality by default. + +For user documentation, see the +`Nexenta Documentation Center `__. + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +* Create, delete, attach, and detach volumes. + +* Create, list, and delete volume snapshots. + +* Create a volume from a snapshot. + +* Copy an image to a volume. + +* Copy a volume to an image. + +* Clone a volume. + +* Extend a volume. + +* Migrate a volume. + +* Change volume type. + +iSCSI driver +~~~~~~~~~~~~ + +The NexentaStor appliance must be installed and configured according to the +relevant Nexenta documentation. A pool and an enclosing namespace must be +created for all iSCSI volumes to be accessed through the volume driver. This +should be done as specified in the release-specific NexentaStor documentation. + +The NexentaStor Appliance iSCSI driver is selected using the normal procedures +for one or multiple back-end volume drivers. + + +You must configure these items for each NexentaStor appliance that the iSCSI +volume driver controls: + +#. 
Make the following changes on the volume node ``/etc/cinder/cinder.conf`` + file. + + .. code-block:: ini + + # Enable Nexenta iSCSI driver + volume_driver=cinder.volume.drivers.nexenta.ns5.iscsi.NexentaISCSIDriver + + # IP address of NexentaStor host (string value) + nexenta_host=HOST-IP + + # Port for Rest API (integer value) + nexenta_rest_port=8080 + + # Username for NexentaStor Rest (string value) + nexenta_user=USERNAME + + # Password for NexentaStor Rest (string value) + nexenta_password=PASSWORD + + # Pool on NexentaStor appliance (string value) + nexenta_volume=volume_name + + # Name of a parent Volume group where cinder created zvols will reside (string value) + nexenta_volume_group = iscsi + + .. note:: + + nexenta_volume represents a zpool, which is called pool on NS 5.x appliance. + It must be pre-created before enabling the driver. + + Volume group does not need to be pre-created, the driver will create it if does not exist. + +#. Save the changes to the ``/etc/cinder/cinder.conf`` file and + restart the ``cinder-volume`` service. + +NFS driver +~~~~~~~~~~ +The Nexenta NFS driver allows you to use NexentaStor appliance to store +Compute volumes via NFS. Every Compute volume is represented by a single +NFS file within a shared directory. + +While the NFS protocols standardize file access for users, they do not +standardize administrative actions such as taking snapshots or replicating +file systems. The OpenStack Volume Drivers bring a common interface to these +operations. The Nexenta NFS driver implements these standard actions using the +ZFS management plane that already is deployed on NexentaStor appliances. + +The NexentaStor appliance must be installed and configured according to the +relevant Nexenta documentation. A single-parent file system must be created +for all virtual disk directories supported for OpenStack. +Create and export the directory on each NexentaStor appliance. + +You must configure these items for each NexentaStor appliance that the NFS +volume driver controls: + +#. Make the following changes on the volume node ``/etc/cinder/cinder.conf`` + file. + + .. code-block:: ini + + # Enable Nexenta NFS driver + volume_driver=cinder.volume.drivers.nexenta.ns5.nfs.NexentaNfsDriver + + # IP address or Hostname of NexentaStor host (string value) + nas_host=HOST-IP + + # Port for Rest API (integer value) + nexenta_rest_port=8080 + + # Path to parent filesystem (string value) + nas_share_path=POOL/FILESYSTEM + + # Specify NFS version + nas_mount_options=vers=4 + +#. Create filesystem on appliance and share via NFS. For example: + + .. code-block:: vim + + "securityContexts": [ + {"readWriteList": [{"allow": true, "etype": "fqnip", "entity": "1.1.1.1"}], + "root": [{"allow": true, "etype": "fqnip", "entity": "1.1.1.1"}], + "securityModes": ["sys"]}] + +#. Create ACL for the filesystem. For example: + + .. code-block:: json + + {"type": "allow", + "principal": "everyone@", + "permissions": ["list_directory","read_data","add_file","write_data", + "add_subdirectory","append_data","read_xattr","write_xattr","execute", + "delete_child","read_attributes","write_attributes","delete","read_acl", + "write_acl","write_owner","synchronize"], + "flags": ["file_inherit","dir_inherit"]} + + +Driver options +~~~~~~~~~~~~~~ + +Nexenta Driver supports these options: + +.. 
include:: ../../tables/cinder-nexenta5.rst diff --git a/doc/source/config-reference/block-storage/drivers/nfs-volume-driver.rst b/doc/source/config-reference/block-storage/drivers/nfs-volume-driver.rst new file mode 100644 index 00000000000..4d99eb842ef --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/nfs-volume-driver.rst @@ -0,0 +1,157 @@ +========== +NFS driver +========== + +The Network File System (NFS) is a distributed file system protocol +originally developed by Sun Microsystems in 1984. An NFS server +``exports`` one or more of its file systems, known as ``shares``. +An NFS client can mount these exported shares on its own file system. +You can perform file actions on this mounted remote file system as +if the file system were local. + +How the NFS driver works +~~~~~~~~~~~~~~~~~~~~~~~~ + +The NFS driver, and other drivers based on it, work quite differently +than a traditional block storage driver. + +The NFS driver does not actually allow an instance to access a storage +device at the block level. Instead, files are created on an NFS share +and mapped to instances, which emulates a block device. +This works in a similar way to QEMU, which stores instances in the +``/var/lib/nova/instances`` directory. + +Enable the NFS driver and related options +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +To use Cinder with the NFS driver, first set the ``volume_driver`` +in the ``cinder.conf`` configuration file: + +.. code-block:: ini + + volume_driver=cinder.volume.drivers.nfs.NfsDriver + +The following table contains the options supported by the NFS driver. + +.. include:: ../../tables/cinder-storage_nfs.rst + +.. note:: + + As of the Icehouse release, the NFS driver (and other drivers based + off it) will attempt to mount shares using version 4.1 of the NFS + protocol (including pNFS). If the mount attempt is unsuccessful due + to a lack of client or server support, a subsequent mount attempt + that requests the default behavior of the :command:`mount.nfs` command + will be performed. On most distributions, the default behavior is to + attempt mounting first with NFS v4.0, then silently fall back to NFS + v3.0 if necessary. If the ``nfs_mount_options`` configuration option + contains a request for a specific version of NFS to be used, or if + specific options are specified in the shares configuration file + specified by the ``nfs_shares_config`` configuration option, the + mount will be attempted as requested with no subsequent attempts. + +How to use the NFS driver +~~~~~~~~~~~~~~~~~~~~~~~~~ + +Creating an NFS server is outside the scope of this document. + +Configure with one NFS server +----------------------------- + +This example assumes access to the following NFS server and mount point: + +* 192.168.1.200:/storage + +This example demonstrates the usage of this driver with one NFS server. + +Set the ``nas_host`` option to the IP address or host name of your NFS +server, and the ``nas_share_path`` option to the NFS export path: + +.. code-block:: ini + + nas_host = 192.168.1.200 + nas_share_path = /storage + +Configure with multiple NFS servers +----------------------------------- + +.. note:: + + You can use the multiple NFS servers with `cinder multi back ends + `_ feature. + Configure the :ref:`enabled_backends ` option with + multiple values, and use the ``nas_host`` and ``nas_share`` options + for each back end as described above. 
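+
+For example, a two back-end layout along those lines might look like the
+following; the back-end names and addresses are illustrative only:
+
+.. code-block:: ini
+
+   [DEFAULT]
+   enabled_backends = nfs-1,nfs-2
+
+   [nfs-1]
+   volume_driver = cinder.volume.drivers.nfs.NfsDriver
+   volume_backend_name = nfs-1
+   nas_host = 192.168.1.200
+   nas_share_path = /storage
+
+   [nfs-2]
+   volume_driver = cinder.volume.drivers.nfs.NfsDriver
+   volume_backend_name = nfs-2
+   nas_host = 192.168.1.201
+   nas_share_path = /storage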
+ +The below example is another method to use multiple NFS servers, +and demonstrates the usage of this driver with multiple NFS servers. +Multiple servers are not required. One is usually enough. + +This example assumes access to the following NFS servers and mount points: + +* 192.168.1.200:/storage +* 192.168.1.201:/storage +* 192.168.1.202:/storage + +#. Add your list of NFS servers to the file you specified with the + ``nfs_shares_config`` option. For example, if the value of this option + was set to ``/etc/cinder/shares.txt`` file, then: + + .. code-block:: console + + # cat /etc/cinder/shares.txt + 192.168.1.200:/storage + 192.168.1.201:/storage + 192.168.1.202:/storage + + Comments are allowed in this file. They begin with a ``#``. + +#. Configure the ``nfs_mount_point_base`` option. This is a directory + where ``cinder-volume`` mounts all NFS shares stored in the ``shares.txt`` + file. For this example, ``/var/lib/cinder/nfs`` is used. You can, + of course, use the default value of ``$state_path/mnt``. + +#. Start the ``cinder-volume`` service. ``/var/lib/cinder/nfs`` should + now contain a directory for each NFS share specified in the ``shares.txt`` + file. The name of each directory is a hashed name: + + .. code-block:: console + + # ls /var/lib/cinder/nfs/ + ... + 46c5db75dc3a3a50a10bfd1a456a9f3f + ... + +#. You can now create volumes as you normally would: + + .. code-block:: console + + $ openstack volume create --size 5 MYVOLUME + # ls /var/lib/cinder/nfs/46c5db75dc3a3a50a10bfd1a456a9f3f + volume-a8862558-e6d6-4648-b5df-bb84f31c8935 + +This volume can also be attached and deleted just like other volumes. +However, snapshotting is **not** supported. + +NFS driver notes +~~~~~~~~~~~~~~~~ + +* ``cinder-volume`` manages the mounting of the NFS shares as well as + volume creation on the shares. Keep this in mind when planning your + OpenStack architecture. If you have one master NFS server, it might + make sense to only have one ``cinder-volume`` service to handle all + requests to that NFS server. However, if that single server is unable + to handle all requests, more than one ``cinder-volume`` service is + needed as well as potentially more than one NFS server. + +* Because data is stored in a file and not actually on a block storage + device, you might not see the same IO performance as you would with + a traditional block storage driver. Please test accordingly. + +* Despite possible IO performance loss, having volume data stored in + a file might be beneficial. For example, backing up volumes can be + as easy as copying the volume files. + +.. note:: + + Regular IO flushing and syncing still stands. diff --git a/doc/source/config-reference/block-storage/drivers/nimble-volume-driver.rst b/doc/source/config-reference/block-storage/drivers/nimble-volume-driver.rst new file mode 100644 index 00000000000..1c5763b2084 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/nimble-volume-driver.rst @@ -0,0 +1,134 @@ +============================ +Nimble Storage volume driver +============================ + +Nimble Storage fully integrates with the OpenStack platform through +the Nimble Cinder driver, allowing a host to configure and manage Nimble +Storage array features through Block Storage interfaces. + +Support for iSCSI storage protocol is available with NimbleISCSIDriver +Volume Driver class and Fibre Channel with NimbleFCDriver. + +Support for the Liberty release and above is available from Nimble OS +2.3.8 or later. 
+ +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +* Create, delete, clone, attach, and detach volumes +* Create and delete volume snapshots +* Create a volume from a snapshot +* Copy an image to a volume +* Copy a volume to an image +* Extend a volume +* Get volume statistics +* Manage and unmanage a volume +* Enable encryption and default performance policy for a volume-type + extra-specs +* Force backup of an in-use volume. + +Nimble Storage driver configuration +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Update the file ``/etc/cinder/cinder.conf`` with the given configuration. + +In case of a basic (single back-end) configuration, add the parameters +within the ``[default]`` section as follows. + +.. code-block:: ini + + [default] + san_ip = NIMBLE_MGMT_IP + san_login = NIMBLE_USER + san_password = NIMBLE_PASSWORD + use_multipath_for_image_xfer = True + volume_driver = NIMBLE_VOLUME_DRIVER + +In case of multiple back-end configuration, for example, configuration +which supports multiple Nimble Storage arrays or a single Nimble Storage +array with arrays from other vendors, use the following parameters. + +.. code-block:: ini + + [default] + enabled_backends = Nimble-Cinder + + [Nimble-Cinder] + san_ip = NIMBLE_MGMT_IP + san_login = NIMBLE_USER + san_password = NIMBLE_PASSWORD + use_multipath_for_image_xfer = True + volume_driver = NIMBLE_VOLUME_DRIVER + volume_backend_name = NIMBLE_BACKEND_NAME + +In case of multiple back-end configuration, Nimble Storage volume type +is created and associated with a back-end name as follows. + +.. note:: + + Single back-end configuration users do not need to create the volume type. + +.. code-block:: console + + $ openstack volume type create NIMBLE_VOLUME_TYPE + $ openstack volume type set --property volume_backend_name=NIMBLE_BACKEND_NAME NIMBLE_VOLUME_TYPE + +This section explains the variables used above: + +NIMBLE_MGMT_IP + Management IP address of Nimble Storage array/group. + +NIMBLE_USER + Nimble Storage account login with minimum ``power user`` (admin) privilege + if RBAC is used. + +NIMBLE_PASSWORD + Password of the admin account for nimble array. + +NIMBLE_VOLUME_DRIVER + Use either cinder.volume.drivers.nimble.NimbleISCSIDriver for iSCSI or + cinder.volume.drivers.nimble.NimbleFCDriver for Fibre Channel. + +NIMBLE_BACKEND_NAME + A volume back-end name which is specified in the ``cinder.conf`` file. + This is also used while assigning a back-end name to the Nimble volume type. + +NIMBLE_VOLUME_TYPE + The Nimble volume-type which is created from the CLI and associated with + ``NIMBLE_BACKEND_NAME``. + + .. note:: + + Restart the ``cinder-api``, ``cinder-scheduler``, and ``cinder-volume`` + services after updating the ``cinder.conf`` file. + +Nimble driver extra spec options +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The Nimble volume driver also supports the following extra spec options: + +'nimble:encryption'='yes' + Used to enable encryption for a volume-type. + +'nimble:perfpol-name'=PERF_POL_NAME + PERF_POL_NAME is the name of a performance policy which exists on the + Nimble array and should be enabled for every volume in a volume type. + +'nimble:multi-initiator'='true' + Used to enable multi-initiator access for a volume-type. + +These extra-specs can be enabled by using the following command: + +.. code-block:: console + + $ openstack volume type set --property KEY=VALUE VOLUME_TYPE + +``VOLUME_TYPE`` is the Nimble volume type and ``KEY`` and ``VALUE`` are +the options mentioned above. 
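+
+For instance, reusing the placeholder names defined above, encryption and an
+existing performance policy could be enabled on the volume type as follows
+(the policy named by ``PERF_POL_NAME`` must already exist on the Nimble
+array):
+
+.. code-block:: console
+
+   $ openstack volume type set --property nimble:encryption=yes NIMBLE_VOLUME_TYPE
+   $ openstack volume type set --property nimble:perfpol-name=PERF_POL_NAME NIMBLE_VOLUME_TYPE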
+
+Configuration options
+~~~~~~~~~~~~~~~~~~~~~
+
+The Nimble storage driver supports these configuration options:
+
+.. include:: ../../tables/cinder-nimble.rst
diff --git a/doc/source/config-reference/block-storage/drivers/prophetstor-dpl-driver.rst b/doc/source/config-reference/block-storage/drivers/prophetstor-dpl-driver.rst
new file mode 100644
index 00000000000..3f6d21261ac
--- /dev/null
+++ b/doc/source/config-reference/block-storage/drivers/prophetstor-dpl-driver.rst
@@ -0,0 +1,104 @@
+===========================================
+ProphetStor Fibre Channel and iSCSI drivers
+===========================================
+
+ProphetStor Fibre Channel and iSCSI drivers add support for
+ProphetStor Flexvisor through the Block Storage service.
+ProphetStor Flexvisor enables commodity x86 hardware to serve as
+software-defined storage, leveraging well-proven ZFS for disk management
+to provide enterprise-grade storage services such as snapshots, data
+protection with different RAID levels, replication, and deduplication.
+
+The ``DPLFCDriver`` and ``DPLISCSIDriver`` drivers run volume operations
+by communicating with the ProphetStor storage system over HTTPS.
+
+Supported operations
+~~~~~~~~~~~~~~~~~~~~
+
+* Create, delete, attach, and detach volumes.
+
+* Create, list, and delete volume snapshots.
+
+* Create a volume from a snapshot.
+
+* Copy an image to a volume.
+
+* Copy a volume to an image.
+
+* Clone a volume.
+
+* Extend a volume.
+
+Enable the Fibre Channel or iSCSI drivers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ``DPLFCDriver`` and ``DPLISCSIDriver`` are installed with the OpenStack
+software.
+
+#. Query the storage pool ID to configure ``dpl_pool`` in the ``cinder.conf``
+   file.
+
+   a. Log on to the storage system with administrator access.
+
+      .. code-block:: console
+
+         $ ssh root@STORAGE_IP_ADDRESS
+
+   b. View the current usable pool ID.
+
+      .. code-block:: console
+
+         $ flvcli show pool list
+         - d5bd40b58ea84e9da09dcf25a01fdc07 : default_pool_dc07
+
+   c. Use ``d5bd40b58ea84e9da09dcf25a01fdc07`` to configure the ``dpl_pool`` of
+      the ``/etc/cinder/cinder.conf`` file.
+
+      .. note::
+
+         Other management commands can be referenced with the help command
+         :command:`flvcli -h`.
+
+#. Make the following changes on the volume node ``/etc/cinder/cinder.conf``
+   file.
+
+   .. code-block:: ini
+
+      # IP address of SAN controller (string value)
+      san_ip=STORAGE IP ADDRESS
+
+      # Username for SAN controller (string value)
+      san_login=USERNAME
+
+      # Password for SAN controller (string value)
+      san_password=PASSWORD
+
+      # Use thin provisioning for SAN volumes? (boolean value)
+      san_thin_provision=true
+
+      # The port that the iSCSI daemon is listening on. (integer value)
+      iscsi_port=3260
+
+      # DPL pool uuid in which DPL volumes are stored. (string value)
+      dpl_pool=d5bd40b58ea84e9da09dcf25a01fdc07
+
+      # DPL port number. (integer value)
+      dpl_port=8357
+
+      # Uncomment one of the next two options to enable Fibre Channel or iSCSI
+      # FIBRE CHANNEL (uncomment the next line to enable the FC driver)
+      #volume_driver=cinder.volume.drivers.prophetstor.dpl_fc.DPLFCDriver
+      # iSCSI (uncomment the next line to enable the iSCSI driver)
+      #volume_driver=cinder.volume.drivers.prophetstor.dpl_iscsi.DPLISCSIDriver
+
+#. Save the changes to the ``/etc/cinder/cinder.conf`` file and
+   restart the ``cinder-volume`` service.
+
+The ProphetStor Fibre Channel or iSCSI drivers are now enabled on your
+OpenStack system. If you experience problems, review the Block Storage
+service log files for errors.
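+
+As a quick smoke test (the volume name below is arbitrary), you can create a
+small volume and confirm that it reaches the ``available`` status:
+
+.. code-block:: console
+
+   $ openstack volume create --size 1 dpl-test-volume
+   $ openstack volume show dpl-test-volume -c status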
+ +The following table contains the options supported by the ProphetStor +storage driver. + +.. include:: ../../tables/cinder-prophetstor_dpl.rst diff --git a/doc/source/config-reference/block-storage/drivers/pure-storage-driver.rst b/doc/source/config-reference/block-storage/drivers/pure-storage-driver.rst new file mode 100644 index 00000000000..6e39fd1715b --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/pure-storage-driver.rst @@ -0,0 +1,319 @@ +=================================================== +Pure Storage iSCSI and Fibre Channel volume drivers +=================================================== + +The Pure Storage FlashArray volume drivers for OpenStack Block Storage +interact with configured Pure Storage arrays and support various +operations. + +Support for iSCSI storage protocol is available with the PureISCSIDriver +Volume Driver class, and Fibre Channel with PureFCDriver. + +All drivers are compatible with Purity FlashArrays that support the REST +API version 1.2, 1.3, or 1.4 (Purity 4.0.0 and newer). + +Limitations and known issues +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +If you do not set up the nodes hosting instances to use multipathing, +all network connectivity will use a single physical port on the array. +In addition to significantly limiting the available bandwidth, this +means you do not have the high-availability and non-disruptive upgrade +benefits provided by FlashArray. Multipathing must be used to take advantage +of these benefits. + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +* Create, delete, attach, detach, retype, clone, and extend volumes. + +* Create a volume from snapshot. + +* Create, list, and delete volume snapshots. + +* Create, list, update, and delete consistency groups. + +* Create, list, and delete consistency group snapshots. + +* Manage and unmanage a volume. + +* Manage and unmanage a snapshot. + +* Get volume statistics. + +* Create a thin provisioned volume. + +* Replicate volumes to remote Pure Storage array(s). + +Configure OpenStack and Purity +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +You need to configure both your Purity array and your OpenStack cluster. + +.. note:: + + These instructions assume that the ``cinder-api`` and ``cinder-scheduler`` + services are installed and configured in your OpenStack cluster. + +Configure the OpenStack Block Storage service +--------------------------------------------- + +In these steps, you will edit the ``cinder.conf`` file to configure the +OpenStack Block Storage service to enable multipathing and to use the +Pure Storage FlashArray as back-end storage. + +#. Install Pure Storage PyPI module. + A requirement for the Pure Storage driver is the installation of the + Pure Storage Python SDK version 1.4.0 or later from PyPI. + + .. code-block:: console + + $ pip install purestorage + +#. Retrieve an API token from Purity. + The OpenStack Block Storage service configuration requires an API token + from Purity. Actions performed by the volume driver use this token for + authorization. Also, Purity logs the volume driver's actions as being + performed by the user who owns this API token. + + If you created a Purity user account that is dedicated to managing your + OpenStack Block Storage volumes, copy the API token from that user + account. + + Use the appropriate create or list command below to display and copy the + Purity API token: + + * To create a new API token: + + .. code-block:: console + + $ pureadmin create --api-token USER + + The following is an example output: + + .. 
code-block:: console + + $ pureadmin create --api-token pureuser + Name API Token Created + pureuser 902fdca3-7e3f-d2e4-d6a6-24c2285fe1d9 2014-08-04 14:50:30 + + * To list an existing API token: + + .. code-block:: console + + $ pureadmin list --api-token --expose USER + + The following is an example output: + + .. code-block:: console + + $ pureadmin list --api-token --expose pureuser + Name API Token Created + pureuser 902fdca3-7e3f-d2e4-d6a6-24c2285fe1d9 2014-08-04 14:50:30 + +#. Copy the API token retrieved (``902fdca3-7e3f-d2e4-d6a6-24c2285fe1d9`` from + the examples above) to use in the next step. + +#. Edit the OpenStack Block Storage service configuration file. + The following sample ``/etc/cinder/cinder.conf`` configuration lists the + relevant settings for a typical Block Storage service using a single + Pure Storage array: + + .. code-block:: ini + + [DEFAULT] + enabled_backends = puredriver-1 + default_volume_type = puredriver-1 + + [puredriver-1] + volume_backend_name = puredriver-1 + volume_driver = PURE_VOLUME_DRIVER + san_ip = IP_PURE_MGMT + pure_api_token = PURE_API_TOKEN + use_multipath_for_image_xfer = True + + Replace the following variables accordingly: + + PURE_VOLUME_DRIVER + Use either ``cinder.volume.drivers.pure.PureISCSIDriver`` for iSCSI or + ``cinder.volume.drivers.pure.PureFCDriver`` for Fibre Channel + connectivity. + + IP_PURE_MGMT + The IP address of the Pure Storage array's management interface or a + domain name that resolves to that IP address. + + PURE_API_TOKEN + The Purity Authorization token that the volume driver uses to + perform volume management on the Pure Storage array. + +.. note:: + + The volume driver automatically creates Purity host objects for + initiators as needed. If CHAP authentication is enabled via the + ``use_chap_auth`` setting, you must ensure there are no manually + created host objects with IQN's that will be used by the OpenStack + Block Storage service. The driver will only modify credentials on hosts that + it manages. + +.. note:: + + If using the PureFCDriver it is recommended to use the OpenStack + Block Storage Fibre Channel Zone Manager. + +Volume auto-eradication +~~~~~~~~~~~~~~~~~~~~~~~ + +To enable auto-eradication of deleted volumes, snapshots, and consistency +groups on deletion, modify the following option in the ``cinder.conf`` file: + +.. code-block:: ini + + pure_eradicate_on_delete = true + +By default, auto-eradication is disabled and all deleted volumes, snapshots, +and consistency groups are retained on the Pure Storage array in a recoverable +state for 24 hours from time of deletion. + +SSL certification +~~~~~~~~~~~~~~~~~ + +To enable SSL certificate validation, modify the following option in the +``cinder.conf`` file: + +.. code-block:: ini + + driver_ssl_cert_verify = true + +By default, SSL certificate validation is disabled. + +To specify a non-default path to ``CA_Bundle`` file or directory with +certificates of trusted CAs: + + +.. code-block:: ini + + driver_ssl_cert_path = Certificate path + +.. note:: + + This requires the use of Pure Storage Python SDK > 1.4.0. + +Replication configuration +~~~~~~~~~~~~~~~~~~~~~~~~~ + +Add the following to the back-end specification to specify another Flash +Array to replicate to: + +.. 
code-block:: ini + + [puredriver-1] + replication_device = backend_id:PURE2_NAME,san_ip:IP_PURE2_MGMT,api_token:PURE2_API_TOKEN + +Where ``PURE2_NAME`` is the name of the remote Pure Storage system, +``IP_PURE2_MGMT`` is the management IP address of the remote array, +and ``PURE2_API_TOKEN`` is the Purity Authorization token +of the remote array. + +Note that more than one ``replication_device`` line can be added to allow for +multi-target device replication. + +A volume is only replicated if the volume is of a volume-type that has +the extra spec ``replication_enabled`` set to `` True``. + +To create a volume type that specifies replication to remote back ends: + +.. code-block:: console + + $ openstack volume type create ReplicationType + $ openstack volume type set --property replication_enabled=' True' ReplicationType + +The following table contains the optional configuration parameters available +for replication configuration with the Pure Storage array. + +==================================================== ============= ====== +Option Description Default +==================================================== ============= ====== +``pure_replica_interval_default`` Snapshot + replication + interval in + seconds. ``900`` +``pure_replica_retention_short_term_default`` Retain all + snapshots on + target for + this time + (in seconds). ``14400`` +``pure_replica_retention_long_term_per_day_default`` Retain how + many + snapshots + for each + day. ``3`` +``pure_replica_retention_long_term_default`` Retain + snapshots + per day + on target + for this + time (in + days). ``7`` +==================================================== ============= ====== + + +.. note:: + + ``replication-failover`` is only supported from the primary array to any of the + multiple secondary arrays, but subsequent ``replication-failover`` is only + supported back to the original primary array. + +Automatic thin-provisioning/oversubscription ratio +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +To enable this feature where we calculate the array oversubscription ratio as +(total provisioned/actual used), add the following option in the +``cinder.conf`` file: + +.. code-block:: ini + + [puredriver-1] + pure_automatic_max_oversubscription_ratio = True + +By default, this is disabled and we honor the hard-coded configuration option +``max_over_subscription_ratio``. + +.. note:: + + Arrays with very good data reduction rates (compression/data deduplication/thin provisioning) + can get *very* large oversubscription rates applied. + +Scheduling metrics +~~~~~~~~~~~~~~~~~~ + +A large number of metrics are reported by the volume driver which can be useful +in implementing more control over volume placement in multi-backend +environments using the driver filter and weighter methods. + +Metrics reported include, but are not limited to: + +.. code-block:: text + + total_capacity_gb + free_capacity_gb + provisioned_capacity + total_volumes + total_snapshots + total_hosts + total_pgroups + writes_per_sec + reads_per_sec + input_per_sec + output_per_sec + usec_per_read_op + usec_per_read_op + queue_depth + +.. note:: + + All total metrics include non-OpenStack managed objects on the array. + +In conjunction with QOS extra-specs, you can create very complex algorithms to +manage volume placement. More detailed documentation on this is available in +other external documentation. 
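+
+As a sketch only, and assuming the ``DriverFilter`` and ``GoodnessWeigher``
+are enabled in the scheduler, the reported metrics can be referenced from the
+back-end stanza roughly as follows; the thresholds are arbitrary examples,
+and the exact capability names that are usable depend on what the driver
+reports for your array:
+
+.. code-block:: ini
+
+   [puredriver-1]
+   # Stop scheduling new volumes to this back end once it holds 500 volumes.
+   filter_function = "capabilities.total_volumes < 500"
+   # Prefer the back end with the smallest current queue depth.
+   goodness_function = "100 - capabilities.queue_depth"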
diff --git a/doc/source/config-reference/block-storage/drivers/quobyte-driver.rst b/doc/source/config-reference/block-storage/drivers/quobyte-driver.rst new file mode 100644 index 00000000000..f0280a8b1f2 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/quobyte-driver.rst @@ -0,0 +1,61 @@ +============== +Quobyte driver +============== + +The `Quobyte `__ volume driver enables storing Block +Storage service volumes on a Quobyte storage back end. Block Storage service +back ends are mapped to Quobyte volumes and individual Block Storage service +volumes are stored as files on a Quobyte volume. Selection of the appropriate +Quobyte volume is done by the aforementioned back end configuration that +specifies the Quobyte volume explicitly. + +.. note:: + + Note the dual use of the term ``volume`` in the context of Block Storage + service volumes and in the context of Quobyte volumes. + +For more information see `the Quobyte support webpage +`__. + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +The Quobyte volume driver supports the following volume operations: + +- Create, delete, attach, and detach volumes + +- Secure NAS operation (Starting with Mitaka release secure NAS operation is + optional but still default) + +- Create and delete a snapshot + +- Create a volume from a snapshot + +- Extend a volume + +- Clone a volume + +- Copy a volume to image + +- Generic volume migration (no back end optimization) + +.. note:: + + When running VM instances off Quobyte volumes, ensure that the `Quobyte + Compute service driver `__ + has been configured in your OpenStack cloud. + +Configuration +~~~~~~~~~~~~~ + +To activate the Quobyte volume driver, configure the corresponding +``volume_driver`` parameter: + +.. code-block:: ini + + volume_driver = cinder.volume.drivers.quobyte.QuobyteDriver + +The following table contains the configuration options supported by the +Quobyte driver: + +.. include:: ../../tables/cinder-quobyte.rst diff --git a/doc/source/config-reference/block-storage/drivers/scality-sofs-driver.rst b/doc/source/config-reference/block-storage/drivers/scality-sofs-driver.rst new file mode 100644 index 00000000000..2acb1be6efa --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/scality-sofs-driver.rst @@ -0,0 +1,68 @@ +=================== +Scality SOFS driver +=================== + +The Scality SOFS volume driver interacts with configured sfused mounts. + +The Scality SOFS driver manages volumes as sparse files stored on a +Scality Ring through sfused. Ring connection settings and sfused options +are defined in the ``cinder.conf`` file and the configuration file +pointed to by the ``scality_sofs_config`` option, typically +``/etc/sfused.conf``. + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +The Scality SOFS volume driver provides the following Block Storage +volume operations: + +- Create, delete, attach (map), and detach (unmap) volumes. + +- Create, list, and delete volume snapshots. + +- Create a volume from a snapshot. + +- Copy an image to a volume. + +- Copy a volume to an image. + +- Clone a volume. + +- Extend a volume. + +- Backup a volume. + +- Restore backup to new or existing volume. + +Configuration +~~~~~~~~~~~~~ + +Use the following instructions to update the ``cinder.conf`` +configuration file: + +.. 
code-block:: ini + + [DEFAULT] + enabled_backends = scality-1 + + [scality-1] + volume_driver = cinder.volume.drivers.scality.ScalityDriver + volume_backend_name = scality-1 + + scality_sofs_config = /etc/sfused.conf + scality_sofs_mount_point = /cinder + scality_sofs_volume_dir = cinder/volumes + +Compute configuration +~~~~~~~~~~~~~~~~~~~~~ + +Use the following instructions to update the ``nova.conf`` configuration +file: + +.. code-block:: ini + + [libvirt] + scality_sofs_mount_point = /cinder + scality_sofs_config = /etc/sfused.conf + +.. include:: ../../tables/cinder-scality.rst diff --git a/doc/source/config-reference/block-storage/drivers/sheepdog-driver.rst b/doc/source/config-reference/block-storage/drivers/sheepdog-driver.rst new file mode 100644 index 00000000000..775b090f2cf --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/sheepdog-driver.rst @@ -0,0 +1,48 @@ +=============== +Sheepdog driver +=============== + +Sheepdog is an open-source distributed storage system that provides a +virtual storage pool utilizing internal disk of commodity servers. + +Sheepdog scales to several hundred nodes, and has powerful virtual disk +management features like snapshotting, cloning, rollback, and thin +provisioning. + +More information can be found on `Sheepdog +Project `__. + +This driver enables the use of Sheepdog through Qemu/KVM. + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +Sheepdog driver supports these operations: + +- Create, delete, attach, and detach volumes. + +- Create, list, and delete volume snapshots. + +- Create a volume from a snapshot. + +- Copy an image to a volume. + +- Copy a volume to an image. + +- Clone a volume. + +- Extend a volume. + +Configuration +~~~~~~~~~~~~~ + +Set the following option in the ``cinder.conf`` file: + +.. code-block:: ini + + volume_driver = cinder.volume.drivers.sheepdog.SheepdogDriver + +The following table contains the configuration options supported by the +Sheepdog driver: + +.. include:: ../../tables/cinder-sheepdog.rst diff --git a/doc/source/config-reference/block-storage/drivers/smbfs-volume-driver.rst b/doc/source/config-reference/block-storage/drivers/smbfs-volume-driver.rst new file mode 100644 index 00000000000..e0f12974218 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/smbfs-volume-driver.rst @@ -0,0 +1,17 @@ +============== +SambaFS driver +============== + +There is a volume back-end for Samba filesystems. Set the following in +your ``cinder.conf`` file, and use the following options to configure it. + +.. note:: + + The SambaFS driver requires ``qemu-img`` version 1.7 or higher on Linux + nodes, and ``qemu-img`` version 1.6 or higher on Windows nodes. + +.. code-block:: ini + + volume_driver = cinder.volume.drivers.smbfs.SmbfsDriver + +.. include:: ../../tables/cinder-smbfs.rst diff --git a/doc/source/config-reference/block-storage/drivers/solidfire-volume-driver.rst b/doc/source/config-reference/block-storage/drivers/solidfire-volume-driver.rst new file mode 100644 index 00000000000..7ddaa870ded --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/solidfire-volume-driver.rst @@ -0,0 +1,104 @@ +========= +SolidFire +========= + +The SolidFire Cluster is a high performance all SSD iSCSI storage device that +provides massive scale out capability and extreme fault tolerance. A key +feature of the SolidFire cluster is the ability to set and modify during +operation specific QoS levels on a volume for volume basis. 
The SolidFire +cluster offers this along with de-duplication, compression, and an architecture +that takes full advantage of SSDs. + +To configure the use of a SolidFire cluster with Block Storage, modify your +``cinder.conf`` file as follows: + +.. code-block:: ini + + volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver + san_ip = 172.17.1.182 # the address of your MVIP + san_login = sfadmin # your cluster admin login + san_password = sfpassword # your cluster admin password + sf_account_prefix = '' # prefix for tenant account creation on solidfire cluster + +.. warning:: + + Older versions of the SolidFire driver (prior to Icehouse) created a unique + account prefixed with ``$cinder-volume-service-hostname-$tenant-id`` on the + SolidFire cluster for each tenant. Unfortunately, this account formation + resulted in issues for High Availability (HA) installations and + installations where the ``cinder-volume`` service can move to a new node. + The current default implementation does not experience this issue as no + prefix is used. For installations created on a prior release, the OLD + default behavior can be configured by using the keyword ``hostname`` in + sf_account_prefix. + +.. note:: + + The SolidFire driver creates names for volumes on the back end using the + format UUID-. This works well, but there is a possibility of a + UUID collision for customers running multiple clouds against the same + cluster. In Mitaka the ability was added to eliminate the possibility of + collisions by introducing the **sf_volume_prefix** configuration variable. + On the SolidFire cluster each volume will be labeled with the prefix, + providing the ability to configure unique volume names for each cloud. + The default prefix is 'UUID-'. + + Changing the setting on an existing deployment will result in the existing + volumes being inaccessible. To introduce this change to an existing + deployment it is recommended to add the Cluster as if it were a second + backend and disable new deployments to the current back end. + +.. include:: ../../tables/cinder-solidfire.rst + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +* Create, delete, attach, and detach volumes. +* Create, list, and delete volume snapshots. +* Create a volume from a snapshot. +* Copy an image to a volume. +* Copy a volume to an image. +* Clone a volume. +* Extend a volume. +* Retype a volume. +* Manage and unmanage a volume. +* Consistency group snapshots. + +QoS support for the SolidFire drivers includes the ability to set the +following capabilities in the OpenStack Block Storage API +``cinder.api.contrib.qos_specs_manage`` qos specs extension module: + +* **minIOPS** - The minimum number of IOPS guaranteed for this volume. + Default = 100. + +* **maxIOPS** - The maximum number of IOPS allowed for this volume. + Default = 15,000. + +* **burstIOPS** - The maximum number of IOPS allowed over a short period of + time. Default = 15,000. + +* **scaledIOPS** - The presence of this key is a flag indicating that the + above IOPS should be scaled by the following scale values. It is recommended + to set the value of scaledIOPS to True, but any value will work. The + absence of this key implies false. + +* **scaleMin** - The amount to scale the minIOPS by for every 1GB of + additional volume size. The value must be an integer. + +* **scaleMax** - The amount to scale the maxIOPS by for every 1GB of additional + volume size. The value must be an integer. 
+ +* **scaleBurst** - The amount to scale the burstIOPS by for every 1GB of + additional volume size. The value must be an integer. + +The QoS keys above no longer need to be scoped, but they must be created and +associated with a volume type. For information about how to set the key-value +pairs and associate them with a volume type, see the `volume qos +`_ +section in the OpenStackClient command list. + +.. note:: + + When using scaledIOPS, the scale values must be chosen such that the + constraint minIOPS <= maxIOPS <= burstIOPS is always true. The driver will + enforce this constraint. diff --git a/doc/source/config-reference/block-storage/drivers/synology-dsm-driver.rst b/doc/source/config-reference/block-storage/drivers/synology-dsm-driver.rst new file mode 100755 index 00000000000..72e1c6a0d5c --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/synology-dsm-driver.rst @@ -0,0 +1,124 @@ +========================== +Synology DSM volume driver +========================== + +The ``SynoISCSIDriver`` volume driver allows Synology NAS to be used for Block +Storage (cinder) in OpenStack deployments. Information on OpenStack Block +Storage volumes is available in the DSM Storage Manager. + +System requirements +~~~~~~~~~~~~~~~~~~~ + +The Synology driver has the following requirements: + +* DSM version 6.0.2 or later. + +* Your Synology NAS model must support advanced file LUN, iSCSI Target, and + snapshot features. Refer to the `Support List for applied models + `_. + +.. note:: + + The DSM driver is available in the OpenStack Newton release. + + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +* Create, delete, clone, attach, and detach volumes. + +* Create and delete volume snapshots. + +* Create a volume from a snapshot. + +* Copy an image to a volume. + +* Copy a volume to an image. + +* Extend a volume. + +* Get volume statistics. + +Driver configuration +~~~~~~~~~~~~~~~~~~~~ + +Edit the ``/etc/cinder/cinder.conf`` file on your volume driver host. + +The Synology driver uses a volume in a Synology NAS as the back end of Block +Storage. Every time you create a new Block Storage volume, the system creates an +advanced file LUN in your Synology volume to back this new Block Storage +volume. + +The following example shows how to use different Synology NAS servers as back +ends. If you want to use multiple volumes on the same Synology NAS, add a +section for each volume, using the volume number to differentiate between +volumes within the same Synology NAS. + +.. code-block:: ini + + [DEFAULT] + enabled_backends = ds1515pV1, ds1515pV2, rs3017xsV1, others + + [ds1515pV1] + # configuration for volume 1 in DS1515+ + + [ds1515pV2] + # configuration for volume 2 in DS1515+ + + [rs3017xsV1] + # configuration for volume 1 in RS3017xs + +Each section indicates the volume number and the way in which the connection is +established. Below is an example of a basic configuration: + +.. code-block:: ini + + [Your_Section_Name] + + # Required settings + volume_driver = cinder.volume.drivers.synology.synology_iscsi.SynoISCSIDriver + iscsi_protocol = iscsi + iscsi_ip_address = DS_IP + synology_admin_port = DS_PORT + synology_username = DS_USER + synology_password = DS_PW + synology_pool_name = DS_VOLUME + + # Optional settings + volume_backend_name = VOLUME_BACKEND_NAME + iscsi_secondary_ip_addresses = IP_ADDRESSES + driver_use_ssl = True + use_chap_auth = True + chap_username = CHAP_USER_NAME + chap_password = CHAP_PASSWORD + +``DS_PORT`` + This is the port for DSM management.
The default value for DSM is 5000 + (HTTP) and 5001 (HTTPS). To use HTTPS connections, you must set + ``driver_use_ssl = True``. + +``DS_IP`` + This is the IP address of your Synology NAS. + +``DS_USER`` + This is the account of any DSM administrator. + +``DS_PW`` + This is the password for ``DS_USER``. + +``DS_VOLUME`` + This is the volume you want to use as the storage pool for the Block + Storage service. The format is ``volume[0-9]+``, and the number is the same + as the volume number in DSM. + +.. note:: + + If you set ``driver_use_ssl`` as ``True``, ``synology_admin_port`` must be + an HTTPS port. + +Configuration options +~~~~~~~~~~~~~~~~~~~~~ + +The Synology DSM driver supports the following configuration options: + +.. include:: ../../tables/cinder-synology.rst diff --git a/doc/source/config-reference/block-storage/drivers/tintri-volume-driver.rst b/doc/source/config-reference/block-storage/drivers/tintri-volume-driver.rst new file mode 100644 index 00000000000..453a82abfb3 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/tintri-volume-driver.rst @@ -0,0 +1,81 @@ +====== +Tintri +====== + +Tintri VMstore is a smart storage that sees, learns, and adapts for cloud and +virtualization. The Tintri Block Storage driver interacts with configured +VMstore running Tintri OS 4.0 and above. It supports various operations using +Tintri REST APIs and NFS protocol. + +To configure the use of a Tintri VMstore with Block Storage, perform the +following actions: + +#. Edit the ``etc/cinder/cinder.conf`` file and set the + ``cinder.volume.drivers.tintri`` options: + + .. code-block:: ini + + volume_driver=cinder.volume.drivers.tintri.TintriDriver + # Mount options passed to the nfs client. See section of the + # nfs man page for details. (string value) + nfs_mount_options = vers=3,lookupcache=pos + + # + # Options defined in cinder.volume.drivers.tintri + # + + # The hostname (or IP address) for the storage system (string + # value) + tintri_server_hostname = {Tintri VMstore Management IP} + + # User name for the storage system (string value) + tintri_server_username = {username} + + # Password for the storage system (string value) + tintri_server_password = {password} + + # API version for the storage system (string value) + # tintri_api_version = v310 + + # Following options needed for NFS configuration + # File with the list of available nfs shares (string value) + # nfs_shares_config = /etc/cinder/nfs_shares + + # Tintri driver will clean up unused image snapshots. With the following + # option, users can configure how long unused image snapshots are + # retained. Default retention policy is 30 days + # tintri_image_cache_expiry_days = 30 + + # Path to NFS shares file storing images. + # Users can store Glance images in the NFS share of the same VMstore + # mentioned in the following file. These images need to have additional + # metadata ``provider_location`` configured in Glance, which should point + # to the NFS share path of the image. + # This option will enable Tintri driver to directly clone from Glance + # image stored on same VMstore (rather than downloading image + # from Glance) + # tintri_image_shares_config = + # + # For example: + # Glance image metadata + # provider_location => + # nfs:///tintri/glance/84829294-c48b-4e16-a878-8b2581efd505 + +#. Edit the ``/etc/nova/nova.conf`` file and set the ``nfs_mount_options``: + + .. code-block:: ini + + [libvirt] + nfs_mount_options = vers=3 + +#. 
Edit the ``/etc/cinder/nfs_shares`` file and add the Tintri VMstore mount + points associated with the configured VMstore management IP in the + ``cinder.conf`` file: + + .. code-block:: bash + + {vmstore_data_ip}:/tintri/{submount1} + {vmstore_data_ip}:/tintri/{submount2} + + +.. include:: ../../tables/cinder-tintri.rst diff --git a/doc/source/config-reference/block-storage/drivers/violin-v7000-driver.rst b/doc/source/config-reference/block-storage/drivers/violin-v7000-driver.rst new file mode 100644 index 00000000000..69df6af0068 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/violin-v7000-driver.rst @@ -0,0 +1,107 @@ +=========================================== +Violin Memory 7000 Series FSP volume driver +=========================================== + +The OpenStack V7000 driver package from Violin Memory adds Block Storage +service support for Violin 7300 Flash Storage Platforms (FSPs) and 7700 FSP +controllers. + +The driver package release can be used with any OpenStack Liberty deployment +for all 7300 FSPs and 7700 FSP controllers running Concerto 7.5.3 and later +using Fibre Channel HBAs. + +System requirements +~~~~~~~~~~~~~~~~~~~ + +To use the Violin driver, the following are required: + +- Violin 7300/7700 series FSP with: + + - Concerto OS version 7.5.3 or later + + - Fibre channel host interfaces + +- The Violin block storage driver: This driver implements the block storage API + calls. The driver is included with the OpenStack Liberty release. + +- The vmemclient library: This is the Violin Array Communications library to + the Flash Storage Platform through a REST-like interface. The client can be + installed using the python 'pip' installer tool. Further information on + vmemclient can be found on `PyPI + `__. + + .. code-block:: console + + pip install vmemclient + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +- Create, delete, attach, and detach volumes. + +- Create, list, and delete volume snapshots. + +- Create a volume from a snapshot. + +- Copy an image to a volume. + +- Copy a volume to an image. + +- Clone a volume. + +- Extend a volume. + +.. note:: + + Listed operations are supported for thick, thin, and dedup luns, + with the exception of cloning. Cloning operations are supported only + on thick luns. + +Driver configuration +~~~~~~~~~~~~~~~~~~~~ + +Once the array is configured as per the installation guide, it is simply a +matter of editing the cinder configuration file to add or modify the +parameters. The driver currently only supports fibre channel configuration. + +Fibre channel configuration +--------------------------- + +Set the following in your ``cinder.conf`` configuration file, replacing the +variables using the guide in the following section: + +.. code-block:: ini + + volume_driver = cinder.volume.drivers.violin.v7000_fcp.V7000FCPDriver + volume_backend_name = vmem_violinfsp + extra_capabilities = VMEM_CAPABILITIES + san_ip = VMEM_MGMT_IP + san_login = VMEM_USER_NAME + san_password = VMEM_PASSWORD + use_multipath_for_image_xfer = true + +Configuration parameters +------------------------ + +Description of configuration value placeholders: + +VMEM_CAPABILITIES + User defined capabilities, a JSON formatted string specifying key-value + pairs (string value). The ones particularly supported are + ``dedup`` and ``thin``. 
Only these two capabilities are listed here in + ``cinder.conf`` file, indicating this backend be selected for creating + luns which have a volume type associated with them that have ``dedup`` + or ``thin`` extra_specs specified. For example, if the FSP is configured + to support dedup luns, set the associated driver capabilities + to: {"dedup":"True","thin":"True"}. + +VMEM_MGMT_IP + External IP address or host name of the Violin 7300 Memory Gateway. This + can be an IP address or host name. + +VMEM_USER_NAME + Log-in user name for the Violin 7300 Memory Gateway or 7700 FSP controller. + This user must have administrative rights on the array or controller. + +VMEM_PASSWORD + Log-in user's password. diff --git a/doc/source/config-reference/block-storage/drivers/vmware-vmdk-driver.rst b/doc/source/config-reference/block-storage/drivers/vmware-vmdk-driver.rst new file mode 100644 index 00000000000..58b5e9a6945 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/vmware-vmdk-driver.rst @@ -0,0 +1,347 @@ +.. _block_storage_vmdk_driver: + +================== +VMware VMDK driver +================== + +Use the VMware VMDK driver to enable management of the OpenStack Block Storage +volumes on vCenter-managed data stores. Volumes are backed by VMDK files on +data stores that use any VMware-compatible storage technology such as NFS, +iSCSI, FiberChannel, and vSAN. + +.. note:: + + The VMware VMDK driver requires vCenter version 5.1 at minimum. + +Functional context +~~~~~~~~~~~~~~~~~~ + +The VMware VMDK driver connects to vCenter, through which it can dynamically +access all the data stores visible from the ESX hosts in the managed cluster. + +When you create a volume, the VMDK driver creates a VMDK file on demand. The +VMDK file creation completes only when the volume is subsequently attached to +an instance. The reason for this requirement is that data stores visible to the +instance determine where to place the volume. Before the service creates the +VMDK file, attach a volume to the target instance. + +The running vSphere VM is automatically reconfigured to attach the VMDK file as +an extra disk. Once attached, you can log in to the running vSphere VM to +rescan and discover this extra disk. + +With the update to ESX version 6.0, the VMDK driver now supports NFS version +4.1. + +Configuration +~~~~~~~~~~~~~ + +The recommended volume driver for OpenStack Block Storage is the VMware vCenter +VMDK driver. When you configure the driver, you must match it with the +appropriate OpenStack Compute driver from VMware and both drivers must point to +the same server. + +In the ``nova.conf`` file, use this option to define the Compute driver: + +.. code-block:: ini + + compute_driver = vmwareapi.VMwareVCDriver + +In the ``cinder.conf`` file, use this option to define the volume +driver: + +.. code-block:: ini + + volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver + +The following table lists various options that the drivers support for the +OpenStack Block Storage configuration (``cinder.conf``): + +.. include:: ../../tables/cinder-vmware.rst + +VMDK disk type +~~~~~~~~~~~~~~ + +The VMware VMDK drivers support the creation of VMDK disk file types ``thin``, +``lazyZeroedThick`` (sometimes called thick or flat), or ``eagerZeroedThick``. + +A thin virtual disk is allocated and zeroed on demand as the space is used. +Unused space on a Thin disk is available to other users. + +A lazy zeroed thick virtual disk will have all space allocated at disk +creation. 
This reserves the entire disk space, so it is not available to other +users at any time. + +An eager zeroed thick virtual disk is similar to a lazy zeroed thick disk, in +that the entire disk is allocated at creation. However, in this type, any +previous data will be wiped clean on the disk before the write. This can mean +that the disk will take longer to create, but can also prevent issues with +stale data on physical media. + +Use the ``vmware:vmdk_type`` extra spec key with the appropriate value to +specify the VMDK disk file type. This table shows the mapping between the extra +spec entry and the VMDK disk file type: + +.. list-table:: Extra spec entry to VMDK disk file type mapping + :header-rows: 1 + + * - Disk file type + - Extra spec key + - Extra spec value + * - thin + - ``vmware:vmdk_type`` + - ``thin`` + * - lazyZeroedThick + - ``vmware:vmdk_type`` + - ``thick`` + * - eagerZeroedThick + - ``vmware:vmdk_type`` + - ``eagerZeroedThick`` + +If you do not specify a ``vmdk_type`` extra spec entry, the disk file type will +default to ``thin``. + +The following example shows how to create a ``lazyZeroedThick`` VMDK volume by +using the appropriate ``vmdk_type``: + +.. code-block:: console + + $ openstack volume type create THICK_VOLUME + $ openstack volume type set --property vmware:vmdk_type=thick THICK_VOLUME + $ openstack volume create --size 1 --type THICK_VOLUME VOLUME1 + +Clone type +~~~~~~~~~~ + +With the VMware VMDK drivers, you can create a volume from another +source volume or a snapshot point. The VMware vCenter VMDK driver +supports the ``full`` and ``linked/fast`` clone types. Use the +``vmware:clone_type`` extra spec key to specify the clone type. The +following table captures the mapping for clone types: + +.. list-table:: Extra spec entry to clone type mapping + :header-rows: 1 + + * - Clone type + - Extra spec key + - Extra spec value + * - full + - ``vmware:clone_type`` + - ``full`` + * - linked/fast + - ``vmware:clone_type`` + - ``linked`` + +If you do not specify the clone type, the default is ``full``. + +The following example shows linked cloning from a source volume, which is +created from an image: + +.. code-block:: console + + $ openstack volume type create FAST_CLONE + $ openstack volume type set --property vmware:clone_type=linked FAST_CLONE + $ openstack volume create --size 1 --type FAST_CLONE --image MYIMAGE SOURCE_VOL + $ openstack volume create --size 1 --source SOURCE_VOL DEST_VOL + +Use vCenter storage policies to specify back-end data stores +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +This section describes how to configure back-end data stores using storage +policies. In vCenter 5.5 and greater, you can create one or more storage +policies and expose them as a Block Storage volume-type to a vmdk volume. The +storage policies are exposed to the vmdk driver through the extra spec property +with the ``vmware:storage_profile`` key. + +For example, assume a storage policy in vCenter named ``gold_policy.`` and a +Block Storage volume type named ``vol1`` with the extra spec key +``vmware:storage_profile`` set to the value ``gold_policy``. Any Block Storage +volume creation that uses the ``vol1`` volume type places the volume only in +data stores that match the ``gold_policy`` storage policy. + +The Block Storage back-end configuration for vSphere data stores is +automatically determined based on the vCenter configuration. 
If you configure a +connection to connect to vCenter version 5.5 or later in the ``cinder.conf`` +file, the use of storage policies to configure back-end data stores is +automatically supported. + +.. note:: + + You must configure any data stores that you configure for the Block + Storage service for the Compute service. + +**To configure back-end data stores by using storage policies** + +#. In vCenter, tag the data stores to be used for the back end. + + OpenStack also supports policies that are created by using vendor-specific + capabilities; for example vSAN-specific storage policies. + + .. note:: + + The tag value serves as the policy. For details, see :ref:`vmware-spbm`. + +#. Set the extra spec key ``vmware:storage_profile`` in the desired Block + Storage volume types to the policy name that you created in the previous + step. + +#. Optionally, for the ``vmware_host_version`` parameter, enter the version + number of your vSphere platform. For example, ``5.5``. + + This setting overrides the default location for the corresponding WSDL file. + Among other scenarios, you can use this setting to prevent WSDL error + messages during the development phase or to work with a newer version of + vCenter. + +#. Complete the other vCenter configuration parameters as appropriate. + +.. note:: + + Any volume that is created without an associated policy (that is to say, + without an associated volume type that specifies ``vmware:storage_profile`` + extra spec), there is no policy-based placement for that volume. + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +The VMware vCenter VMDK driver supports these operations: + +- Create, delete, attach, and detach volumes. + + .. note:: + + When a volume is attached to an instance, a reconfigure operation is + performed on the instance to add the volume's VMDK to it. The user must + manually rescan and mount the device from within the guest operating + system. + +- Create, list, and delete volume snapshots. + + .. note:: + + Allowed only if volume is not attached to an instance. + +- Create a volume from a snapshot. + + .. note:: + + The vmdk UUID in vCenter will not be set to the volume UUID if the + vCenter version is 6.0 or above and the extra spec key ``vmware:clone_type`` + in the destination volume type is set to ``linked``. + +- Copy an image to a volume. + + .. note:: + + Only images in ``vmdk`` disk format with ``bare`` container format are + supported. The ``vmware_disktype`` property of the image can be + ``preallocated``, ``sparse``, ``streamOptimized`` or ``thin``. + +- Copy a volume to an image. + + .. note:: + + - Allowed only if the volume is not attached to an instance. + - This operation creates a ``streamOptimized`` disk image. + +- Clone a volume. + + .. note:: + + - Supported only if the source volume is not attached to an instance. + - The vmdk UUID in vCenter will not be set to the volume UUID if the + vCenter version is 6.0 or above and the extra spec key ``vmware:clone_type`` + in the destination volume type is set to ``linked``. + +- Backup a volume. + + .. note:: + + This operation creates a backup of the volume in ``streamOptimized`` + disk format. + +- Restore backup to new or existing volume. + + .. note:: + + Supported only if the existing volume doesn't contain snapshots. + +- Change the type of a volume. + + .. note:: + + This operation is supported only if the volume state is ``available``. + +- Extend a volume. + + +.. 
_vmware-spbm: + +Storage policy-based configuration in vCenter +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +You can configure Storage Policy-Based Management (SPBM) profiles for vCenter +data stores supporting the Compute, Image service, and Block Storage components +of an OpenStack implementation. + +In a vSphere OpenStack deployment, SPBM enables you to delegate several data +stores for storage, which reduces the risk of running out of storage space. The +policy logic selects the data store based on accessibility and available +storage space. + +Prerequisites +~~~~~~~~~~~~~ + +- Determine the data stores to be used by the SPBM policy. + +- Determine the tag that identifies the data stores in the OpenStack component + configuration. + +- Create separate policies or sets of data stores for separate + OpenStack components. + +Create storage policies in vCenter +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +#. In vCenter, create the tag that identifies the data stores: + + #. From the :guilabel:`Home` screen, click :guilabel:`Tags`. + + #. Specify a name for the tag. + + #. Specify a tag category. For example, ``spbm-cinder``. + +#. Apply the tag to the data stores to be used by the SPBM policy. + + .. note:: + + For details about creating tags in vSphere, see the `vSphere + documentation + `__. + +#. In vCenter, create a tag-based storage policy that uses one or more tags to + identify a set of data stores. + + .. note:: + + For details about creating storage policies in vSphere, see the `vSphere + documentation + `__. + +Data store selection +~~~~~~~~~~~~~~~~~~~~ + +If storage policy is enabled, the driver initially selects all the data stores +that match the associated storage policy. + +If two or more data stores match the storage policy, the driver chooses a data +store that is connected to the maximum number of hosts. + +In case of ties, the driver chooses the data store with lowest space +utilization, where space utilization is defined by the +``(1-freespace/totalspace)`` meters. + +These actions reduce the number of volume migrations while attaching the volume +to instances. + +The volume must be migrated if the ESX host for the instance cannot access the +data store that contains the volume. diff --git a/doc/source/config-reference/block-storage/drivers/vzstorage-driver.rst b/doc/source/config-reference/block-storage/drivers/vzstorage-driver.rst new file mode 100644 index 00000000000..79411666ca7 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/vzstorage-driver.rst @@ -0,0 +1,14 @@ +======================== +Virtuozzo Storage driver +======================== + +The Virtuozzo Storage driver is a fault-tolerant distributed storage +system that is optimized for virtualization workloads. +Set the following in your ``cinder.conf`` file, and use the following +options to configure it. + +.. code-block:: ini + + volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver + +.. 
include:: ../../tables/cinder-vzstorage.rst diff --git a/doc/source/config-reference/block-storage/drivers/windows-iscsi-volume-driver.rst b/doc/source/config-reference/block-storage/drivers/windows-iscsi-volume-driver.rst new file mode 100644 index 00000000000..709bdd3990f --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/windows-iscsi-volume-driver.rst @@ -0,0 +1,122 @@ +=========================== +Windows iSCSI volume driver +=========================== + +Windows Server 2012 and Windows Storage Server 2012 offer an integrated iSCSI +Target service that can be used with OpenStack Block Storage in your stack. +Because it is an entirely software-based solution, it is particularly suited to +mid-sized networks where the cost of a SAN might be excessive. + +The Windows Block Storage driver works with OpenStack Compute on any +hypervisor. It includes snapshotting support and the ``boot from volume`` +feature. + +This driver creates volumes backed by fixed-type VHD images on Windows Server +2012 and dynamic-type VHDX on Windows Server 2012 R2, stored locally on a +user-specified path. The system uses those images as iSCSI disks and exports +them through iSCSI targets. Each volume has its own iSCSI target. + +This driver has been tested with Windows Server 2012 and Windows Server 2012 R2 +using both the Server and Storage Server distributions. + +Install the ``cinder-volume`` service as well as the required Python components +directly onto the Windows node. + +You may install and configure ``cinder-volume`` and its dependencies manually +using the following guide, or you may use the ``Cinder Volume Installer`` +presented below. + +Installing using the OpenStack cinder volume installer +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +In case you want to avoid all the manual setup, you can use Cloudbase +Solutions' installer. You can find it at +https://www.cloudbase.it/downloads/CinderVolumeSetup_Beta.msi. It installs an +independent Python environment to avoid conflicts with existing applications, +and dynamically generates a ``cinder.conf`` file based on the parameters you +provide. + +``cinder-volume`` will be configured to run as a Windows Service, which can +be restarted using: + +.. code-block:: console + + PS C:\> net stop cinder-volume ; net start cinder-volume + +The installer can also be used in unattended mode. More details about how to +use the installer and its features can be found at https://www.cloudbase.it. + +Windows Server configuration +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The service required to run ``cinder-volume`` on Windows is ``wintarget``, +which in turn requires the iSCSI Target Server Windows feature to be installed. +You can install it by running the following command: + +.. code-block:: console + + PS C:\> Add-WindowsFeature FS-iSCSITarget-Server + +.. note:: + + The Windows Server installation requires at least 16 GB of disk space. The + volumes hosted by this node need additional space. + +For ``cinder-volume`` to work properly, you must configure NTP as explained +in :ref:`configure-ntp-windows`. + +Next, install the requirements as described in :ref:`windows-requirements`. + +Getting the code +~~~~~~~~~~~~~~~~ + +Git can be used to download the necessary source code. The installer for Git +on Windows can be downloaded here: + +https://git-for-windows.github.io/ + +Once installed, run the following to clone the OpenStack Block Storage code: + +..
code-block:: console + + PS C:\> git.exe clone https://git.openstack.org/openstack/cinder + +Configure cinder-volume +~~~~~~~~~~~~~~~~~~~~~~~ + +The ``cinder.conf`` file may be placed in ``C:\etc\cinder``. Below is a +configuration sample for using the Windows iSCSI Driver: + +.. code-block:: ini + + [DEFAULT] + auth_strategy = keystone + volume_name_template = volume-%s + volume_driver = cinder.volume.drivers.windows.WindowsDriver + glance_api_servers = IP_ADDRESS:9292 + rabbit_host = IP_ADDRESS + rabbit_port = 5672 + sql_connection = mysql+pymysql://root:Passw0rd@IP_ADDRESS/cinder + windows_iscsi_lun_path = C:\iSCSIVirtualDisks + rabbit_password = Passw0rd + logdir = C:\OpenStack\Log\ + image_conversion_dir = C:\ImageConversionDir + debug = True + +The following table contains a reference to the only driver specific +option that will be used by the Block Storage Windows driver: + +.. include:: ../../tables/cinder-windows.rst + +Run cinder-volume +----------------- + +After configuring ``cinder-volume`` using the ``cinder.conf`` file, you may +use the following commands to install and run the service (note that you +must replace the variables with the proper paths): + +.. code-block:: console + + PS C:\> python $CinderClonePath\setup.py install + PS C:\> cmd /c C:\python27\python.exe c:\python27\Scripts\cinder-volume" --config-file $CinderConfPath diff --git a/doc/source/config-reference/block-storage/drivers/xio-volume-driver.rst b/doc/source/config-reference/block-storage/drivers/xio-volume-driver.rst new file mode 100644 index 00000000000..32ebb7e0014 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/xio-volume-driver.rst @@ -0,0 +1,122 @@ +================== +X-IO volume driver +================== + +The X-IO volume driver for OpenStack Block Storage enables ISE products to be +managed by OpenStack Block Storage nodes. This driver can be configured to work +with iSCSI and Fibre Channel storage protocols. The X-IO volume driver allows +the cloud operator to take advantage of ISE features like quality of +service (QoS) and Continuous Adaptive Data Placement (CADP). It also supports +creating thin volumes and specifying volume media affinity. + +Requirements +~~~~~~~~~~~~ + +ISE FW 2.8.0 or ISE FW 3.1.0 is required for OpenStack Block Storage +support. The X-IO volume driver will not work with older ISE FW. + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +- Create, delete, attach, detach, retype, clone, and extend volumes. +- Create a volume from snapshot. +- Create, list, and delete volume snapshots. +- Manage and unmanage a volume. +- Get volume statistics. +- Create a thin provisioned volume. +- Create volumes with QoS specifications. + +Configure X-IO Volume driver +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +To configure the use of an ISE product with OpenStack Block Storage, modify +your ``cinder.conf`` file as follows. Be careful to use the one that matches +the storage protocol in use: + +Fibre Channel +------------- + +.. code-block:: ini + + volume_driver = cinder.volume.drivers.xio.XIOISEFCDriver + san_ip = 1.2.3.4 # the address of your ISE REST management interface + san_login = administrator # your ISE management admin login + san_password = password # your ISE management admin password + +iSCSI +----- + +.. 
code-block:: ini + + volume_driver = cinder.volume.drivers.xio.XIOISEISCSIDriver + san_ip = 1.2.3.4 # the address of your ISE REST management interface + san_login = administrator # your ISE management admin login + san_password = password # your ISE management admin password + iscsi_ip_address = ionet_ip # ip address to one ISE port connected to the IONET + +Optional configuration parameters +--------------------------------- + +.. include:: ../../tables/cinder-xio.rst + +Multipath +--------- + +The X-IO ISE supports a multipath configuration, but multipath must be enabled +on the compute node (see *ISE Storage Blade Best Practices Guide*). +For more information, see `X-IO Document Library +`__. + +Volume types +------------ + +OpenStack Block Storage uses volume types to help the administrator specify +attributes for volumes. These attributes are called extra-specs. The X-IO +volume driver support the following extra-specs. + +.. list-table:: Extra specs + :header-rows: 1 + + * - Extra-specs name + - Valid values + - Description + * - ``Feature:Raid`` + - 1, 5 + - RAID level for volume. + * - ``Feature:Pool`` + - 1 - n (n being number of pools on ISE) + - Pool to create volume in. + * - ``Affinity:Type`` + - cadp, flash, hdd + - Volume media affinity type. + * - ``Alloc:Type`` + - 0 (thick), 1 (thin) + - Allocation type for volume. Thick or thin. + * - ``QoS:minIOPS`` + - n (value less than maxIOPS) + - Minimum IOPS setting for volume. + * - ``QoS:maxIOPS`` + - n (value bigger than minIOPS) + - Maximum IOPS setting for volume. + * - ``QoS:burstIOPS`` + - n (value bigger than minIOPS) + - Burst IOPS setting for volume. + +Examples +-------- + +Create a volume type called xio1-flash for volumes that should reside on ssd +storage: + +.. code-block:: console + + $ openstack volume type create xio1-flash + $ openstack volume type set --property Affinity:Type=flash xio1-flash + +Create a volume type called xio1 and set QoS min and max: + +.. code-block:: console + + $ openstack volume type create xio1 + $ openstack volume type set --property QoS:minIOPS=20 xio1 + $ openstack volume type set --property QoS:maxIOPS=5000 xio1 diff --git a/doc/source/config-reference/block-storage/drivers/zadara-volume-driver.rst b/doc/source/config-reference/block-storage/drivers/zadara-volume-driver.rst new file mode 100644 index 00000000000..8c134c034c5 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/zadara-volume-driver.rst @@ -0,0 +1,80 @@ +================================= +Zadara Storage VPSA volume driver +================================= + +Zadara Storage, Virtual Private Storage Array (VPSA) is the first software +defined, Enterprise-Storage-as-a-Service. It is an elastic and private block +and file storage system which, provides enterprise-grade data protection and +data management storage services. + +The ``ZadaraVPSAISCSIDriver`` volume driver allows the Zadara Storage VPSA +to be used as a volume back end storage in OpenStack deployments. + +System requirements +~~~~~~~~~~~~~~~~~~~ + +To use Zadara Storage VPSA Volume Driver you will require: + +- Zadara Storage VPSA version 15.07 and above + +- iSCSI or iSER host interfaces + +Supported operations +~~~~~~~~~~~~~~~~~~~~~ + +- Create, delete, attach, and detach volumes +- Create, list, and delete volume snapshots +- Create a volume from a snapshot +- Copy an image to a volume +- Copy a volume to an image +- Clone a volume +- Extend a volume +- Migrate a volume with back end assistance + +Configuration +~~~~~~~~~~~~~ + +#. 
Create a VPSA pool(s) or make sure you have an existing pool(s) that will + be used for volume services. The VPSA pool(s) will be identified by its ID + (pool-xxxxxxxx). For further details, see the + `VPSA's user guide `_. + +#. Adjust the ``cinder.conf`` configuration file to define the volume driver + name along with a storage back end entry for each VPSA pool that will be + managed by the block storage service. + Each back end entry requires a unique section name, surrounded by square + brackets (or parentheses), followed by options in ``key=value`` format. + +.. note:: + + Restart cinder-volume service after modifying ``cinder.conf``. + + +Sample minimum back end configuration + +.. code-block:: ini + + [DEFAULT] + enabled_backends = vpsa + + [vpsa] + zadara_vpsa_host = 172.31.250.10 + zadara_vpsa_port = 80 + zadara_user = vpsauser + zadara_password = mysecretpassword + zadara_use_iser = false + zadara_vpsa_poolname = pool-00000001 + volume_driver = cinder.volume.drivers.zadara.ZadaraVPSAISCSIDriver + volume_backend_name = vpsa + +Driver-specific options +~~~~~~~~~~~~~~~~~~~~~~~ + +This section contains the configuration options that are specific +to the Zadara Storage VPSA driver. + +.. include:: ../../tables/cinder-zadara.rst + +.. note:: + + By design, all volumes created within the VPSA are thin provisioned. diff --git a/doc/source/config-reference/block-storage/drivers/zfssa-iscsi-driver.rst b/doc/source/config-reference/block-storage/drivers/zfssa-iscsi-driver.rst new file mode 100644 index 00000000000..71877521b8d --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/zfssa-iscsi-driver.rst @@ -0,0 +1,265 @@ +========================================= +Oracle ZFS Storage Appliance iSCSI driver +========================================= + +Oracle ZFS Storage Appliances (ZFSSAs) provide advanced software to +protect data, speed tuning and troubleshooting, and deliver high +performance and high availability. Through the Oracle ZFSSA iSCSI +Driver, OpenStack Block Storage can use an Oracle ZFSSA as a block +storage resource. The driver enables you to create iSCSI volumes that an +OpenStack Block Storage server can allocate to any virtual machine +running on a compute host. + +Requirements +~~~~~~~~~~~~ + +The Oracle ZFSSA iSCSI Driver, version ``1.0.0`` and later, supports +ZFSSA software release ``2013.1.2.0`` and later. + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +- Create, delete, attach, detach, manage, and unmanage volumes. +- Create and delete snapshots. +- Create volume from snapshot. +- Extend a volume. +- Attach and detach volumes. +- Get volume stats. +- Clone volumes. +- Migrate a volume. +- Local cache of a bootable volume. + +Configuration +~~~~~~~~~~~~~ + +#. Enable RESTful service on the ZFSSA Storage Appliance. + +#. Create a new user on the appliance with the following authorizations: + + .. code-block:: bash + + scope=stmf - allow_configure=true + scope=nas - allow_clone=true, allow_createProject=true, allow_createShare=true, allow_changeSpaceProps=true, allow_changeGeneralProps=true, allow_destroy=true, allow_rollback=true, allow_takeSnap=true + scope=schema - allow_modify=true + + You can create a role with authorizations as follows: + + .. 
code-block:: console + + zfssa:> configuration roles + zfssa:configuration roles> role OpenStackRole + zfssa:configuration roles OpenStackRole (uncommitted)> set description="OpenStack Cinder Driver" + zfssa:configuration roles OpenStackRole (uncommitted)> commit + zfssa:configuration roles> select OpenStackRole + zfssa:configuration roles OpenStackRole> authorizations create + zfssa:configuration roles OpenStackRole auth (uncommitted)> set scope=stmf + zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_configure=true + zfssa:configuration roles OpenStackRole auth (uncommitted)> commit + zfssa:configuration roles OpenStackRole> authorizations create + zfssa:configuration roles OpenStackRole auth (uncommitted)> set scope=nas + zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_clone=true + zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_createProject=true + zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_createShare=true + zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_changeSpaceProps=true + zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_changeGeneralProps=true + zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_destroy=true + zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_rollback=true + zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_takeSnap=true + zfssa:configuration roles OpenStackRole auth (uncommitted)> commit + + You can create a user with a specific role as follows: + + .. code-block:: console + + zfssa:> configuration users + zfssa:configuration users> user cinder + zfssa:configuration users cinder (uncommitted)> set fullname="OpenStack Cinder Driver" + zfssa:configuration users cinder (uncommitted)> set initial_password=12345 + zfssa:configuration users cinder (uncommitted)> commit + zfssa:configuration users> select cinder set roles=OpenStackRole + + .. note:: + + You can also run this `workflow + `__ + to automate the above tasks. + Refer to `Oracle documentation + `__ + on how to download, view, and execute a workflow. + +#. Ensure that the ZFSSA iSCSI service is online. If the ZFSSA iSCSI service is + not online, enable the service by using the BUI, CLI or REST API in the + appliance. + + .. code-block:: console + + zfssa:> configuration services iscsi + zfssa:configuration services iscsi> enable + zfssa:configuration services iscsi> show + Properties: + = online + ... + + Define the following required properties in the ``cinder.conf`` file: + + .. code-block:: ini + + volume_driver = cinder.volume.drivers.zfssa.zfssaiscsi.ZFSSAISCSIDriver + san_ip = myhost + san_login = username + san_password = password + zfssa_pool = mypool + zfssa_project = myproject + zfssa_initiator_group = default + zfssa_target_portal = w.x.y.z:3260 + zfssa_target_interfaces = e1000g0 + + Optionally, you can define additional properties. + + Target interfaces can be seen as follows in the CLI: + + .. code-block:: console + + zfssa:> configuration net interfaces + zfssa:configuration net interfaces> show + Interfaces: + INTERFACE STATE CLASS LINKS ADDRS LABEL + e1000g0 up ip e1000g0 1.10.20.30/24 Untitled Interface + ... + + .. note:: + + Do not use management interfaces for ``zfssa_target_interfaces``. + +#. 
Configure the cluster: + + If a cluster is used as the cinder storage resource, the following + verifications are required on your Oracle ZFS Storage Appliance: + + - Verify that both the pool and the network interface are of type + singleton and are not locked to the current controller. This + approach ensures that the pool and the interface used for data + always belong to the active controller, regardless of the current + state of the cluster. + + - Verify that the management IP, data IP and storage pool belong to + the same head. + + .. note:: + + Most configuration settings, including service properties, users, roles, + and iSCSI initiator definitions are replicated on both heads + automatically. If the driver modifies any of these settings, they will be + modified automatically on both heads. + + .. note:: + + A short service interruption occurs during failback or takeover, + but once the process is complete, the ``cinder-volume`` service should be able + to access the pool through the data IP. + +ZFSSA assisted volume migration +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The ZFSSA iSCSI driver supports storage assisted volume migration +starting in the Liberty release. This feature uses remote replication +feature on the ZFSSA. Volumes can be migrated between two backends +configured not only to the same ZFSSA but also between two separate +ZFSSAs altogether. + +The following conditions must be met in order to use ZFSSA assisted +volume migration: + +- Both the source and target backends are configured to ZFSSAs. + +- Remote replication service on the source and target appliance is enabled. + +- The ZFSSA to which the target backend is configured should be configured as a + target in the remote replication service of the ZFSSA configured to the + source backend. The remote replication target needs to be configured even + when the source and the destination for volume migration are the same ZFSSA. + Define ``zfssa_replication_ip`` in the ``cinder.conf`` file of the source + backend as the IP address used to register the target ZFSSA in the remote + replication service of the source ZFSSA. + +- The name of the iSCSI target group(``zfssa_target_group``) on the source and + the destination ZFSSA is the same. + +- The volume is not attached and is in available state. + +If any of the above conditions are not met, the driver will proceed with +generic volume migration. + +The ZFSSA user on the source and target appliances will need to have +additional role authorizations for assisted volume migration to work. In +scope nas, set ``allow_rrtarget`` and ``allow_rrsource`` to ``true``. + +.. code-block:: console + + zfssa:configuration roles OpenStackRole auth (uncommitted)> set scope=nas + zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_rrtarget=true + zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_rrsource=true + +ZFSSA local cache +~~~~~~~~~~~~~~~~~ + +The local cache feature enables ZFSSA drivers to serve the usage of bootable +volumes significantly better. With the feature, the first bootable volume +created from an image is cached, so that subsequent volumes can be created +directly from the cache, instead of having image data transferred over the +network multiple times. + +The following conditions must be met in order to use ZFSSA local cache feature: + +- A storage pool needs to be configured. + +- REST and iSCSI services need to be turned on. 
+ +- On an OpenStack controller, ``cinder.conf`` needs to contain necessary + properties used to configure and set up the ZFSSA iSCSI driver, including the + following new properties: + + - ``zfssa_enable_local_cache``: (True/False) To enable/disable the feature. + + - ``zfssa_cache_project``: The ZFSSA project name where cache volumes are + stored. + +Every cache volume has two additional properties stored as ZFSSA custom +schema. It is important that the schema are not altered outside of Block +Storage when the driver is in use: + +- ``image_id``: stores the image id as in Image service. + +- ``updated_at``: stores the most current timestamp when the image is updated + in Image service. + +Supported extra specs +~~~~~~~~~~~~~~~~~~~~~ + +Extra specs provide the OpenStack storage admin the flexibility to create +volumes with different characteristics from the ones specified in the +``cinder.conf`` file. The admin will specify the volume properties as keys +at volume type creation. When a user requests a volume of this volume type, +the volume will be created with the properties specified as extra specs. + +The following extra specs scoped keys are supported by the driver: + +- ``zfssa:volblocksize`` + +- ``zfssa:sparse`` + +- ``zfssa:compression`` + +- ``zfssa:logbias`` + +Volume types can be created using the :command:`openstack volume type create` +command. +Extra spec keys can be added using :command:`openstack volume type set` +command. + +Driver options +~~~~~~~~~~~~~~ + +The Oracle ZFSSA iSCSI Driver supports these options: + +.. include:: ../../tables/cinder-zfssa-iscsi.rst diff --git a/doc/source/config-reference/block-storage/drivers/zfssa-nfs-driver.rst b/doc/source/config-reference/block-storage/drivers/zfssa-nfs-driver.rst new file mode 100644 index 00000000000..90d9264288f --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/zfssa-nfs-driver.rst @@ -0,0 +1,297 @@ +======================================= +Oracle ZFS Storage Appliance NFS driver +======================================= + +The Oracle ZFS Storage Appliance (ZFSSA) NFS driver enables the ZFSSA to +be used seamlessly as a block storage resource. The driver enables you +to to create volumes on a ZFS share that is NFS mounted. + +Requirements +~~~~~~~~~~~~ + +Oracle ZFS Storage Appliance Software version ``2013.1.2.0`` or later. + +Supported operations +~~~~~~~~~~~~~~~~~~~~ + +- Create, delete, attach, detach, manage, and unmanage volumes. + +- Create and delete snapshots. + +- Create a volume from a snapshot. + +- Extend a volume. + +- Copy an image to a volume. + +- Copy a volume to an image. + +- Clone a volume. + +- Volume migration. + +- Local cache of a bootable volume + +Appliance configuration +~~~~~~~~~~~~~~~~~~~~~~~ + +Appliance configuration using the command-line interface (CLI) is +described below. To access the CLI, ensure SSH remote access is enabled, +which is the default. You can also perform configuration using the +browser user interface (BUI) or the RESTful API. Please refer to the +`Oracle ZFS Storage Appliance +documentation `__ +for details on how to configure the Oracle ZFS Storage Appliance using +the BUI, CLI, and RESTful API. + +#. Log in to the Oracle ZFS Storage Appliance CLI and enable the REST + service. REST service needs to stay online for this driver to function. + + .. code-block:: console + + zfssa:>configuration services rest enable + +#. Create a new storage pool on the appliance if you do not want to use an + existing one. 
This storage pool is named ``'mypool'`` for the sake of this + documentation. + +#. Create a new project and share in the storage pool (``mypool``) if you do + not want to use existing ones. This driver will create a project and share + by the names specified in the ``cinder.conf`` file, if a project and share + by that name does not already exist in the storage pool (``mypool``). + The project and share are named ``NFSProject`` and ``nfs_share``' in the + sample ``cinder.conf`` file as entries below. + +#. To perform driver operations, create a role with the following + authorizations: + + .. code-block:: bash + + scope=svc - allow_administer=true, allow_restart=true, allow_configure=true + scope=nas - pool=pool_name, project=project_name, share=share_name, allow_clone=true, allow_createProject=true, allow_createShare=true, allow_changeSpaceProps=true, allow_changeGeneralProps=true, allow_destroy=true, allow_rollback=true, allow_takeSnap=true, allow_changeAccessProps=true, allow_changeProtocolProps=true + + The following examples show how to create a role with authorizations. + + .. code-block:: console + + zfssa:> configuration roles + zfssa:configuration roles> role OpenStackRole + zfssa:configuration roles OpenStackRole (uncommitted)> set description="OpenStack NFS Cinder Driver" + zfssa:configuration roles OpenStackRole (uncommitted)> commit + zfssa:configuration roles> select OpenStackRole + zfssa:configuration roles OpenStackRole> authorizations create + zfssa:configuration roles OpenStackRole auth (uncommitted)> set scope=svc + zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_administer=true + zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_restart=true + zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_configure=true + zfssa:configuration roles OpenStackRole auth (uncommitted)> commit + + + .. code-block:: console + + zfssa:> configuration roles OpenStackRole authorizations> set scope=nas + + The following properties need to be set when the scope of this role needs to + be limited to a pool (``mypool``), a project (``NFSProject``) and a share + (``nfs_share``) created in the steps above. This will prevent the user + assigned to this role from being used to modify other pools, projects and + shares. + + .. code-block:: console + + zfssa:configuration roles OpenStackRole auth (uncommitted)> set pool=mypool + zfssa:configuration roles OpenStackRole auth (uncommitted)> set project=NFSProject + zfssa:configuration roles OpenStackRole auth (uncommitted)> set share=nfs_share + +#. The following properties only need to be set when a share and project has + not been created following the steps above and wish to allow the driver to + create them for you. + + .. code-block:: console + + zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_createProject=true + zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_createShare=true + + .. 
code-block:: console + + zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_clone=true + zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_changeSpaceProps=true + zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_destroy=true + zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_rollback=true + zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_takeSnap=true + zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_changeAccessProps=true + zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_changeProtocolProps=true + zfssa:configuration roles OpenStackRole auth (uncommitted)> commit + +#. Create a new user or modify an existing one and assign the new role to + the user. + + The following example shows how to create a new user and assign the new + role to the user. + + .. code-block:: console + + zfssa:> configuration users + zfssa:configuration users> user cinder + zfssa:configuration users cinder (uncommitted)> set fullname="OpenStack Cinder Driver" + zfssa:configuration users cinder (uncommitted)> set initial_password=12345 + zfssa:configuration users cinder (uncommitted)> commit + zfssa:configuration users> select cinder set roles=OpenStackRole + +#. Ensure that NFS and HTTP services on the appliance are online. Note the + HTTPS port number for later entry in the cinder service configuration file + (``cinder.conf``). This driver uses WebDAV over HTTPS to create snapshots + and clones of volumes, and therefore needs to have the HTTP service online. + + The following example illustrates enabling the services and showing their + properties. + + .. code-block:: console + + zfssa:> configuration services nfs + zfssa:configuration services nfs> enable + zfssa:configuration services nfs> show + Properties: + = online + ... + + .. code-block:: console + + zfssa:configuration services http> enable + zfssa:configuration services http> show + Properties: + = online + require_login = true + protocols = http/https + listen_port = 80 + https_port = 443 + + .. note:: + + You can also run this `workflow + `__ + to automate the above tasks. + Refer to `Oracle documentation + `__ + on how to download, view, and execute a workflow. + +#. Create a network interface to be used exclusively for data. An existing + network interface may also be used. The following example illustrates how to + make a network interface for data traffic flow only. + + .. note:: + + For better performance and reliability, it is recommended to configure a + separate subnet exclusively for data traffic in your cloud environment. + + .. code-block:: console + + zfssa:> configuration net interfaces + zfssa:configuration net interfaces> select igbx + zfssa:configuration net interfaces igbx> set admin=false + zfssa:configuration net interfaces igbx> commit + +#. For clustered controller systems, the following verification is required in + addition to the above steps. Skip this step if a standalone system is used. + + .. code-block:: console + + zfssa:> configuration cluster resources list + + Verify that both the newly created pool and the network interface are of + type ``singleton`` and are not locked to the current controller. This + approach ensures that the pool and the interface used for data always belong + to the active controller, regardless of the current state of the cluster. + Verify that both the network interface used for management and data, and the + storage pool belong to the same head. + + .. 
note:: + + There will be a short service interruption during failback/takeover, but + once the process is complete, the driver should be able to access the + ZFSSA for data as well as for management. + +Cinder service configuration +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +#. Define the following required properties in the ``cinder.conf`` + configuration file: + + .. code-block:: ini + + volume_driver = cinder.volume.drivers.zfssa.zfssanfs.ZFSSANFSDriver + san_ip = myhost + san_login = username + san_password = password + zfssa_data_ip = mydata + zfssa_nfs_pool = mypool + + .. note:: + + Management interface ``san_ip`` can be used instead of ``zfssa_data_ip``, + but it is not recommended. + +#. You can also define the following additional properties in the + ``cinder.conf`` configuration file: + + .. code:: ini + + zfssa_nfs_project = NFSProject + zfssa_nfs_share = nfs_share + zfssa_nfs_mount_options = + zfssa_nfs_share_compression = off + zfssa_nfs_share_logbias = latency + zfssa_https_port = 443 + + .. note:: + + The driver does not use the file specified in the ``nfs_shares_config`` + option. + +ZFSSA local cache +~~~~~~~~~~~~~~~~~ + +The local cache feature enables ZFSSA drivers to serve the usage of +bootable volumes significantly better. With the feature, the first +bootable volume created from an image is cached, so that subsequent +volumes can be created directly from the cache, instead of having image +data transferred over the network multiple times. + +The following conditions must be met in order to use ZFSSA local cache +feature: + +- A storage pool needs to be configured. + +- REST and NFS services need to be turned on. + +- On an OpenStack controller, ``cinder.conf`` needs to contain + necessary properties used to configure and set up the ZFSSA NFS + driver, including the following new properties: + + zfssa_enable_local_cache + (True/False) To enable/disable the feature. + + zfssa_cache_directory + The directory name inside zfssa_nfs_share where cache volumes + are stored. + +Every cache volume has two additional properties stored as WebDAV +properties. It is important that they are not altered outside of Block +Storage when the driver is in use: + +image_id + stores the image id as in Image service. + +updated_at + stores the most current timestamp when the image is + updated in Image service. + +Driver options +~~~~~~~~~~~~~~ + +The Oracle ZFS Storage Appliance NFS driver supports these options: + +.. include:: ../../tables/cinder-zfssa-nfs.rst + +This driver shares additional NFS configuration options with the generic +NFS driver. For a description of these, see :ref:`cinder-storage_nfs`. diff --git a/doc/source/config-reference/block-storage/drivers/zte-storage-driver.rst b/doc/source/config-reference/block-storage/drivers/zte-storage-driver.rst new file mode 100644 index 00000000000..898122a5f97 --- /dev/null +++ b/doc/source/config-reference/block-storage/drivers/zte-storage-driver.rst @@ -0,0 +1,158 @@ +================== +ZTE volume drivers +================== + +The ZTE volume drivers allow ZTE KS3200 or KU5200 arrays +to be used for Block Storage in OpenStack deployments. 
+
+System requirements
+~~~~~~~~~~~~~~~~~~~
+
+To use the ZTE drivers, the following prerequisites must be met:
+
+- ZTE KS3200 or KU5200 array with:
+
+  - iSCSI or FC interfaces
+  - 30B2 firmware or later
+
+- Network connectivity between the OpenStack host and the array
  management interfaces
+
+- HTTPS or HTTP must be enabled on the array
+
+Supported operations
+~~~~~~~~~~~~~~~~~~~~
+
+- Create, delete, attach, and detach volumes.
+- Create, list, and delete volume snapshots.
+- Create a volume from a snapshot.
+- Copy an image to a volume.
+- Copy a volume to an image.
+- Clone a volume.
+- Extend a volume.
+- Migrate a volume with back-end assistance.
+- Retype a volume.
+- Manage and unmanage a volume.
+
+Configuring the array
+~~~~~~~~~~~~~~~~~~~~~
+
+#. Verify that the array can be managed using an HTTPS connection. HTTP can
   also be used if ``zte_api_protocol=http`` is placed into the
   appropriate sections of the ``cinder.conf`` file.
+
+   Confirm that virtual pools A and B are present if you plan to use
   virtual pools for OpenStack storage.
+
+#. Edit the ``cinder.conf`` file to define a storage back-end entry for
   each storage pool on the array that will be managed by OpenStack. Each
   entry consists of a unique section name, surrounded by square brackets,
   followed by options specified in ``key=value`` format.
+
+   - The ``zte_backend_name`` value specifies the name of the storage
     pool on the array.
+
+   - The ``volume_backend_name`` option value can be a unique value, if
     you wish to be able to assign volumes to a specific storage pool on
     the array, or a name that is shared among multiple storage pools to
     let the volume scheduler choose where new volumes are allocated.
+
+   - The rest of the options are repeated for each storage pool in a
     given array: the appropriate cinder driver name; the IP address or
     host name of the array management interface; the username and password
     of an array user account with ``manage`` privileges; and the iSCSI IP
     addresses for the array if using the iSCSI transport protocol.
+
+   In the examples below, two back ends are defined, one for pool A and one
   for pool B, with a common ``volume_backend_name``, so that a single
   volume type definition can be used to allocate volumes from both pools.
+
+   **Example: iSCSI back-end entries**
+
+   .. code-block:: ini
+
+      [pool-a]
+      zte_backend_name = A
+      volume_backend_name = zte-array
+      volume_driver = cinder.volume.drivers.zte.zte_iscsi.ZTEISCSIDriver
+      san_ip = 10.1.2.3
+      san_login = manage
+      san_password = !manage
+      zte_iscsi_ips = 10.2.3.4,10.2.3.5
+
+      [pool-b]
+      zte_backend_name = B
+      volume_backend_name = zte-array
+      volume_driver = cinder.volume.drivers.zte.zte_iscsi.ZTEISCSIDriver
+      san_ip = 10.1.2.3
+      san_login = manage
+      san_password = !manage
+      zte_iscsi_ips = 10.2.3.4,10.2.3.5
+
+   **Example: Fibre Channel back-end entries**
+
+   .. code-block:: ini
+
+      [pool-a]
+      zte_backend_name = A
+      volume_backend_name = zte-array
+      volume_driver = cinder.volume.drivers.zte.zte_fc.ZTEFCDriver
+      san_ip = 10.1.2.3
+      san_login = manage
+      san_password = !manage
+
+      [pool-b]
+      zte_backend_name = B
+      volume_backend_name = zte-array
+      volume_driver = cinder.volume.drivers.zte.zte_fc.ZTEFCDriver
+      san_ip = 10.1.2.3
+      san_login = manage
+      san_password = !manage
+
+#. If HTTPS is not enabled in the array, include
   ``zte_api_protocol = http`` in each of the back-end definitions.
+
+#. If HTTPS is enabled, you can enable certificate verification with the
   option ``zte_verify_certificate=True``. 
You may also use the + ``zte_verify_certificate_path`` parameter to specify the path to a + ``CA_BUNDLE`` file containing CAs other than those in the default list. + +#. Modify the ``[DEFAULT]`` section of the ``cinder.conf`` file to add an + ``enabled_backends`` parameter specifying the back-end entries you added, + and a ``default_volume_type`` parameter specifying the name of a volume + type that you will create in the next step. + + **Example: [DEFAULT] section changes** + + .. code-block:: ini + + [DEFAULT] + # ... + enabled_backends = pool-a,pool-b + default_volume_type = zte + +#. Create a new volume type for each distinct ``volume_backend_name`` value + that you added to the ``cinder.conf`` file. The example below + assumes that the same ``volume_backend_name=zte-array`` + option was specified in all of the + entries, and specifies that the volume type ``zte`` can be used to + allocate volumes from any of them. + + **Example: Creating a volume type** + + .. code-block:: console + + $ openstack volume type create zte + $ openstack volume type set --property volume_backend_name=zte-array zte + +#. After modifying the ``cinder.conf`` file, + restart the ``cinder-volume`` service. + +Driver-specific options +~~~~~~~~~~~~~~~~~~~~~~~ + +The following table contains the configuration options that are specific +to the ZTE drivers. + +.. include:: ../../tables/cinder-zte.rst diff --git a/doc/source/config-reference/block-storage/fc-zoning.rst b/doc/source/config-reference/block-storage/fc-zoning.rst new file mode 100644 index 00000000000..28085e3675f --- /dev/null +++ b/doc/source/config-reference/block-storage/fc-zoning.rst @@ -0,0 +1,126 @@ + +.. _fc_zone_manager: + +========================== +Fibre Channel Zone Manager +========================== + +The Fibre Channel Zone Manager allows FC SAN Zone/Access control +management in conjunction with Fibre Channel block storage. The +configuration of Fibre Channel Zone Manager and various zone drivers are +described in this section. + +Configure Block Storage to use Fibre Channel Zone Manager +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +If Block Storage is configured to use a Fibre Channel volume driver that +supports Zone Manager, update ``cinder.conf`` to add the following +configuration options to enable Fibre Channel Zone Manager. + +Make the following changes in the ``/etc/cinder/cinder.conf`` file. + +.. include:: ../tables/cinder-zoning.rst + +To use different Fibre Channel Zone Drivers, use the parameters +described in this section. + +.. note:: + + When multi backend configuration is used, provide the + ``zoning_mode`` configuration option as part of the volume driver + configuration where ``volume_driver`` option is specified. + +.. note:: + + Default value of ``zoning_mode`` is ``None`` and this needs to be + changed to ``fabric`` to allow fabric zoning. + +.. note:: + + ``zoning_policy`` can be configured as ``initiator-target`` or + ``initiator`` + +Brocade Fibre Channel Zone Driver +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Brocade Fibre Channel Zone Driver performs zoning operations +through HTTP, HTTPS, or SSH. + +Set the following options in the ``cinder.conf`` configuration file. + +.. include:: ../tables/cinder-zoning_manager_brcd.rst + +Configure SAN fabric parameters in the form of fabric groups as +described in the example below: + +.. include:: ../tables/cinder-zoning_fabric_brcd.rst + +.. 
note::
+
+   Define a fabric group for each fabric using the fabric names used in
   the ``fc_fabric_names`` configuration option as the group name.
+
+.. note::
+
+   To define a fabric group for a switch which has Virtual Fabrics
   enabled, include the ``fc_virtual_fabric_id`` configuration option
   and the ``fc_southbound_protocol`` configuration option set to ``HTTP``
   or ``HTTPS`` in the fabric group. Zoning on a VF enabled fabric using
   the ``SSH`` southbound protocol is not supported.
+
+System requirements
+-------------------
+
+Brocade Fibre Channel Zone Driver requires firmware version FOS v6.4 or
higher.
+
+As a best practice for zone management, use a user account with the
``zoneadmin`` role. Users with the ``admin`` role (including the default
``admin`` user account) are limited to a maximum of two concurrent SSH
sessions.
+
+For information about how to manage Brocade Fibre Channel switches, see
the Brocade Fabric OS user documentation.
+
+Cisco Fibre Channel Zone Driver
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Cisco Fibre Channel Zone Driver automates the zoning operations through
SSH. Configure the Cisco Zone Driver, Cisco Southbound connector, FC SAN
lookup service, and fabric name.
+
+Set the following options in the ``cinder.conf`` configuration file.
+
+.. code-block:: ini
+
+   [fc-zone-manager]
+   zone_driver = cinder.zonemanager.drivers.cisco.cisco_fc_zone_driver.CiscoFCZoneDriver
+   fc_san_lookup_service = cinder.zonemanager.drivers.cisco.cisco_fc_san_lookup_service.CiscoFCSanLookupService
+   fc_fabric_names = CISCO_FABRIC_EXAMPLE
+   cisco_sb_connector = cinder.zonemanager.drivers.cisco.cisco_fc_zone_client_cli.CiscoFCZoneClientCLI
+
+.. include:: ../tables/cinder-zoning_manager_cisco.rst
+
+Configure SAN fabric parameters in the form of fabric groups as
described in the example below:
+
+.. include:: ../tables/cinder-zoning_fabric_cisco.rst
+
+.. note::
+
+   Define a fabric group for each fabric using the fabric names used in
   the ``fc_fabric_names`` configuration option as the group name.
+
+   The Cisco Fibre Channel Zone Driver supports basic and enhanced
   zoning modes. The zoning VSAN must exist with an active zone set name
   that is the same as the ``fc_fabric_names`` option.
+
+System requirements
+-------------------
+
+Cisco MDS 9000 Family Switches.
+
+Cisco MDS NX-OS Release 6.2(9) or later.
+
+For information about how to manage Cisco Fibre Channel switches, see
the Cisco MDS 9000 user documentation. diff --git a/doc/source/config-reference/block-storage/logs.rst b/doc/source/config-reference/block-storage/logs.rst new file mode 100644 index 00000000000..921ede3464b --- /dev/null +++ b/doc/source/config-reference/block-storage/logs.rst @@ -0,0 +1,28 @@ +===============================
+Log files used by Block Storage
+===============================
+
+The corresponding log file of each Block Storage service is stored in
the ``/var/log/cinder/`` directory of the host on which each service
runs.
+
+.. 
list-table:: **Log files used by Block Storage services** + :header-rows: 1 + :widths: 10 20 10 + + * - Log file + - Service/interface (for CentOS, Fedora, openSUSE, Red Hat Enterprise Linux, and SUSE Linux Enterprise) + - Service/interface (for Ubuntu and Debian) + * - api.log + - openstack-cinder-api + - cinder-api + * - cinder-manage.log + - cinder-manage + - cinder-manage + * - scheduler.log + - openstack-cinder-scheduler + - cinder-scheduler + * - volume.log + - openstack-cinder-volume + - cinder-volume + diff --git a/doc/source/config-reference/block-storage/nested-quota.rst b/doc/source/config-reference/block-storage/nested-quota.rst new file mode 100644 index 00000000000..9fcdabdaa4d --- /dev/null +++ b/doc/source/config-reference/block-storage/nested-quota.rst @@ -0,0 +1,165 @@ +============= +Nested quotas +============= + +Nested quota is a change in how OpenStack services (such as Block Storage and +Compute) handle their quota resources by being hierarchy-aware. The main +reason for this change is to fully appreciate the hierarchical multi-tenancy +concept, which was introduced in keystone in the Kilo release. + +Once you have a project hierarchy created in keystone, nested quotas let you +define how much of a project's quota you want to give to its subprojects. In +that way, hierarchical projects can have hierarchical quotas (also known as +nested quotas). + +Projects and subprojects have similar behaviors, but they differ from each +other when it comes to default quota values. The default quota value for +resources in a subproject is 0, so that when a subproject is created it will +not consume all of its parent's quota. + +In order to keep track of how much of each quota was allocated to a +subproject, a column ``allocated`` was added to the quotas table. This column +is updated after every delete and update quota operation. + +This example shows you how to use nested quotas. + +.. note:: + + Assume that you have created a project hierarchy in keystone, such as + follows: + + .. code-block:: console + + +-----------+ + | | + | A | + | / \ | + | B C | + | / | + | D | + +-----------+ + +Getting default quotas +~~~~~~~~~~~~~~~~~~~~~~ + +#. Get the quota for root projects. + + Use the :command:`openstack quota show` command and specify: + + - The ``PROJECT`` of the relevant project. In this case, the name of + project A. + + .. code-block:: console + + $ openstack quota show PROJECT + +----------------------+-------+ + | Field | Value | + +----------------------+-------+ + | ... | ... | + | backup_gigabytes | 1000 | + | backups | 10 | + | gigabytes | 1000 | + | per_volume_gigabytes | -1 | + | snapshots | 10 | + | volumes | 10 | + +----------------------+-------+ + + .. note:: + + This command returns the default values for resources. + This is because the quotas for this project were not explicitly set. + +#. Get the quota for subprojects. + + In this case, use the same :command:`openstack quota show` command and + specify: + + - The ``PROJECT`` of the relevant project. In this case the name of + project B, which is a child of A. + + .. code-block:: console + + $ openstack quota show PROJECT + +----------------------+-------+ + | Field | Value | + +----------------------+-------+ + | ... | ... | + | backup_gigabytes | 0 | + | backups | 0 | + | gigabytes | 0 | + | per_volume_gigabytes | 0 | + | snapshots | 0 | + | volumes | 0 | + +----------------------+-------+ + + .. note:: + + In this case, 0 was the value returned as the quota for all the + resources. 
This is because project B is a subproject of A, thus, + the default quota value is 0, so that it will not consume all the + quota of its parent project. + +Setting the quotas for subprojects +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Now that the projects were created, assume that the admin of project B wants +to use it. First of all, you need to set the quota limit of the project, +because as a subproject it does not have quotas allocated by default. + +In this example, when all of the parent project is allocated to its +subprojects the user will not be able to create more resources in the parent +project. + +#. Update the quota of B. + + Use the :command:`openstack quota set` command and specify: + + - The ``PROJECT`` of the relevant project. + In this case the name of project B. + + - The ``--volumes`` option, followed by the number to which you wish to + increase the volumes quota. + + .. code-block:: console + + $ openstack quota set --volumes 10 PROJECT + +----------------------+-------+ + | Property | Value | + +----------------------+-------+ + | ... | ... | + | backup_gigabytes | 0 | + | backups | 0 | + | gigabytes | 0 | + | per_volume_gigabytes | 0 | + | snapshots | 0 | + | volumes | 10 | + +----------------------+-------+ + + .. note:: + + The volumes resource quota is updated. + +#. Try to create a volume in project A. + + Use the :command:`openstack volume create` command and specify: + + - The ``SIZE`` of the volume that will be created; + + - The ``NAME`` of the volume. + + .. code-block:: console + + $ openstack volume create --size SIZE NAME + VolumeLimitExceeded: Maximum number of volumes allowed (10) exceeded for quota 'volumes'. (HTTP 413) (Request-ID: req-f6f7cc89-998e-4a82-803d-c73c8ee2016c) + + .. note:: + + As the entirety of project A's volumes quota has been assigned to + project B, it is treated as if all of the quota has been used. This + is true even when project B has not created any volumes. + +See `cinder nested quota spec +`_ +and `hierarchical multi-tenancy spec +`_ +for details. diff --git a/doc/source/config-reference/block-storage/samples/api-paste.ini.rst b/doc/source/config-reference/block-storage/samples/api-paste.ini.rst new file mode 100644 index 00000000000..77d20479b05 --- /dev/null +++ b/doc/source/config-reference/block-storage/samples/api-paste.ini.rst @@ -0,0 +1,10 @@ +============= +api-paste.ini +============= + +Use the ``api-paste.ini`` file to configure the Block Storage API +service. + +.. remote-code-block:: none + + https://git.openstack.org/cgit/openstack/cinder/plain/etc/cinder/api-paste.ini?h=stable/ocata diff --git a/doc/source/config-reference/block-storage/samples/cinder.conf.rst b/doc/source/config-reference/block-storage/samples/cinder.conf.rst new file mode 100644 index 00000000000..6791ab2ebe1 --- /dev/null +++ b/doc/source/config-reference/block-storage/samples/cinder.conf.rst @@ -0,0 +1,15 @@ +=========== +cinder.conf +=========== + +The ``cinder.conf`` file is installed in ``/etc/cinder`` by default. +When you manually install the Block Storage service, the options in the +``cinder.conf`` file are set to default values. + +The ``cinder.conf`` file contains most of the options needed to configure +the Block Storage service. You can generate the latest configuration file +by using the tox provided by the Block Storage service. Here is a sample +configuration file: + +.. 
literalinclude:: ../../samples/cinder.conf.sample + :language: ini diff --git a/doc/source/config-reference/block-storage/samples/index.rst b/doc/source/config-reference/block-storage/samples/index.rst new file mode 100644 index 00000000000..0b759114f2d --- /dev/null +++ b/doc/source/config-reference/block-storage/samples/index.rst @@ -0,0 +1,15 @@ +.. _block-storage-sample-configuration-file: + +================================================ +Block Storage service sample configuration files +================================================ + +All the files in this section can be found in ``/etc/cinder``. + +.. toctree:: + :maxdepth: 2 + + cinder.conf.rst + api-paste.ini.rst + policy.json.rst + rootwrap.conf.rst diff --git a/doc/source/config-reference/block-storage/samples/policy.json.rst b/doc/source/config-reference/block-storage/samples/policy.json.rst new file mode 100644 index 00000000000..bef8f0a8c98 --- /dev/null +++ b/doc/source/config-reference/block-storage/samples/policy.json.rst @@ -0,0 +1,10 @@ +=========== +policy.json +=========== + +The ``policy.json`` file defines additional access controls that apply +to the Block Storage service. + +.. remote-code-block:: none + + https://git.openstack.org/cgit/openstack/cinder/plain/etc/cinder/policy.json?h=stable/ocata diff --git a/doc/source/config-reference/block-storage/samples/rootwrap.conf.rst b/doc/source/config-reference/block-storage/samples/rootwrap.conf.rst new file mode 100644 index 00000000000..e819693cedb --- /dev/null +++ b/doc/source/config-reference/block-storage/samples/rootwrap.conf.rst @@ -0,0 +1,11 @@ +============= +rootwrap.conf +============= + +The ``rootwrap.conf`` file defines configuration values used by the +``rootwrap`` script when the Block Storage service must escalate its +privileges to those of the root user. + +.. remote-code-block:: ini + + https://git.openstack.org/cgit/openstack/cinder/plain/etc/cinder/rootwrap.conf?h=stable/ocata diff --git a/doc/source/config-reference/block-storage/schedulers.rst b/doc/source/config-reference/block-storage/schedulers.rst new file mode 100644 index 00000000000..31f11280936 --- /dev/null +++ b/doc/source/config-reference/block-storage/schedulers.rst @@ -0,0 +1,11 @@ +======================== +Block Storage schedulers +======================== + +Block Storage service uses the ``cinder-scheduler`` service +to determine how to dispatch block storage requests. + +For more information, see `Cinder Scheduler Filters +`_ +and `Cinder Scheduler Weights +`_. diff --git a/doc/source/config-reference/block-storage/volume-drivers.rst b/doc/source/config-reference/block-storage/volume-drivers.rst new file mode 100644 index 00000000000..1a787c1a792 --- /dev/null +++ b/doc/source/config-reference/block-storage/volume-drivers.rst @@ -0,0 +1,78 @@ +============== +Volume drivers +============== + +.. sort by the drivers by open source software +.. and the drivers for proprietary components + +.. 
toctree:: + :maxdepth: 1 + + drivers/ceph-rbd-volume-driver.rst + drivers/lvm-volume-driver.rst + drivers/nfs-volume-driver.rst + drivers/sheepdog-driver.rst + drivers/smbfs-volume-driver.rst + drivers/blockbridge-eps-driver.rst + drivers/cloudbyte-driver.rst + drivers/coho-data-driver.rst + drivers/coprhd-driver.rst + drivers/datera-volume-driver.rst + drivers/dell-emc-scaleio-driver.rst + drivers/dell-emc-unity-driver.rst + drivers/dell-equallogic-driver.rst + drivers/dell-storagecenter-driver.rst + drivers/dothill-driver.rst + drivers/emc-vmax-driver.rst + drivers/emc-vnx-driver.rst + drivers/emc-xtremio-driver.rst + drivers/falconstor-fss-driver.rst + drivers/fujitsu-eternus-dx-driver.rst + drivers/hds-hnas-driver.rst + drivers/hitachi-storage-volume-driver.rst + drivers/hpe-3par-driver.rst + drivers/hpe-lefthand-driver.rst + drivers/hp-msa-driver.rst + drivers/huawei-storage-driver.rst + drivers/ibm-gpfs-volume-driver.rst + drivers/ibm-storwize-svc-driver.rst + drivers/ibm-storage-volume-driver.rst + drivers/ibm-flashsystem-volume-driver.rst + drivers/infinidat-volume-driver.rst + drivers/infortrend-volume-driver.rst + drivers/itri-disco-driver.rst + drivers/kaminario-driver.rst + drivers/lenovo-driver.rst + drivers/nec-storage-m-series-driver.rst + drivers/netapp-volume-driver.rst + drivers/nimble-volume-driver.rst + drivers/nexentastor4-driver.rst + drivers/nexentastor5-driver.rst + drivers/nexentaedge-driver.rst + drivers/prophetstor-dpl-driver.rst + drivers/pure-storage-driver.rst + drivers/quobyte-driver.rst + drivers/scality-sofs-driver.rst + drivers/solidfire-volume-driver.rst + drivers/synology-dsm-driver.rst + drivers/tintri-volume-driver.rst + drivers/violin-v7000-driver.rst + drivers/vzstorage-driver.rst + drivers/vmware-vmdk-driver.rst + drivers/windows-iscsi-volume-driver.rst + drivers/xio-volume-driver.rst + drivers/zadara-volume-driver.rst + drivers/zfssa-iscsi-driver.rst + drivers/zfssa-nfs-driver.rst + drivers/zte-storage-driver.rst + +To use different volume drivers for the cinder-volume service, use the +parameters described in these sections. + +The volume drivers are included in the `Block Storage repository +`_. To set a volume +driver, use the ``volume_driver`` flag. The default is: + +.. code-block:: ini + + volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver diff --git a/doc/source/config-reference/block-storage/volume-encryption.rst b/doc/source/config-reference/block-storage/volume-encryption.rst new file mode 100644 index 00000000000..2eef5df7abc --- /dev/null +++ b/doc/source/config-reference/block-storage/volume-encryption.rst @@ -0,0 +1,213 @@ +============================================== +Volume encryption supported by the key manager +============================================== + +We recommend the Key management service (barbican) for storing +encryption keys used by the OpenStack volume encryption feature. It can +be enabled by updating ``cinder.conf`` and ``nova.conf``. + +Initial configuration +~~~~~~~~~~~~~~~~~~~~~ + +Configuration changes need to be made to any nodes running the +``cinder-api`` or ``nova-compute`` server. + +Steps to update ``cinder-api`` servers: + +#. Edit the ``/etc/cinder/cinder.conf`` file to use Key management service + as follows: + + * Look for the ``[key_manager]`` section. + + * Enter a new line directly below ``[key_manager]`` with the following: + + .. code-block:: ini + + api_class = castellan.key_manager.barbican_key_manager.BarbicanKeyManager + +#. Restart ``cinder-api``. 
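+
+   The exact restart command depends on the distribution and on how the
   service was installed. As an illustrative sketch only, on a systemd-based
   host the step might look like the following (the unit name is an
   assumption; it may be ``cinder-api`` on Debian and Ubuntu based systems):
+
+   .. code-block:: console
+
+      # systemctl restart openstack-cinder-api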
+ +Update ``nova-compute`` servers: + +#. Ensure the ``cryptsetup`` utility is installed, and install + the ``python-barbicanclient`` Python package. + +#. Set up the Key Manager service by editing ``/etc/nova/nova.conf``: + + .. code-block:: ini + + [key_manager] + api_class = castellan.key_manager.barbican_key_manager.BarbicanKeyManager + + .. note:: + + Use a '#' prefix to comment out the line in this section that + begins with 'fixed_key'. + +#. Restart ``nova-compute``. + + +Key management access control +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Special privileges can be assigned on behalf of an end user to allow +them to manage their own encryption keys, which are required when +creating the encrypted volumes. The Barbican `Default Policy +`_ +for access control specifies that only users with an ``admin`` or +``creator`` role can create keys. The policy is very flexible and +can be modified. + +To assign the ``creator`` role, the admin must know the user ID, +project ID, and creator role ID. See `Assign a role +`_ +for more information. An admin can list existing roles and associated +IDs using the ``openstack role list`` command. If the creator +role does not exist, the admin can `create the role +`_. + + +Create an encrypted volume type +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Block Storage volume type assignment provides scheduling to a specific +back-end, and can be used to specify actionable information for a +back-end storage device. + +This example creates a volume type called LUKS and provides +configuration information for the storage system to encrypt or decrypt +the volume. + +#. Source your admin credentials: + + .. code-block:: console + + $ . admin-openrc.sh + +#. Create the volume type, marking the volume type as encrypted and providing + the necessary details. Use ``--encryption-control-location`` to specify + where encryption is performed: ``front-end`` (default) or ``back-end``. + + .. code-block:: console + + $ openstack volume type create --encryption-provider nova.volume.encryptors.luks.LuksEncryptor \ + --encryption-cipher aes-xts-plain64 --encryption-key-size 256 --encryption-control-location front-end LUKS + + +-------------+----------------------------------------------------------------+ + | Field | Value | + +-------------+----------------------------------------------------------------+ + | description | None | + | encryption | cipher='aes-xts-plain64', control_location='front-end', | + | | encryption_id='8584c43f-1666-43d1-a348-45cfcef72898', | + | | key_size='256', | + | | provider='nova.volume.encryptors.luks.LuksEncryptor' | + | id | b9a8cff5-2f60-40d1-8562-d33f3bf18312 | + | is_public | True | + | name | LUKS | + +-------------+----------------------------------------------------------------+ + +The OpenStack dashboard (horizon) supports creating the encrypted +volume type as of the Kilo release. For instructions, see +`Create an encrypted volume type +`_. + +Create an encrypted volume +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Use the OpenStack dashboard (horizon), or :command:`openstack volume +create` command to create volumes just as you normally would. For an +encrypted volume, pass the ``--type LUKS`` flag, which specifies that the +volume type will be ``LUKS`` (Linux Unified Key Setup). If that argument is +left out, the default volume type, ``unencrypted``, is used. + +#. Source your admin credentials: + + .. code-block:: console + + $ . admin-openrc.sh + +#. Create an unencrypted 1 GB test volume: + + .. 
code-block:: console
+
+      $ openstack volume create --size 1 'unencrypted volume'
+
+#. Create an encrypted 1 GB test volume:
+
+   .. code-block:: console
+
+      $ openstack volume create --size 1 --type LUKS 'encrypted volume'
+
+Notice the encrypted parameter; it will show ``True`` or ``False``.
+The option ``volume_type`` is also shown for easy review.
+
+Non-admin users need the ``creator`` role to store secrets in Barbican
and to create encrypted volumes. As an administrator, you can give a user
the creator role in the following way:
+
+.. code-block:: console
+
+   $ openstack role add --project PROJECT --user USER creator
+
+For details, see the
+`Barbican Access Control page
+`_.
+
+.. note::
+
+   Some volume drivers do not set the ``encrypted`` flag. For those
   drivers, attaching encrypted volumes to a virtual guest will fail,
   because the OpenStack Compute service will not run the encryption
   providers.
+
+Testing volume encryption
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This is a simple test scenario to help validate your encryption. It
assumes an LVM-based Block Storage server.
+
+Perform these steps after completing the volume encryption setup and
creating the volume type for LUKS as described in the preceding
sections.
+
+#. Create a VM:
+
+   .. code-block:: console
+
+      $ openstack server create --image cirros-0.3.1-x86_64-disk --flavor m1.tiny TESTVM
+
+#. Create two volumes, one encrypted and one not encrypted, and then attach
   them to your VM:
+
+   .. code-block:: console
+
+      $ openstack volume create --size 1 'unencrypted volume'
+      $ openstack volume create --size 1 --type LUKS 'encrypted volume'
+      $ openstack volume list
+      $ openstack server add volume --device /dev/vdb TESTVM 'unencrypted volume'
+      $ openstack server add volume --device /dev/vdc TESTVM 'encrypted volume'
+
+#. On the VM, send some text to the newly attached volumes and synchronize
   them:
+
+   .. code-block:: console
+
+      # echo "Hello, world (unencrypted /dev/vdb)" >> /dev/vdb
+      # echo "Hello, world (encrypted /dev/vdc)" >> /dev/vdc
+      # sync && sleep 2
+      # sync && sleep 2
+
+#. On the system hosting cinder volume services, synchronize to flush the
   I/O cache, then test to see if your strings can be found:
+
+   .. code-block:: console
+
+      # sync && sleep 2
+      # sync && sleep 2
+      # strings /dev/stack-volumes/volume-* | grep "Hello"
+      Hello, world (unencrypted /dev/vdb)
+
+In the above example you see that the search returns the string
written to the unencrypted volume, but not the encrypted one. 
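+
+As an optional extra check, you can also confirm through the API that the
volume created with the ``LUKS`` type reports encryption. This is a sketch
that assumes the volume names used in the steps above:
+
+.. code-block:: console
+
+   $ openstack volume show 'encrypted volume' -c encrypted -f value
+   True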
diff --git a/doc/source/config-reference/figures/bb-cinder-fig1.png b/doc/source/config-reference/figures/bb-cinder-fig1.png new file mode 100644 index 00000000000..022d3652a17 Binary files /dev/null and b/doc/source/config-reference/figures/bb-cinder-fig1.png differ diff --git a/doc/source/config-reference/figures/ceph-architecture.png b/doc/source/config-reference/figures/ceph-architecture.png new file mode 100644 index 00000000000..ec408118507 Binary files /dev/null and b/doc/source/config-reference/figures/ceph-architecture.png differ diff --git a/doc/source/config-reference/figures/emc-enabler.png b/doc/source/config-reference/figures/emc-enabler.png new file mode 100644 index 00000000000..b969b817141 Binary files /dev/null and b/doc/source/config-reference/figures/emc-enabler.png differ diff --git a/doc/source/config-reference/figures/ibm-storage-nova-concept.png b/doc/source/config-reference/figures/ibm-storage-nova-concept.png new file mode 100644 index 00000000000..75e336d488d Binary files /dev/null and b/doc/source/config-reference/figures/ibm-storage-nova-concept.png differ diff --git a/doc/source/config-reference/tables/cinder-api.rst b/doc/source/config-reference/tables/cinder-api.rst new file mode 100644 index 00000000000..a05c4194545 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-api.rst @@ -0,0 +1,90 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-api: + +.. list-table:: Description of API configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``api_rate_limit`` = ``True`` + - (Boolean) Enables or disables rate limit of the API. + * - ``az_cache_duration`` = ``3600`` + - (Integer) Cache volume availability zones in memory for the provided duration in seconds + * - ``backend_host`` = ``None`` + - (String) Backend override of host value. + * - ``default_timeout`` = ``31536000`` + - (Integer) Default timeout for CLI operations in minutes. For example, LUN migration is a typical long running operation, which depends on the LUN size and the load of the array. An upper bound in the specific deployment can be set to avoid unnecessary long wait. By default, it is 365 days long. + * - ``enable_v1_api`` = ``False`` + - (Boolean) DEPRECATED: Deploy v1 of the Cinder API. + * - ``enable_v2_api`` = ``True`` + - (Boolean) DEPRECATED: Deploy v2 of the Cinder API. + * - ``enable_v3_api`` = ``True`` + - (Boolean) Deploy v3 of the Cinder API. + * - ``extra_capabilities`` = ``{}`` + - (String) User defined capabilities, a JSON formatted string specifying key/value pairs. The key/value pairs can be used by the CapabilitiesFilter to select between backends when requests specify volume types. For example, specifying a service level or the geographical location of a backend, then creating a volume type to allow the user to select by these different properties. + * - ``ignore_pool_full_threshold`` = ``False`` + - (Boolean) Force LUN creation even if the full threshold of pool is reached. By default, the value is False. 
+ * - ``management_ips`` = + - (String) List of Management IP addresses (separated by commas) + * - ``message_ttl`` = ``2592000`` + - (Integer) message minimum life in seconds. + * - ``osapi_max_limit`` = ``1000`` + - (Integer) The maximum number of items that a collection resource returns in a single response + * - ``osapi_volume_base_URL`` = ``None`` + - (String) Base URL that will be presented to users in links to the OpenStack Volume API + * - ``osapi_volume_ext_list`` = + - (List) Specify list of extensions to load when using osapi_volume_extension option with cinder.api.contrib.select_extensions + * - ``osapi_volume_extension`` = ``['cinder.api.contrib.standard_extensions']`` + - (Multi-valued) osapi volume extension to load + * - ``osapi_volume_listen`` = ``0.0.0.0`` + - (String) IP address on which OpenStack Volume API listens + * - ``osapi_volume_listen_port`` = ``8776`` + - (Port number) Port on which OpenStack Volume API listens + * - ``osapi_volume_use_ssl`` = ``False`` + - (Boolean) Wraps the socket in a SSL context if True is set. A certificate file and key file must be specified. + * - ``osapi_volume_workers`` = ``None`` + - (Integer) Number of workers for OpenStack Volume API service. The default is equal to the number of CPUs available. + * - ``per_volume_size_limit`` = ``-1`` + - (Integer) Max size allowed per volume, in gigabytes + * - ``public_endpoint`` = ``None`` + - (String) Public url to use for versions endpoint. The default is None, which will use the request's host_url attribute to populate the URL base. If Cinder is operating behind a proxy, you will want to change this to represent the proxy's URL. + * - ``query_volume_filters`` = ``name, status, metadata, availability_zone, bootable, group_id`` + - (List) Volume filter options which non-admin user could use to query volumes. Default values are: ['name', 'status', 'metadata', 'availability_zone' ,'bootable', 'group_id'] + * - ``transfer_api_class`` = ``cinder.transfer.api.API`` + - (String) The full class name of the volume transfer API class + * - ``volume_api_class`` = ``cinder.volume.api.API`` + - (String) The full class name of the volume API class to use + * - ``volume_name_prefix`` = ``openstack-`` + - (String) Prefix before volume name to differentiate DISCO volume created through openstack and the other ones + * - ``volume_name_template`` = ``volume-%s`` + - (String) Template string to be used to generate volume names + * - ``volume_number_multiplier`` = ``-1.0`` + - (Floating point) Multiplier used for weighing volume number. Negative numbers mean to spread vs stack. + * - ``volume_transfer_key_length`` = ``16`` + - (Integer) The number of characters in the autogenerated auth key. + * - ``volume_transfer_salt_length`` = ``8`` + - (Integer) The number of characters in the salt. + * - **[oslo_middleware]** + - + * - ``enable_proxy_headers_parsing`` = ``False`` + - (Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. + * - ``max_request_body_size`` = ``114688`` + - (Integer) The maximum body size for each request, in bytes. + * - ``secure_proxy_ssl_header`` = ``X-Forwarded-Proto`` + - (String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy. 
+ * - **[oslo_versionedobjects]** + - + * - ``fatal_exception_format_errors`` = ``False`` + - (Boolean) Make exception message format errors fatal diff --git a/doc/source/config-reference/tables/cinder-auth.rst b/doc/source/config-reference/tables/cinder-auth.rst new file mode 100644 index 00000000000..d5d6f5d9f69 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-auth.rst @@ -0,0 +1,22 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-auth: + +.. list-table:: Description of authorization configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``auth_strategy`` = ``keystone`` + - (String) The strategy to use for auth. Supports noauth or keystone. diff --git a/doc/source/config-reference/tables/cinder-backups.rst b/doc/source/config-reference/tables/cinder-backups.rst new file mode 100644 index 00000000000..75cc3d6e1fa --- /dev/null +++ b/doc/source/config-reference/tables/cinder-backups.rst @@ -0,0 +1,48 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-backups: + +.. list-table:: Description of backups configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``backup_api_class`` = ``cinder.backup.api.API`` + - (String) The full class name of the volume backup API class + * - ``backup_compression_algorithm`` = ``zlib`` + - (String) Compression algorithm (None to disable) + * - ``backup_driver`` = ``cinder.backup.drivers.swift`` + - (String) Driver to use for backups. + * - ``backup_manager`` = ``cinder.backup.manager.BackupManager`` + - (String) Full class name for the Manager for volume backup + * - ``backup_metadata_version`` = ``2`` + - (Integer) Backup metadata version to be used when backing up volume metadata. If this number is bumped, make sure the service doing the restore supports the new version. + * - ``backup_name_template`` = ``backup-%s`` + - (String) Template string to be used to generate backup names + * - ``backup_object_number_per_notification`` = ``10`` + - (Integer) The number of chunks or objects, for which one Ceilometer notification will be sent + * - ``backup_service_inithost_offload`` = ``True`` + - (Boolean) Offload pending backup delete during backup service startup. If false, the backup service will remain down until all pending backups are deleted. + * - ``backup_timer_interval`` = ``120`` + - (Integer) Interval, in seconds, between two progress notifications reporting the backup status + * - ``backup_use_same_host`` = ``False`` + - (Boolean) Backup services use same backend. 
+ * - ``backup_use_temp_snapshot`` = ``False`` + - (Boolean) If this is set to True, the backup_use_temp_snapshot path will be used during the backup. Otherwise, it will use backup_use_temp_volume path. + * - ``snapshot_check_timeout`` = ``3600`` + - (Integer) How long we check whether a snapshot is finished before we give up + * - ``snapshot_name_template`` = ``snapshot-%s`` + - (String) Template string to be used to generate snapshot names + * - ``snapshot_same_host`` = ``True`` + - (Boolean) Create volume from snapshot at the host where snapshot resides diff --git a/doc/source/config-reference/tables/cinder-backups_ceph.rst b/doc/source/config-reference/tables/cinder-backups_ceph.rst new file mode 100644 index 00000000000..c28f75cf572 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-backups_ceph.rst @@ -0,0 +1,34 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-backups_ceph: + +.. list-table:: Description of Ceph backup driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``backup_ceph_chunk_size`` = ``134217728`` + - (Integer) The chunk size, in bytes, that a backup is broken into before transfer to the Ceph object store. + * - ``backup_ceph_conf`` = ``/etc/ceph/ceph.conf`` + - (String) Ceph configuration file to use. + * - ``backup_ceph_pool`` = ``backups`` + - (String) The Ceph pool where volume backups are stored. + * - ``backup_ceph_stripe_count`` = ``0`` + - (Integer) RBD stripe count to use when creating a backup image. + * - ``backup_ceph_stripe_unit`` = ``0`` + - (Integer) RBD stripe unit to use when creating a backup image. + * - ``backup_ceph_user`` = ``cinder`` + - (String) The Ceph user to connect with. Default here is to use the same user as for Cinder volumes. If not using cephx this should be set to None. + * - ``restore_discard_excess_bytes`` = ``True`` + - (Boolean) If True, always discard excess bytes when restoring volumes i.e. pad with zeroes. diff --git a/doc/source/config-reference/tables/cinder-backups_gcs.rst b/doc/source/config-reference/tables/cinder-backups_gcs.rst new file mode 100644 index 00000000000..84ffbf00620 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-backups_gcs.rst @@ -0,0 +1,48 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-backups_gcs: + +.. list-table:: Description of GCS backup driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``backup_gcs_block_size`` = ``32768`` + - (Integer) The size in bytes that changes are tracked for incremental backups. backup_gcs_object_size has to be multiple of backup_gcs_block_size. 
+ * - ``backup_gcs_bucket`` = ``None`` + - (String) The GCS bucket to use. + * - ``backup_gcs_bucket_location`` = ``US`` + - (String) Location of GCS bucket. + * - ``backup_gcs_credential_file`` = ``None`` + - (String) Absolute path of GCS service account credential file. + * - ``backup_gcs_enable_progress_timer`` = ``True`` + - (Boolean) Enable or Disable the timer to send the periodic progress notifications to Ceilometer when backing up the volume to the GCS backend storage. The default value is True to enable the timer. + * - ``backup_gcs_num_retries`` = ``3`` + - (Integer) Number of times to retry. + * - ``backup_gcs_object_size`` = ``52428800`` + - (Integer) The size in bytes of GCS backup objects. + * - ``backup_gcs_project_id`` = ``None`` + - (String) Owner project id for GCS bucket. + * - ``backup_gcs_proxy_url`` = ``None`` + - (URI) URL for http proxy access. + * - ``backup_gcs_reader_chunk_size`` = ``2097152`` + - (Integer) GCS object will be downloaded in chunks of bytes. + * - ``backup_gcs_retry_error_codes`` = ``429`` + - (List) List of GCS error codes. + * - ``backup_gcs_storage_class`` = ``NEARLINE`` + - (String) Storage class of GCS bucket. + * - ``backup_gcs_user_agent`` = ``gcscinder`` + - (String) Http user-agent string for gcs api. + * - ``backup_gcs_writer_chunk_size`` = ``2097152`` + - (Integer) GCS object will be uploaded in chunks of bytes. Pass in a value of -1 if the file is to be uploaded as a single chunk. diff --git a/doc/source/config-reference/tables/cinder-backups_glusterfs.rst b/doc/source/config-reference/tables/cinder-backups_glusterfs.rst new file mode 100644 index 00000000000..dbae37b0983 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-backups_glusterfs.rst @@ -0,0 +1,24 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-backups_glusterfs: + +.. list-table:: Description of GlusterFS backup driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``glusterfs_backup_mount_point`` = ``$state_path/backup_mount`` + - (String) Base dir containing mount point for gluster share. + * - ``glusterfs_backup_share`` = ``None`` + - (String) GlusterFS share in : format. Eg: 1.2.3.4:backup_vol diff --git a/doc/source/config-reference/tables/cinder-backups_nfs.rst b/doc/source/config-reference/tables/cinder-backups_nfs.rst new file mode 100644 index 00000000000..53436a95f84 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-backups_nfs.rst @@ -0,0 +1,34 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-backups_nfs: + +.. 
list-table:: Description of NFS backup driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``backup_container`` = ``None`` + - (String) Custom directory to use for backups. + * - ``backup_enable_progress_timer`` = ``True`` + - (Boolean) Enable or Disable the timer to send the periodic progress notifications to Ceilometer when backing up the volume to the backend storage. The default value is True to enable the timer. + * - ``backup_file_size`` = ``1999994880`` + - (Integer) The maximum size in bytes of the files used to hold backups. If the volume being backed up exceeds this size, then it will be backed up into multiple files.backup_file_size must be a multiple of backup_sha_block_size_bytes. + * - ``backup_mount_options`` = ``None`` + - (String) Mount options passed to the NFS client. See NFS man page for details. + * - ``backup_mount_point_base`` = ``$state_path/backup_mount`` + - (String) Base dir containing mount point for NFS share. + * - ``backup_sha_block_size_bytes`` = ``32768`` + - (Integer) The size in bytes that changes are tracked for incremental backups. backup_file_size has to be multiple of backup_sha_block_size_bytes. + * - ``backup_share`` = ``None`` + - (String) NFS share in hostname:path, ipv4addr:path, or "[ipv6addr]:path" format. diff --git a/doc/source/config-reference/tables/cinder-backups_posix.rst b/doc/source/config-reference/tables/cinder-backups_posix.rst new file mode 100644 index 00000000000..c6113a7d3aa --- /dev/null +++ b/doc/source/config-reference/tables/cinder-backups_posix.rst @@ -0,0 +1,30 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-backups_posix: + +.. list-table:: Description of POSIX backup driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``backup_container`` = ``None`` + - (String) Custom directory to use for backups. + * - ``backup_enable_progress_timer`` = ``True`` + - (Boolean) Enable or Disable the timer to send the periodic progress notifications to Ceilometer when backing up the volume to the backend storage. The default value is True to enable the timer. + * - ``backup_file_size`` = ``1999994880`` + - (Integer) The maximum size in bytes of the files used to hold backups. If the volume being backed up exceeds this size, then it will be backed up into multiple files.backup_file_size must be a multiple of backup_sha_block_size_bytes. + * - ``backup_posix_path`` = ``$state_path/backup`` + - (String) Path specifying where to store backups. + * - ``backup_sha_block_size_bytes`` = ``32768`` + - (Integer) The size in bytes that changes are tracked for incremental backups. backup_file_size has to be multiple of backup_sha_block_size_bytes. diff --git a/doc/source/config-reference/tables/cinder-backups_swift.rst b/doc/source/config-reference/tables/cinder-backups_swift.rst new file mode 100644 index 00000000000..8439b4445db --- /dev/null +++ b/doc/source/config-reference/tables/cinder-backups_swift.rst @@ -0,0 +1,56 @@ +.. 
+ Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-backups_swift: + +.. list-table:: Description of Swift backup driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``backup_swift_auth`` = ``per_user`` + - (String) Swift authentication mechanism + * - ``backup_swift_auth_version`` = ``1`` + - (String) Swift authentication version. Specify "1" for auth 1.0, or "2" for auth 2.0 or "3" for auth 3.0 + * - ``backup_swift_block_size`` = ``32768`` + - (Integer) The size in bytes that changes are tracked for incremental backups. backup_swift_object_size has to be multiple of backup_swift_block_size. + * - ``backup_swift_ca_cert_file`` = ``None`` + - (String) Location of the CA certificate file to use for swift client requests. + * - ``backup_swift_container`` = ``volumebackups`` + - (String) The default Swift container to use + * - ``backup_swift_enable_progress_timer`` = ``True`` + - (Boolean) Enable or Disable the timer to send the periodic progress notifications to Ceilometer when backing up the volume to the Swift backend storage. The default value is True to enable the timer. + * - ``backup_swift_key`` = ``None`` + - (String) Swift key for authentication + * - ``backup_swift_object_size`` = ``52428800`` + - (Integer) The size in bytes of Swift backup objects + * - ``backup_swift_project`` = ``None`` + - (String) Swift project/account name. Required when connecting to an auth 3.0 system + * - ``backup_swift_project_domain`` = ``None`` + - (String) Swift project domain name. Required when connecting to an auth 3.0 system + * - ``backup_swift_retry_attempts`` = ``3`` + - (Integer) The number of retries to make for Swift operations + * - ``backup_swift_retry_backoff`` = ``2`` + - (Integer) The backoff time in seconds between Swift retries + * - ``backup_swift_tenant`` = ``None`` + - (String) Swift tenant/account name. Required when connecting to an auth 2.0 system + * - ``backup_swift_url`` = ``None`` + - (URI) The URL of the Swift endpoint + * - ``backup_swift_user`` = ``None`` + - (String) Swift user name + * - ``backup_swift_user_domain`` = ``None`` + - (String) Swift user domain name. Required when connecting to an auth 3.0 system + * - ``keystone_catalog_info`` = ``identity:Identity Service:publicURL`` + - (String) Info to match when looking for keystone in the service catalog. Format is: separated values of the form: :: - Only used if backup_swift_auth_url is unset + * - ``swift_catalog_info`` = ``object-store:swift:publicURL`` + - (String) Info to match when looking for swift in the service catalog. Format is: separated values of the form: :: - Only used if backup_swift_url is unset diff --git a/doc/source/config-reference/tables/cinder-backups_tsm.rst b/doc/source/config-reference/tables/cinder-backups_tsm.rst new file mode 100644 index 00000000000..a3cd05228f2 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-backups_tsm.rst @@ -0,0 +1,26 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. 
+ + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-backups_tsm: + +.. list-table:: Description of IBM Tivoli Storage Manager backup driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``backup_tsm_compression`` = ``True`` + - (Boolean) Enable or Disable compression for backups + * - ``backup_tsm_password`` = ``password`` + - (String) TSM password for the running username + * - ``backup_tsm_volume_prefix`` = ``backup`` + - (String) Volume prefix for the backup id when backing up to TSM diff --git a/doc/source/config-reference/tables/cinder-block-device.rst b/doc/source/config-reference/tables/cinder-block-device.rst new file mode 100644 index 00000000000..dcbf53d0295 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-block-device.rst @@ -0,0 +1,22 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-block-device: + +.. list-table:: Description of block device configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``available_devices`` = + - (List) List of all available devices diff --git a/doc/source/config-reference/tables/cinder-blockbridge.rst b/doc/source/config-reference/tables/cinder-blockbridge.rst new file mode 100644 index 00000000000..f828eab7683 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-blockbridge.rst @@ -0,0 +1,36 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-blockbridge: + +.. list-table:: Description of BlockBridge EPS volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``blockbridge_api_host`` = ``None`` + - (String) IP address/hostname of Blockbridge API. + * - ``blockbridge_api_port`` = ``None`` + - (Integer) Override HTTPS port to connect to Blockbridge API server. + * - ``blockbridge_auth_password`` = ``None`` + - (String) Blockbridge API password (for auth scheme 'password') + * - ``blockbridge_auth_scheme`` = ``token`` + - (String) Blockbridge API authentication scheme (token or password) + * - ``blockbridge_auth_token`` = ``None`` + - (String) Blockbridge API token (for auth scheme 'token') + * - ``blockbridge_auth_user`` = ``None`` + - (String) Blockbridge API user (for auth scheme 'password') + * - ``blockbridge_default_pool`` = ``None`` + - (String) Default pool name if unspecified. 
+ * - ``blockbridge_pools`` = ``{'OpenStack': '+openstack'}`` + - (Dict) Defines the set of exposed pools and their associated backend query strings diff --git a/doc/source/config-reference/tables/cinder-cloudbyte.rst b/doc/source/config-reference/tables/cinder-cloudbyte.rst new file mode 100644 index 00000000000..74d7e87a6c9 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-cloudbyte.rst @@ -0,0 +1,44 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-cloudbyte: + +.. list-table:: Description of CloudByte volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``cb_account_name`` = ``None`` + - (String) CloudByte storage specific account name. This maps to a project name in OpenStack. + * - ``cb_add_qosgroup`` = ``{'latency': '15', 'iops': '10', 'graceallowed': 'false', 'iopscontrol': 'true', 'memlimit': '0', 'throughput': '0', 'tpcontrol': 'false', 'networkspeed': '0'}`` + - (Dict) These values will be used for CloudByte storage's addQos API call. + * - ``cb_apikey`` = ``None`` + - (String) Driver will use this API key to authenticate against the CloudByte storage's management interface. + * - ``cb_auth_group`` = ``None`` + - (String) This corresponds to the discovery authentication group in CloudByte storage. Chap users are added to this group. Driver uses the first user found for this group. Default value is None. + * - ``cb_confirm_volume_create_retries`` = ``3`` + - (Integer) Will confirm a successful volume creation in CloudByte storage by making this many number of attempts. + * - ``cb_confirm_volume_create_retry_interval`` = ``5`` + - (Integer) A retry value in seconds. Will be used by the driver to check if volume creation was successful in CloudByte storage. + * - ``cb_confirm_volume_delete_retries`` = ``3`` + - (Integer) Will confirm a successful volume deletion in CloudByte storage by making this many number of attempts. + * - ``cb_confirm_volume_delete_retry_interval`` = ``5`` + - (Integer) A retry value in seconds. Will be used by the driver to check if volume deletion was successful in CloudByte storage. + * - ``cb_create_volume`` = ``{'compression': 'off', 'deduplication': 'off', 'blocklength': '512B', 'sync': 'always', 'protocoltype': 'ISCSI', 'recordsize': '16k'}`` + - (Dict) These values will be used for CloudByte storage's createVolume API call. + * - ``cb_tsm_name`` = ``None`` + - (String) This corresponds to the name of Tenant Storage Machine (TSM) in CloudByte storage. A volume will be created in this TSM. + * - ``cb_update_file_system`` = ``compression, sync, noofcopies, readonly`` + - (List) These values will be used for CloudByte storage's updateFileSystem API call. + * - ``cb_update_qos_group`` = ``iops, latency, graceallowed`` + - (List) These values will be used for CloudByte storage's updateQosGroup API call. 
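
The snippet below is illustrative only and is not part of the generated table above; every value is a placeholder. Assuming the CloudByte back end itself is enabled as described in the driver documentation, a few of the options listed above might be combined in ``cinder.conf`` like this:

.. code-block:: ini

    [DEFAULT]
    # Placeholder credentials and placement for the CloudByte management API
    cb_account_name = openstack-project
    cb_apikey = REPLACE_WITH_API_KEY
    cb_tsm_name = TSM-OPENSTACK
    # Number of attempts, and interval in seconds between attempts, used to
    # confirm that a volume creation completed on the CloudByte storage
    cb_confirm_volume_create_retries = 3
    cb_confirm_volume_create_retry_interval = 5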
diff --git a/doc/source/config-reference/tables/cinder-coho.rst b/doc/source/config-reference/tables/cinder-coho.rst new file mode 100644 index 00000000000..d15da09d11f --- /dev/null +++ b/doc/source/config-reference/tables/cinder-coho.rst @@ -0,0 +1,22 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-coho: + +.. list-table:: Description of Coho volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``coho_rpc_port`` = ``2049`` + - (Integer) RPC port to connect to Coho Data MicroArray diff --git a/doc/source/config-reference/tables/cinder-common.rst b/doc/source/config-reference/tables/cinder-common.rst new file mode 100644 index 00000000000..6f77c02f3aa --- /dev/null +++ b/doc/source/config-reference/tables/cinder-common.rst @@ -0,0 +1,162 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-common: + +.. list-table:: Description of common configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``allow_availability_zone_fallback`` = ``False`` + - (Boolean) If the requested Cinder availability zone is unavailable, fall back to the value of default_availability_zone, then storage_availability_zone, instead of failing. + * - ``chap`` = ``disabled`` + - (String) CHAP authentication mode, effective only for iscsi (disabled|enabled) + * - ``chap_password`` = + - (String) Password for specified CHAP account name. + * - ``chap_username`` = + - (String) CHAP user name. + * - ``chiscsi_conf`` = ``/etc/chelsio-iscsi/chiscsi.conf`` + - (String) Chiscsi (CXT) global defaults configuration file + * - ``cinder_internal_tenant_project_id`` = ``None`` + - (String) ID of the project which will be used as the Cinder internal tenant. + * - ``cinder_internal_tenant_user_id`` = ``None`` + - (String) ID of the user to be used in volume operations as the Cinder internal tenant. + * - ``cluster`` = ``None`` + - (String) Name of this cluster. Used to group volume hosts that share the same backend configurations to work in HA Active-Active mode. Active-Active is not yet supported. + * - ``compute_api_class`` = ``cinder.compute.nova.API`` + - (String) The full class name of the compute API class to use + * - ``connection_type`` = ``iscsi`` + - (String) Connection type to the IBM Storage Array + * - ``consistencygroup_api_class`` = ``cinder.consistencygroup.api.API`` + - (String) The full class name of the consistencygroup API class + * - ``default_availability_zone`` = ``None`` + - (String) Default availability zone for new volumes. If not set, the storage_availability_zone option value is used as the default for new volumes. 
+ * - ``default_group_type`` = ``None`` + - (String) Default group type to use + * - ``default_volume_type`` = ``None`` + - (String) Default volume type to use + * - ``driver_client_cert`` = ``None`` + - (String) The path to the client certificate for verification, if the driver supports it. + * - ``driver_client_cert_key`` = ``None`` + - (String) The path to the client certificate key for verification, if the driver supports it. + * - ``driver_data_namespace`` = ``None`` + - (String) Namespace for driver private data values to be saved in. + * - ``driver_ssl_cert_path`` = ``None`` + - (String) Can be used to specify a non default path to a CA_BUNDLE file or directory with certificates of trusted CAs, which will be used to validate the backend + * - ``driver_ssl_cert_verify`` = ``False`` + - (Boolean) If set to True the http client will validate the SSL certificate of the backend endpoint. + * - ``enable_force_upload`` = ``False`` + - (Boolean) Enables the Force option on upload_to_image. This enables running upload_volume on in-use volumes for backends that support it. + * - ``enable_new_services`` = ``True`` + - (Boolean) Services to be added to the available pool on create + * - ``enable_unsupported_driver`` = ``False`` + - (Boolean) Set this to True when you want to allow an unsupported driver to start. Drivers that haven't maintained a working CI system and testing are marked as unsupported until CI is working again. This also marks a driver as deprecated and may be removed in the next release. + * - ``end_time`` = ``None`` + - (String) If this option is specified then the end time specified is used instead of the end time of the last completed audit period. + * - ``enforce_multipath_for_image_xfer`` = ``False`` + - (Boolean) If this is set to True, attachment of volumes for image transfer will be aborted when multipathd is not running. Otherwise, it will fallback to single path. + * - ``executor_thread_pool_size`` = ``64`` + - (Integer) Size of executor thread pool. + * - ``fatal_exception_format_errors`` = ``False`` + - (Boolean) Make exception message format errors fatal. + * - ``group_api_class`` = ``cinder.group.api.API`` + - (String) The full class name of the group API class + * - ``host`` = ``localhost`` + - (String) Name of this node. This can be an opaque identifier. It is not necessarily a host name, FQDN, or IP address. + * - ``iet_conf`` = ``/etc/iet/ietd.conf`` + - (String) IET configuration file + * - ``iscsi_secondary_ip_addresses`` = + - (List) The list of secondary IP addresses of the iSCSI daemon + * - ``max_over_subscription_ratio`` = ``20.0`` + - (Floating point) Float representation of the over subscription ratio when thin provisioning is involved. Default ratio is 20.0, meaning provisioned capacity can be 20 times of the total physical capacity. If the ratio is 10.5, it means provisioned capacity can be 10.5 times of the total physical capacity. A ratio of 1.0 means provisioned capacity cannot exceed the total physical capacity. The ratio has to be a minimum of 1.0. 
+ * - ``monkey_patch`` = ``False`` + - (Boolean) Enable monkey patching + * - ``monkey_patch_modules`` = + - (List) List of modules/decorators to monkey patch + * - ``my_ip`` = ``10.0.0.1`` + - (String) IP address of this host + * - ``no_snapshot_gb_quota`` = ``False`` + - (Boolean) Whether snapshots count against gigabyte quota + * - ``num_shell_tries`` = ``3`` + - (Integer) Number of times to attempt to run flakey shell commands + * - ``os_privileged_user_auth_url`` = ``None`` + - (URI) Auth URL associated with the OpenStack privileged account. + * - ``os_privileged_user_name`` = ``None`` + - (String) OpenStack privileged account username. Used for requests to other services (such as Nova) that require an account with special rights. + * - ``os_privileged_user_password`` = ``None`` + - (String) Password associated with the OpenStack privileged account. + * - ``os_privileged_user_tenant`` = ``None`` + - (String) Tenant name associated with the OpenStack privileged account. + * - ``periodic_fuzzy_delay`` = ``60`` + - (Integer) Range, in seconds, to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0) + * - ``periodic_interval`` = ``60`` + - (Integer) Interval, in seconds, between running periodic tasks + * - ``replication_device`` = ``None`` + - (Unknown) Multi opt of dictionaries to represent a replication target device. This option may be specified multiple times in a single config section to specify multiple replication target devices. Each entry takes the standard dict config form: replication_device = target_device_id:,key1:value1,key2:value2... + * - ``report_discard_supported`` = ``False`` + - (Boolean) Report to clients of Cinder that the backend supports discard (aka. trim/unmap). This will not actually change the behavior of the backend or the client directly, it will only notify that it can be used. + * - ``report_interval`` = ``10`` + - (Integer) Interval, in seconds, between nodes reporting state to datastore + * - ``reserved_percentage`` = ``0`` + - (Integer) The percentage of backend capacity is reserved + * - ``rootwrap_config`` = ``/etc/cinder/rootwrap.conf`` + - (String) Path to the rootwrap configuration file to use for running commands as root + * - ``send_actions`` = ``False`` + - (Boolean) Send the volume and snapshot create and delete notifications generated in the specified period. + * - ``service_down_time`` = ``60`` + - (Integer) Maximum time since last check-in for a service to be considered up + * - ``ssh_hosts_key_file`` = ``$state_path/ssh_known_hosts`` + - (String) File containing SSH host keys for the systems with which Cinder needs to communicate. OPTIONAL: Default=$state_path/ssh_known_hosts + * - ``start_time`` = ``None`` + - (String) If this option is specified then the start time specified is used instead of the start time of the last completed audit period. + * - ``state_path`` = ``/var/lib/cinder`` + - (String) Top-level directory for maintaining cinder's state + * - ``storage_availability_zone`` = ``nova`` + - (String) Availability zone of this node + * - ``storage_protocol`` = ``iscsi`` + - (String) Protocol for transferring data between host and storage back-end. + * - ``strict_ssh_host_key_policy`` = ``False`` + - (Boolean) Option to enable strict host key checking. When set to "True" Cinder will only connect to systems with a host key present in the configured "ssh_hosts_key_file". When set to "False" the host key will be saved upon first connection and used for subsequent connections. 
Default=False + * - ``suppress_requests_ssl_warnings`` = ``False`` + - (Boolean) Suppress requests library SSL certificate warnings. + * - ``tcp_keepalive`` = ``True`` + - (Boolean) Sets the value of TCP_KEEPALIVE (True/False) for each server socket. + * - ``tcp_keepalive_count`` = ``None`` + - (Integer) Sets the value of TCP_KEEPCNT for each server socket. Not supported on OS X. + * - ``tcp_keepalive_interval`` = ``None`` + - (Integer) Sets the value of TCP_KEEPINTVL in seconds for each server socket. Not supported on OS X. + * - ``until_refresh`` = ``0`` + - (Integer) Count of reservations until usage is refreshed + * - ``use_chap_auth`` = ``False`` + - (Boolean) Option to enable/disable CHAP authentication for targets. + * - ``use_forwarded_for`` = ``False`` + - (Boolean) Treat X-Forwarded-For as the canonical remote address. Only enable this if you have a sanitizing proxy. + * - **[healthcheck]** + - + * - ``backends`` = + - (List) Additional backends that can perform health checks and report that information back as part of a request. + * - ``detailed`` = ``False`` + - (Boolean) Show more detailed information as part of the response + * - ``disable_by_file_path`` = ``None`` + - (String) Check the presence of a file to determine if an application is running on a port. Used by DisableByFileHealthcheck plugin. + * - ``disable_by_file_paths`` = + - (List) Check the presence of a file based on a port to determine if an application is running on a port. Expects a "port:path" list of strings. Used by DisableByFilesPortsHealthcheck plugin. + * - ``path`` = ``/healthcheck`` + - (String) DEPRECATED: The path to respond to healthcheck requests on. + * - **[key_manager]** + - + * - ``api_class`` = ``castellan.key_manager.barbican_key_manager.BarbicanKeyManager`` + - (String) The full class name of the key manager API class + * - ``fixed_key`` = ``None`` + - (String) Fixed key returned by key manager, specified in hex diff --git a/doc/source/config-reference/tables/cinder-compute.rst b/doc/source/config-reference/tables/cinder-compute.rst new file mode 100644 index 00000000000..99e411fc1b4 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-compute.rst @@ -0,0 +1,34 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-compute: + +.. list-table:: Description of Compute configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``nova_api_insecure`` = ``False`` + - (Boolean) Allow to perform insecure SSL requests to nova + * - ``nova_ca_certificates_file`` = ``None`` + - (String) Location of ca certificates file to use for nova client requests. + * - ``nova_catalog_admin_info`` = ``compute:Compute Service:adminURL`` + - (String) Same as nova_catalog_info, but for admin endpoint. + * - ``nova_catalog_info`` = ``compute:Compute Service:publicURL`` + - (String) Match this value when searching for nova in the service catalog. Format is: separated values of the form: <service_type>:<service_name>:<endpoint_type> + * - ``nova_endpoint_admin_template`` = ``None`` + - (String) Same as nova_endpoint_template, but for admin endpoint. 
+ * - ``nova_endpoint_template`` = ``None`` + - (String) Override service catalog lookup with template for nova endpoint e.g. http://localhost:8774/v2/%(project_id)s + * - ``os_region_name`` = ``None`` + - (String) Region name of this node diff --git a/doc/source/config-reference/tables/cinder-coordination.rst b/doc/source/config-reference/tables/cinder-coordination.rst new file mode 100644 index 00000000000..0b5eb8d3884 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-coordination.rst @@ -0,0 +1,28 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-coordination: + +.. list-table:: Description of Coordination configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[coordination]** + - + * - ``backend_url`` = ``file://$state_path`` + - (String) The backend URL to use for distributed coordination. + * - ``heartbeat`` = ``1.0`` + - (Floating point) Number of seconds between heartbeats for distributed coordination. + * - ``initial_reconnect_backoff`` = ``0.1`` + - (Floating point) Initial number of seconds to wait after failed reconnection. + * - ``max_reconnect_backoff`` = ``60.0`` + - (Floating point) Maximum number of seconds between sequential reconnection retries. diff --git a/doc/source/config-reference/tables/cinder-coprhd.rst b/doc/source/config-reference/tables/cinder-coprhd.rst new file mode 100644 index 00000000000..d1f6ab9ca31 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-coprhd.rst @@ -0,0 +1,48 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-coprhd: + +.. 
list-table:: Description of Coprhd volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``coprhd_emulate_snapshot`` = ``False`` + - (Boolean) True | False to indicate if the storage array in CoprHD is VMAX or VPLEX + * - ``coprhd_hostname`` = ``None`` + - (String) Hostname for the CoprHD Instance + * - ``coprhd_password`` = ``None`` + - (String) Password for accessing the CoprHD Instance + * - ``coprhd_port`` = ``4443`` + - (Port number) Port for the CoprHD Instance + * - ``coprhd_project`` = ``None`` + - (String) Project to utilize within the CoprHD Instance + * - ``coprhd_scaleio_rest_gateway_host`` = ``None`` + - (String) Rest Gateway IP or FQDN for Scaleio + * - ``coprhd_scaleio_rest_gateway_port`` = ``4984`` + - (Port number) Rest Gateway Port for Scaleio + * - ``coprhd_scaleio_rest_server_password`` = ``None`` + - (String) Rest Gateway Password + * - ``coprhd_scaleio_rest_server_username`` = ``None`` + - (String) Username for Rest Gateway + * - ``coprhd_tenant`` = ``None`` + - (String) Tenant to utilize within the CoprHD Instance + * - ``coprhd_username`` = ``None`` + - (String) Username for accessing the CoprHD Instance + * - ``coprhd_varray`` = ``None`` + - (String) Virtual Array to utilize within the CoprHD Instance + * - ``scaleio_server_certificate_path`` = ``None`` + - (String) Server certificate path + * - ``scaleio_verify_server_certificate`` = ``False`` + - (Boolean) verify server certificate diff --git a/doc/source/config-reference/tables/cinder-datera.rst b/doc/source/config-reference/tables/cinder-datera.rst new file mode 100644 index 00000000000..5e5f034397a --- /dev/null +++ b/doc/source/config-reference/tables/cinder-datera.rst @@ -0,0 +1,34 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-datera: + +.. list-table:: Description of Datera volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``datera_503_interval`` = ``5`` + - (Integer) Interval between 503 retries + * - ``datera_503_timeout`` = ``120`` + - (Integer) Timeout for HTTP 503 retry messages + * - ``datera_api_port`` = ``7717`` + - (String) Datera API port. + * - ``datera_api_version`` = ``2`` + - (String) DEPRECATED: Datera API version. 
+ * - ``datera_debug`` = ``False`` + - (Boolean) True to set function arg and return logging + * - ``datera_debug_replica_count_override`` = ``False`` + - (Boolean) ONLY FOR DEBUG/TESTING PURPOSES True to set replica_count to 1 + * - ``datera_tenant_id`` = ``None`` + - (String) If set to 'Map' --> OpenStack project ID will be mapped implicitly to Datera tenant ID If set to 'None' --> Datera tenant ID will not be used during volume provisioning If set to anything else --> Datera tenant ID will be the provided value diff --git a/doc/source/config-reference/tables/cinder-debug.rst b/doc/source/config-reference/tables/cinder-debug.rst new file mode 100644 index 00000000000..97ff1c9e14b --- /dev/null +++ b/doc/source/config-reference/tables/cinder-debug.rst @@ -0,0 +1,22 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-debug: + +.. list-table:: Description of logging configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``trace_flags`` = ``None`` + - (List) List of options that control which trace info is written to the DEBUG log level to assist developers. Valid values are method and api. diff --git a/doc/source/config-reference/tables/cinder-dell_emc_unity.rst b/doc/source/config-reference/tables/cinder-dell_emc_unity.rst new file mode 100644 index 00000000000..14495561d9c --- /dev/null +++ b/doc/source/config-reference/tables/cinder-dell_emc_unity.rst @@ -0,0 +1,24 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-dell_emc_unity: + +.. list-table:: Description of Dell EMC Unity volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``unity_io_ports`` = ``None`` + - (List) A comma-separated list of iSCSI or FC ports to be used. Each port can be Unix-style glob expressions. + * - ``unity_storage_pool_names`` = ``None`` + - (List) A comma-separated list of storage pool names to be used. diff --git a/doc/source/config-reference/tables/cinder-dellsc.rst b/doc/source/config-reference/tables/cinder-dellsc.rst new file mode 100644 index 00000000000..9d9ca732b05 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-dellsc.rst @@ -0,0 +1,42 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-dellsc: + +.. 
list-table:: Description of Dell Storage Center volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``dell_sc_api_port`` = ``3033`` + - (Port number) Dell API port + * - ``dell_sc_server_folder`` = ``openstack`` + - (String) Name of the server folder to use on the Storage Center + * - ``dell_sc_ssn`` = ``64702`` + - (Integer) Storage Center System Serial Number + * - ``dell_sc_verify_cert`` = ``False`` + - (Boolean) Enable HTTPS SC certificate verification + * - ``dell_sc_volume_folder`` = ``openstack`` + - (String) Name of the volume folder to use on the Storage Center + * - ``dell_server_os`` = ``Red Hat Linux 6.x`` + - (String) Server OS type to use when creating a new server on the Storage Center. + * - ``excluded_domain_ip`` = ``None`` + - (Unknown) Domain IP to be excluded from iSCSI returns. + * - ``secondary_san_ip`` = + - (String) IP address of secondary DSM controller + * - ``secondary_san_login`` = ``Admin`` + - (String) Secondary DSM user name + * - ``secondary_san_password`` = + - (String) Secondary DSM user password name + * - ``secondary_sc_api_port`` = ``3033`` + - (Port number) Secondary Dell API port diff --git a/doc/source/config-reference/tables/cinder-disco.rst b/doc/source/config-reference/tables/cinder-disco.rst new file mode 100644 index 00000000000..c4c0bb2317b --- /dev/null +++ b/doc/source/config-reference/tables/cinder-disco.rst @@ -0,0 +1,40 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-disco: + +.. list-table:: Description of Disco volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``choice_client`` = ``None`` + - (String) Use soap client or rest client for communicating with DISCO. Possible values are "soap" or "rest". + * - ``clone_check_timeout`` = ``3600`` + - (Integer) How long we check whether a clone is finished before we give up + * - ``clone_volume_timeout`` = ``680`` + - (Integer) Create clone volume timeout. + * - ``disco_client`` = ``127.0.0.1`` + - (IP) The IP of DMS client socket server + * - ``disco_client_port`` = ``9898`` + - (Port number) The port to connect DMS client socket server + * - ``disco_src_api_port`` = ``8080`` + - (Port number) The port of DISCO source API + * - ``disco_wsdl_path`` = ``/etc/cinder/DISCOService.wsdl`` + - (String) DEPRECATED: Path to the wsdl file to communicate with DISCO request manager + * - ``rest_ip`` = ``None`` + - (IP) The IP address of the REST server + * - ``restore_check_timeout`` = ``3600`` + - (Integer) How long we check whether a restore is finished before we give up + * - ``retry_interval`` = ``1`` + - (Integer) How long we wait before retrying to get an item detail diff --git a/doc/source/config-reference/tables/cinder-dothill.rst b/doc/source/config-reference/tables/cinder-dothill.rst new file mode 100644 index 00000000000..b796abc9106 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-dothill.rst @@ -0,0 +1,32 @@ +.. + Warning: Do not edit this file. 
It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-dothill: + +.. list-table:: Description of Dot Hill volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``dothill_api_protocol`` = ``https`` + - (String) DotHill API interface protocol. + * - ``dothill_backend_name`` = ``A`` + - (String) Pool or Vdisk name to use for volume creation. + * - ``dothill_backend_type`` = ``virtual`` + - (String) linear (for Vdisk) or virtual (for Pool). + * - ``dothill_iscsi_ips`` = + - (List) List of comma-separated target iSCSI IP addresses. + * - ``dothill_verify_certificate`` = ``False`` + - (Boolean) Whether to verify DotHill array SSL certificate. + * - ``dothill_verify_certificate_path`` = ``None`` + - (String) DotHill array SSL certificate path. diff --git a/doc/source/config-reference/tables/cinder-drbd.rst b/doc/source/config-reference/tables/cinder-drbd.rst new file mode 100644 index 00000000000..221044da0cc --- /dev/null +++ b/doc/source/config-reference/tables/cinder-drbd.rst @@ -0,0 +1,42 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-drbd: + +.. list-table:: Description of DRBD configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``drbdmanage_devs_on_controller`` = ``True`` + - (Boolean) If set, the c-vol node will receive a useable /dev/drbdX device, even if the actual data is stored on other nodes only. This is useful for debugging, maintenance, and to be able to do the iSCSI export from the c-vol node. + * - ``drbdmanage_disk_options`` = ``{"c-min-rate": "4M"}`` + - (String) Disk options to set on new resources. See http://www.drbd.org/en/doc/users-guide-90/re-drbdconf for all the details. + * - ``drbdmanage_net_options`` = ``{"connect-int": "4", "allow-two-primaries": "yes", "ko-count": "30", "max-buffers": "20000", "ping-timeout": "100"}`` + - (String) Net options to set on new resources. See http://www.drbd.org/en/doc/users-guide-90/re-drbdconf for all the details. + * - ``drbdmanage_redundancy`` = ``1`` + - (Integer) Number of nodes that should replicate the data. + * - ``drbdmanage_resize_plugin`` = ``drbdmanage.plugins.plugins.wait_for.WaitForVolumeSize`` + - (String) Volume resize completion wait plugin. + * - ``drbdmanage_resize_policy`` = ``{"timeout": "60"}`` + - (String) Volume resize completion wait policy. + * - ``drbdmanage_resource_options`` = ``{"auto-promote-timeout": "300"}`` + - (String) Resource options to set on new resources. See http://www.drbd.org/en/doc/users-guide-90/re-drbdconf for all the details. 
+ * - ``drbdmanage_resource_plugin`` = ``drbdmanage.plugins.plugins.wait_for.WaitForResource`` + - (String) Resource deployment completion wait plugin. + * - ``drbdmanage_resource_policy`` = ``{"ratio": "0.51", "timeout": "60"}`` + - (String) Resource deployment completion wait policy. + * - ``drbdmanage_snapshot_plugin`` = ``drbdmanage.plugins.plugins.wait_for.WaitForSnapshot`` + - (String) Snapshot completion wait plugin. + * - ``drbdmanage_snapshot_policy`` = ``{"count": "1", "timeout": "60"}`` + - (String) Snapshot completion wait policy. diff --git a/doc/source/config-reference/tables/cinder-emc.rst b/doc/source/config-reference/tables/cinder-emc.rst new file mode 100644 index 00000000000..3bbd385e70c --- /dev/null +++ b/doc/source/config-reference/tables/cinder-emc.rst @@ -0,0 +1,48 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-emc: + +.. list-table:: Description of EMC configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``check_max_pool_luns_threshold`` = ``False`` + - (Boolean) Report free_capacity_gb as 0 when the limit to maximum number of pool LUNs is reached. By default, the value is False. + * - ``cinder_emc_config_file`` = ``/etc/cinder/cinder_emc_config.xml`` + - (String) Use this file for cinder emc plugin config data + * - ``destroy_empty_storage_group`` = ``False`` + - (Boolean) To destroy storage group when the last LUN is removed from it. By default, the value is False. + * - ``force_delete_lun_in_storagegroup`` = ``False`` + - (Boolean) Delete a LUN even if it is in Storage Groups. By default, the value is False. + * - ``initiator_auto_deregistration`` = ``False`` + - (Boolean) Automatically deregister initiators after the related storage group is destroyed. By default, the value is False. + * - ``initiator_auto_registration`` = ``False`` + - (Boolean) Automatically register initiators. By default, the value is False. + * - ``io_port_list`` = ``None`` + - (List) Comma separated iSCSI or FC ports to be used in Nova or Cinder. + * - ``iscsi_initiators`` = ``None`` + - (String) Mapping between hostname and its iSCSI initiator IP addresses. + * - ``max_luns_per_storage_group`` = ``255`` + - (Integer) Default max number of LUNs in a storage group. By default, the value is 255. + * - ``multi_pool_support`` = ``False`` + - (String) Use this value to specify multi-pool support for VMAX3 + * - ``naviseccli_path`` = ``None`` + - (String) Naviseccli Path. + * - ``storage_vnx_authentication_type`` = ``global`` + - (String) VNX authentication scope type. By default, the value is global. + * - ``storage_vnx_pool_names`` = ``None`` + - (List) Comma-separated list of storage pool names to be used. + * - ``storage_vnx_security_file_dir`` = ``None`` + - (String) Directory path that contains the VNX security file. Make sure the security file is generated first. 
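
As a minimal sketch only (the CLI path and pool names below are placeholders, not recommendations, and the rest of the back-end definition is assumed to be configured per the EMC VNX driver documentation), a VNX back end could set some of the options above in ``cinder.conf`` as follows:

.. code-block:: ini

    [DEFAULT]
    # Placeholder path to the Navisphere CLI binary used to manage the array
    naviseccli_path = /opt/Navisphere/bin/naviseccli
    # Storage pools the driver is allowed to provision volumes from
    storage_vnx_pool_names = Pool_01,Pool_02
    storage_vnx_authentication_type = global
    # Let the driver register initiators instead of registering them manually
    initiator_auto_registration = True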
diff --git a/doc/source/config-reference/tables/cinder-emc_sio.rst b/doc/source/config-reference/tables/cinder-emc_sio.rst new file mode 100644 index 00000000000..b38554e63cc --- /dev/null +++ b/doc/source/config-reference/tables/cinder-emc_sio.rst @@ -0,0 +1,42 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-emc_sio: + +.. list-table:: Description of EMC SIO volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``sio_max_over_subscription_ratio`` = ``10.0`` + - (Floating point) max_over_subscription_ratio setting for the ScaleIO driver. This replaces the general max_over_subscription_ratio which has no effect in this driver. Maximum value allowed for ScaleIO is 10.0. + * - ``sio_protection_domain_id`` = ``None`` + - (String) Protection Domain ID. + * - ``sio_protection_domain_name`` = ``None`` + - (String) Protection Domain name. + * - ``sio_rest_server_port`` = ``443`` + - (String) REST server port. + * - ``sio_round_volume_capacity`` = ``True`` + - (Boolean) Round up volume capacity. + * - ``sio_server_certificate_path`` = ``None`` + - (String) Server certificate path. + * - ``sio_storage_pool_id`` = ``None`` + - (String) Storage Pool ID. + * - ``sio_storage_pool_name`` = ``None`` + - (String) Storage Pool name. + * - ``sio_storage_pools`` = ``None`` + - (String) Storage Pools. + * - ``sio_unmap_volume_before_deletion`` = ``False`` + - (Boolean) Unmap volume before deletion. + * - ``sio_verify_server_certificate`` = ``False`` + - (Boolean) Verify server certificate. diff --git a/doc/source/config-reference/tables/cinder-emc_xtremio.rst b/doc/source/config-reference/tables/cinder-emc_xtremio.rst new file mode 100644 index 00000000000..651a97160d5 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-emc_xtremio.rst @@ -0,0 +1,28 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-emc_xtremio: + +.. 
list-table:: Description of EMC XtremIO volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``xtremio_array_busy_retry_count`` = ``5`` + - (Integer) Number of retries in case array is busy + * - ``xtremio_array_busy_retry_interval`` = ``5`` + - (Integer) Interval between retries in case array is busy + * - ``xtremio_cluster_name`` = + - (String) XMS cluster id in multi-cluster environment + * - ``xtremio_volumes_per_glance_cache`` = ``100`` + - (Integer) Number of volumes created from each cached glance image diff --git a/doc/source/config-reference/tables/cinder-eqlx.rst b/doc/source/config-reference/tables/cinder-eqlx.rst new file mode 100644 index 00000000000..21d71fa4d8f --- /dev/null +++ b/doc/source/config-reference/tables/cinder-eqlx.rst @@ -0,0 +1,26 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-eqlx: + +.. list-table:: Description of Dell EqualLogic volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``eqlx_cli_max_retries`` = ``5`` + - (Integer) Maximum retry count for reconnection. Default is 5. + * - ``eqlx_group_name`` = ``group-0`` + - (String) Group name to use for creating volumes. Defaults to "group-0". + * - ``eqlx_pool`` = ``default`` + - (String) Pool in which volumes will be created. Defaults to "default". diff --git a/doc/source/config-reference/tables/cinder-eternus.rst b/doc/source/config-reference/tables/cinder-eternus.rst new file mode 100644 index 00000000000..6c3faeb3791 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-eternus.rst @@ -0,0 +1,22 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-eternus: + +.. list-table:: Description of Eternus volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``cinder_eternus_config_file`` = ``/etc/cinder/cinder_fujitsu_eternus_dx.xml`` + - (String) config file for cinder eternus_dx volume driver diff --git a/doc/source/config-reference/tables/cinder-falconstor.rst b/doc/source/config-reference/tables/cinder-falconstor.rst new file mode 100644 index 00000000000..d131ef89709 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-falconstor.rst @@ -0,0 +1,30 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. 
+ + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-falconstor: + +.. list-table:: Description of Falconstor volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``additional_retry_list`` = + - (String) FSS additional retry list, separate by ; + * - ``fss_debug`` = ``False`` + - (Boolean) Enable HTTP debugging to FSS + * - ``fss_pools`` = ``{}`` + - (Dict) FSS pool ID list in which FalconStor volumes are stored. If you have only one pool, use ``A:``. You can also have up to two storage pools, P for primary and O for all supporting devices. The usage is ``P:,O:`` + * - ``fss_san_secondary_ip`` = + - (String) Specifies FSS secondary management IP to be used if san_ip is invalid or becomes inaccessible. + * - ``san_thin_provision`` = + - (Boolean) Enable FSS thin provision. diff --git a/doc/source/config-reference/tables/cinder-flashsystem.rst b/doc/source/config-reference/tables/cinder-flashsystem.rst new file mode 100644 index 00000000000..ad6c141e409 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-flashsystem.rst @@ -0,0 +1,28 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-flashsystem: + +.. list-table:: Description of IBM FlashSystem volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``flashsystem_connection_protocol`` = ``FC`` + - (String) Connection protocol should be FC. (Default is FC.) + * - ``flashsystem_iscsi_portid`` = ``0`` + - (Integer) Default iSCSI Port ID of FlashSystem. (Default port is 0.) + * - ``flashsystem_multihostmap_enabled`` = ``True`` + - (Boolean) Allows vdisk to multi host mapping. (Default is True) + * - ``flashsystem_multipath_enabled`` = ``False`` + - (Boolean) DEPRECATED: This option no longer has any affect. It is deprecated and will be removed in the next release. diff --git a/doc/source/config-reference/tables/cinder-fusionio.rst b/doc/source/config-reference/tables/cinder-fusionio.rst new file mode 100644 index 00000000000..e1e7229e13e --- /dev/null +++ b/doc/source/config-reference/tables/cinder-fusionio.rst @@ -0,0 +1,30 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-fusionio: + +.. 
list-table:: Description of Fusion-io driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``dsware_isthin`` = ``False`` + - (Boolean) The flag of thin storage allocation. + * - ``dsware_manager`` = + - (String) Fusionstorage manager ip addr for cinder-volume. + * - ``fusionstorageagent`` = + - (String) Fusionstorage agent ip addr range. + * - ``pool_id_filter`` = + - (List) Pool id permit to use. + * - ``pool_type`` = ``default`` + - (String) Pool type, like sata-2copy. diff --git a/doc/source/config-reference/tables/cinder-hgst.rst b/doc/source/config-reference/tables/cinder-hgst.rst new file mode 100644 index 00000000000..49ff3288b70 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-hgst.rst @@ -0,0 +1,32 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-hgst: + +.. list-table:: Description of HGST volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``hgst_net`` = ``Net 1 (IPv4)`` + - (String) Space network name to use for data transfer + * - ``hgst_redundancy`` = ``0`` + - (String) Should spaces be redundantly stored (1/0) + * - ``hgst_space_group`` = ``disk`` + - (String) Group to own created spaces + * - ``hgst_space_mode`` = ``0600`` + - (String) UNIX mode for created spaces + * - ``hgst_space_user`` = ``root`` + - (String) User to own created spaces + * - ``hgst_storage_servers`` = ``os:gbd0`` + - (String) Comma separated list of Space storage servers:devices. ex: os1_stor:gbd0,os2_stor:gbd0 diff --git a/doc/source/config-reference/tables/cinder-hitachi-hbsd.rst b/doc/source/config-reference/tables/cinder-hitachi-hbsd.rst new file mode 100644 index 00000000000..544f5184367 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-hitachi-hbsd.rst @@ -0,0 +1,64 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-hitachi-hbsd: + +.. 
list-table:: Description of Hitachi storage volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``hitachi_add_chap_user`` = ``False`` + - (Boolean) Add CHAP user + * - ``hitachi_async_copy_check_interval`` = ``10`` + - (Integer) Interval to check copy asynchronously + * - ``hitachi_auth_method`` = ``None`` + - (String) iSCSI authentication method + * - ``hitachi_auth_password`` = ``HBSD-CHAP-password`` + - (String) iSCSI authentication password + * - ``hitachi_auth_user`` = ``HBSD-CHAP-user`` + - (String) iSCSI authentication username + * - ``hitachi_copy_check_interval`` = ``3`` + - (Integer) Interval to check copy + * - ``hitachi_copy_speed`` = ``3`` + - (Integer) Copy speed of storage system + * - ``hitachi_default_copy_method`` = ``FULL`` + - (String) Default copy method of storage system + * - ``hitachi_group_range`` = ``None`` + - (String) Range of group number + * - ``hitachi_group_request`` = ``False`` + - (Boolean) Request for creating HostGroup or iSCSI Target + * - ``hitachi_horcm_add_conf`` = ``True`` + - (Boolean) Add to HORCM configuration + * - ``hitachi_horcm_numbers`` = ``200,201`` + - (String) Instance numbers for HORCM + * - ``hitachi_horcm_password`` = ``None`` + - (String) Password of storage system for HORCM + * - ``hitachi_horcm_resource_lock_timeout`` = ``600`` + - (Integer) Timeout until a resource lock is released, in seconds. The value must be between 0 and 7200. + * - ``hitachi_horcm_user`` = ``None`` + - (String) Username of storage system for HORCM + * - ``hitachi_ldev_range`` = ``None`` + - (String) Range of logical device of storage system + * - ``hitachi_pool_id`` = ``None`` + - (Integer) Pool ID of storage system + * - ``hitachi_serial_number`` = ``None`` + - (String) Serial number of storage system + * - ``hitachi_target_ports`` = ``None`` + - (String) Control port names for HostGroup or iSCSI Target + * - ``hitachi_thin_pool_id`` = ``None`` + - (Integer) Thin pool ID of storage system + * - ``hitachi_unit_name`` = ``None`` + - (String) Name of an array unit + * - ``hitachi_zoning_request`` = ``False`` + - (Boolean) Request for FC Zone creating HostGroup diff --git a/doc/source/config-reference/tables/cinder-hitachi-hnas.rst b/doc/source/config-reference/tables/cinder-hitachi-hnas.rst new file mode 100644 index 00000000000..30d173e4044 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-hitachi-hnas.rst @@ -0,0 +1,64 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-hitachi-hnas: + +.. list-table:: Description of Hitachi HNAS iSCSI and NFS driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``hds_hnas_iscsi_config_file`` = ``/opt/hds/hnas/cinder_iscsi_conf.xml`` + - (String) DEPRECATED: Legacy configuration file for HNAS iSCSI Cinder plugin. 
This is not needed if you fill all configuration on cinder.conf + * - ``hds_hnas_nfs_config_file`` = ``/opt/hds/hnas/cinder_nfs_conf.xml`` + - (String) DEPRECATED: Legacy configuration file for HNAS NFS Cinder plugin. This is not needed if you fill all configuration on cinder.conf + * - ``hnas_chap_enabled`` = ``True`` + - (Boolean) Whether the chap authentication is enabled in the iSCSI target or not. + * - ``hnas_cluster_admin_ip0`` = ``None`` + - (String) The IP of the HNAS cluster admin. Required only for HNAS multi-cluster setups. + * - ``hnas_mgmt_ip0`` = ``None`` + - (IP) Management IP address of HNAS. This can be any IP in the admin address on HNAS or the SMU IP. + * - ``hnas_password`` = ``None`` + - (String) HNAS password. + * - ``hnas_ssc_cmd`` = ``ssc`` + - (String) Command to communicate to HNAS. + * - ``hnas_ssh_port`` = ``22`` + - (Port number) Port to be used for SSH authentication. + * - ``hnas_ssh_private_key`` = ``None`` + - (String) Path to the SSH private key used to authenticate in HNAS SMU. + * - ``hnas_svc0_hdp`` = ``None`` + - (String) Service 0 HDP + * - ``hnas_svc0_iscsi_ip`` = ``None`` + - (IP) Service 0 iSCSI IP + * - ``hnas_svc0_pool_name`` = ``None`` + - (String) Service 0 pool name + * - ``hnas_svc1_hdp`` = ``None`` + - (String) Service 1 HDP + * - ``hnas_svc1_iscsi_ip`` = ``None`` + - (IP) Service 1 iSCSI IP + * - ``hnas_svc1_pool_name`` = ``None`` + - (String) Service 1 pool name + * - ``hnas_svc2_hdp`` = ``None`` + - (String) Service 2 HDP + * - ``hnas_svc2_iscsi_ip`` = ``None`` + - (IP) Service 2 iSCSI IP + * - ``hnas_svc2_pool_name`` = ``None`` + - (String) Service 2 pool name + * - ``hnas_svc3_hdp`` = ``None`` + - (String) Service 3 HDP + * - ``hnas_svc3_iscsi_ip`` = ``None`` + - (IP) Service 3 iSCSI IP + * - ``hnas_svc3_pool_name`` = ``None`` + - (String) Service 3 pool name: + * - ``hnas_username`` = ``None`` + - (String) HNAS username. diff --git a/doc/source/config-reference/tables/cinder-hitachi-vsp.rst b/doc/source/config-reference/tables/cinder-hitachi-vsp.rst new file mode 100644 index 00000000000..648a79129c3 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-hitachi-vsp.rst @@ -0,0 +1,60 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-hitachi-vsp: + +.. list-table:: Description of HORCM interface module for Hitachi VSP driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``vsp_async_copy_check_interval`` = ``10`` + - (Integer) Interval in seconds at which volume pair synchronization status is checked when volume pairs are deleted. + * - ``vsp_auth_password`` = ``None`` + - (String) Password corresponding to vsp_auth_user. + * - ``vsp_auth_user`` = ``None`` + - (String) Name of the user used for CHAP authentication performed in communication between hosts and iSCSI targets on the storage ports. + * - ``vsp_compute_target_ports`` = ``None`` + - (List) IDs of the storage ports used to attach volumes to compute nodes. To specify multiple ports, connect them by commas (e.g. CL1-A,CL2-A). 
+ * - ``vsp_copy_check_interval`` = ``3`` + - (Integer) Interval in seconds at which volume pair synchronization status is checked when volume pairs are created. + * - ``vsp_copy_speed`` = ``3`` + - (Integer) Speed at which data is copied by Shadow Image. 1 or 2 indicates low speed, 3 indicates middle speed, and a value between 4 and 15 indicates high speed. + * - ``vsp_default_copy_method`` = ``FULL`` + - (String) Method of volume copy. FULL indicates full data copy by Shadow Image and THIN indicates differential data copy by Thin Image. + * - ``vsp_group_request`` = ``False`` + - (Boolean) If True, the driver will create host groups or iSCSI targets on storage ports as needed. + * - ``vsp_horcm_add_conf`` = ``True`` + - (Boolean) If True, the driver will create or update the Command Control Interface configuration file as needed. + * - ``vsp_horcm_numbers`` = ``200, 201`` + - (List) Command Control Interface instance numbers in the format of 'xxx,yyy'. The second one is for Shadow Image operation and the first one is for other purposes. + * - ``vsp_horcm_pair_target_ports`` = ``None`` + - (List) IDs of the storage ports used to copy volumes by Shadow Image or Thin Image. To specify multiple ports, connect them by commas (e.g. CL1-A,CL2-A). + * - ``vsp_horcm_password`` = ``None`` + - (String) Password corresponding to vsp_horcm_user. + * - ``vsp_horcm_user`` = ``None`` + - (String) Name of the user on the storage system. + * - ``vsp_ldev_range`` = ``None`` + - (String) Range of the LDEV numbers in the format of 'xxxx-yyyy' that can be used by the driver. Values can be in decimal format (e.g. 1000) or in colon-separated hexadecimal format (e.g. 00:03:E8). + * - ``vsp_pool`` = ``None`` + - (String) Pool number or pool name of the DP pool. + * - ``vsp_storage_id`` = ``None`` + - (String) Product number of the storage system. + * - ``vsp_target_ports`` = ``None`` + - (List) IDs of the storage ports used to attach volumes to the controller node. To specify multiple ports, connect them by commas (e.g. CL1-A,CL2-A). + * - ``vsp_thin_pool`` = ``None`` + - (String) Pool number or pool name of the Thin Image pool. + * - ``vsp_use_chap_auth`` = ``False`` + - (Boolean) If True, CHAP authentication will be applied to communication between hosts and any of the iSCSI targets on the storage ports. + * - ``vsp_zoning_request`` = ``False`` + - (Boolean) If True, the driver will configure FC zoning between the server and the storage system provided that FC zoning manager is enabled. diff --git a/doc/source/config-reference/tables/cinder-hpe3par.rst b/doc/source/config-reference/tables/cinder-hpe3par.rst new file mode 100644 index 00000000000..418663ccda6 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-hpe3par.rst @@ -0,0 +1,40 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-hpe3par: + +.. 
list-table:: Description of HPE 3PAR Fibre Channel and iSCSI drivers configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``hpe3par_api_url`` = + - (String) 3PAR WSAPI Server Url like https://<3par ip>:8080/api/v1 + * - ``hpe3par_cpg`` = ``OpenStack`` + - (List) List of the CPG(s) to use for volume creation + * - ``hpe3par_cpg_snap`` = + - (String) The CPG to use for Snapshots for volumes. If empty the userCPG will be used. + * - ``hpe3par_debug`` = ``False`` + - (Boolean) Enable HTTP debugging to 3PAR + * - ``hpe3par_iscsi_chap_enabled`` = ``False`` + - (Boolean) Enable CHAP authentication for iSCSI connections. + * - ``hpe3par_iscsi_ips`` = + - (List) List of target iSCSI addresses to use. + * - ``hpe3par_password`` = + - (String) 3PAR password for the user specified in hpe3par_username + * - ``hpe3par_snapshot_expiration`` = + - (String) The time in hours when a snapshot expires and is deleted. This must be larger than the retention time set in hpe3par_snapshot_retention. + * - ``hpe3par_snapshot_retention`` = + - (String) The time in hours to retain a snapshot. You can't delete it before this expires. + * - ``hpe3par_username`` = + - (String) 3PAR username with the 'edit' role diff --git a/doc/source/config-reference/tables/cinder-hpelefthand.rst b/doc/source/config-reference/tables/cinder-hpelefthand.rst new file mode 100644 index 00000000000..07e87e6bfdb --- /dev/null +++ b/doc/source/config-reference/tables/cinder-hpelefthand.rst @@ -0,0 +1,34 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-hpelefthand: + +.. list-table:: Description of HPE LeftHand/StoreVirtual driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``hpelefthand_api_url`` = ``None`` + - (URI) HPE LeftHand WSAPI Server Url like https://<LeftHand ip>:8081/lhos + * - ``hpelefthand_clustername`` = ``None`` + - (String) HPE LeftHand cluster name + * - ``hpelefthand_debug`` = ``False`` + - (Boolean) Enable HTTP debugging to LeftHand + * - ``hpelefthand_iscsi_chap_enabled`` = ``False`` + - (Boolean) Configure CHAP authentication for iSCSI connections (Default: Disabled) + * - ``hpelefthand_password`` = ``None`` + - (String) HPE LeftHand Super user password + * - ``hpelefthand_ssh_port`` = ``16022`` + - (Port number) Port number of SSH service. + * - ``hpelefthand_username`` = ``None`` + - (String) HPE LeftHand Super user username diff --git a/doc/source/config-reference/tables/cinder-hpexp.rst b/doc/source/config-reference/tables/cinder-hpexp.rst new file mode 100644 index 00000000000..319105eb1c3 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-hpexp.rst @@ -0,0 +1,56 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository.
+ + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-hpexp: + +.. list-table:: Description of HPE XP volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``hpexp_async_copy_check_interval`` = ``10`` + - (Integer) Interval to check copy asynchronously + * - ``hpexp_compute_target_ports`` = ``None`` + - (List) Target port names of compute node for host group or iSCSI target + * - ``hpexp_copy_check_interval`` = ``3`` + - (Integer) Interval to check copy + * - ``hpexp_copy_speed`` = ``3`` + - (Integer) Copy speed of storage system + * - ``hpexp_default_copy_method`` = ``FULL`` + - (String) Default copy method of storage system. There are two valid values: "FULL" specifies that a full copy is created; "THIN" specifies that a thin copy is created. Default value is "FULL" + * - ``hpexp_group_request`` = ``False`` + - (Boolean) Request for creating host group or iSCSI target + * - ``hpexp_horcm_add_conf`` = ``True`` + - (Boolean) Add to HORCM configuration + * - ``hpexp_horcm_name_only_discovery`` = ``False`` + - (Boolean) Only discover a specific name of host group or iSCSI target + * - ``hpexp_horcm_numbers`` = ``200, 201`` + - (List) Instance numbers for HORCM + * - ``hpexp_horcm_resource_name`` = ``meta_resource`` + - (String) Resource group name of storage system for HORCM + * - ``hpexp_horcm_user`` = ``None`` + - (String) Username of storage system for HORCM + * - ``hpexp_ldev_range`` = ``None`` + - (String) Logical device range of storage system + * - ``hpexp_pool`` = ``None`` + - (String) Pool of storage system + * - ``hpexp_storage_cli`` = ``None`` + - (String) Type of storage command line interface + * - ``hpexp_storage_id`` = ``None`` + - (String) ID of storage system + * - ``hpexp_target_ports`` = ``None`` + - (List) Target port names for host group or iSCSI target + * - ``hpexp_thin_pool`` = ``None`` + - (String) Thin pool of storage system + * - ``hpexp_zoning_request`` = ``False`` + - (Boolean) Request for FC Zone creating host group diff --git a/doc/source/config-reference/tables/cinder-hpmsa.rst b/doc/source/config-reference/tables/cinder-hpmsa.rst new file mode 100644 index 00000000000..08362f8dcbf --- /dev/null +++ b/doc/source/config-reference/tables/cinder-hpmsa.rst @@ -0,0 +1,32 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-hpmsa: + +.. list-table:: Description of HP MSA volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``hpmsa_api_protocol`` = ``https`` + - (String) HPMSA API interface protocol. + * - ``hpmsa_backend_name`` = ``A`` + - (String) Pool or Vdisk name to use for volume creation. + * - ``hpmsa_backend_type`` = ``virtual`` + - (String) linear (for Vdisk) or virtual (for Pool). + * - ``hpmsa_iscsi_ips`` = + - (List) List of comma-separated target iSCSI IP addresses.
+ * - ``hpmsa_verify_certificate`` = ``False`` + - (Boolean) Whether to verify HPMSA array SSL certificate. + * - ``hpmsa_verify_certificate_path`` = ``None`` + - (String) HPMSA array SSL certificate path. diff --git a/doc/source/config-reference/tables/cinder-huawei.rst b/doc/source/config-reference/tables/cinder-huawei.rst new file mode 100644 index 00000000000..41c37311bc7 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-huawei.rst @@ -0,0 +1,34 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-huawei: + +.. list-table:: Description of Huawei storage driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``cinder_huawei_conf_file`` = ``/etc/cinder/cinder_huawei_conf.xml`` + - (String) The configuration file for the Cinder Huawei driver. + * - ``hypermetro_devices`` = ``None`` + - (String) The remote device hypermetro will use. + * - ``metro_domain_name`` = ``None`` + - (String) The remote metro device domain name. + * - ``metro_san_address`` = ``None`` + - (String) The remote metro device request url. + * - ``metro_san_password`` = ``None`` + - (String) The remote metro device san password. + * - ``metro_san_user`` = ``None`` + - (String) The remote metro device san user. + * - ``metro_storage_pools`` = ``None`` + - (String) The remote metro device pool names. diff --git a/doc/source/config-reference/tables/cinder-hyperv.rst b/doc/source/config-reference/tables/cinder-hyperv.rst new file mode 100644 index 00000000000..62bd65c3c4d --- /dev/null +++ b/doc/source/config-reference/tables/cinder-hyperv.rst @@ -0,0 +1,22 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-hyperv: + +.. list-table:: Description of HyperV volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[hyperv]** + - + * - ``force_volumeutils_v1`` = ``False`` + - (Boolean) DEPRECATED: Force V1 volume utility class diff --git a/doc/source/config-reference/tables/cinder-ibm_gpfs.rst b/doc/source/config-reference/tables/cinder-ibm_gpfs.rst new file mode 100644 index 00000000000..4bc9b581a47 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-ibm_gpfs.rst @@ -0,0 +1,45 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-ibm_gpfs: + +.. 
list-table:: Description of Spectrum Scale volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + + * - **[DEFAULT]** + - + + * - ``gpfs_images_dir`` = ``None`` + + - (String) Specifies the path of the Image service repository in GPFS. Leave undefined if not storing images in GPFS. + + * - ``gpfs_images_share_mode`` = ``None`` + + - (String) Specifies the type of image copy to be used. Set this when the Image service repository also uses GPFS so that image files can be transferred efficiently from the Image service to the Block Storage service. There are two valid values: "copy" specifies that a full copy of the image is made; "copy_on_write" specifies that copy-on-write optimization strategy is used and unmodified blocks of the image file are shared efficiently. + + * - ``gpfs_max_clone_depth`` = ``0`` + + - (Integer) Specifies an upper limit on the number of indirections required to reach a specific block due to snapshots or clones. A lengthy chain of copy-on-write snapshots or clones can have a negative impact on performance, but improves space utilization. 0 indicates unlimited clone depth. + + * - ``gpfs_mount_point_base`` = ``None`` + + - (String) Specifies the path of the GPFS directory where Block Storage volume and snapshot files are stored. + + * - ``gpfs_sparse_volumes`` = ``True`` + + - (Boolean) Specifies that volumes are created as sparse files which initially consume no space. If set to False, the volume is created as a fully allocated file, in which case, creation may take a significantly longer time. + + * - ``gpfs_storage_pool`` = ``system`` + + - (String) Specifies the storage pool that volumes are assigned to. By default, the system storage pool is used. diff --git a/doc/source/config-reference/tables/cinder-ibm_gpfs_nfs.rst b/doc/source/config-reference/tables/cinder-ibm_gpfs_nfs.rst new file mode 100644 index 00000000000..8a936114580 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-ibm_gpfs_nfs.rst @@ -0,0 +1,73 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-ibm_gpfs_nfs: + +.. list-table:: Description of Spectrum Scale NFS volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + + * - **[DEFAULT]** + - + + * - ``gpfs_images_dir`` = ``None`` + + - (String) Specifies the path of the Image service repository in GPFS. Leave undefined if not storing images in GPFS. + + * - ``gpfs_images_share_mode`` = ``None`` + + - (String) Specifies the type of image copy to be used. Set this when the Image service repository also uses GPFS so that image files can be transferred efficiently from the Image service to the Block Storage service. There are two valid values: "copy" specifies that a full copy of the image is made; "copy_on_write" specifies that copy-on-write optimization strategy is used and unmodified blocks of the image file are shared efficiently. 
+ + * - ``gpfs_max_clone_depth`` = ``0`` + + - (Integer) Specifies an upper limit on the number of indirections required to reach a specific block due to snapshots or clones. A lengthy chain of copy-on-write snapshots or clones can have a negative impact on performance, but improves space utilization. 0 indicates unlimited clone depth. + + * - ``gpfs_mount_point_base`` = ``None`` + + - (String) Specifies the path of the GPFS directory where Block Storage volume and snapshot files are stored. + + * - ``gpfs_sparse_volumes`` = ``True`` + + - (Boolean) Specifies that volumes are created as sparse files which initially consume no space. If set to False, the volume is created as a fully allocated file, in which case, creation may take a significantly longer time. + + * - ``gpfs_storage_pool`` = ``system`` + + - (String) Specifies the storage pool that volumes are assigned to. By default, the system storage pool is used. + + * - ``nas_host`` = + + - (String) IP address or Hostname of NAS system. + + * - ``nas_login`` = ``admin`` + + - (String) User name to connect to NAS system. + + * - ``nas_password`` = + + - (String) Password to connect to NAS system. + + * - ``nas_private_key`` = + + - (String) Filename of private key to use for SSH authentication. + + * - ``nas_ssh_port`` = ``22`` + + - (Port number) SSH port to use to connect to NAS system. + + * - ``nfs_mount_point_base`` = ``$state_path/mnt`` + + - (String) Base dir containing mount points for NFS shares. + + * - ``nfs_shares_config`` = ``/etc/cinder/nfs_shares`` + + - (String) File with the list of available NFS shares. diff --git a/doc/source/config-reference/tables/cinder-ibm_gpfs_remote.rst b/doc/source/config-reference/tables/cinder-ibm_gpfs_remote.rst new file mode 100644 index 00000000000..fa49f2907cc --- /dev/null +++ b/doc/source/config-reference/tables/cinder-ibm_gpfs_remote.rst @@ -0,0 +1,73 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-ibm_gpfs_remote: + +.. list-table:: Description of Spectrum Scale Remote volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + + * - **[DEFAULT]** + - + + * - ``gpfs_hosts`` = + + - (List) Comma-separated list of IP address or hostnames of GPFS nodes. + + * - ``gpfs_hosts_key_file`` = ``$state_path/ssh_known_hosts`` + + - (String) File containing SSH host keys for the gpfs nodes with which driver needs to communicate. Default=$state_path/ssh_known_hosts + + * - ``gpfs_images_dir`` = ``None`` + + - (String) Specifies the path of the Image service repository in GPFS. Leave undefined if not storing images in GPFS. + + * - ``gpfs_images_share_mode`` = ``None`` + + - (String) Specifies the type of image copy to be used. Set this when the Image service repository also uses GPFS so that image files can be transferred efficiently from the Image service to the Block Storage service. There are two valid values: "copy" specifies that a full copy of the image is made; "copy_on_write" specifies that copy-on-write optimization strategy is used and unmodified blocks of the image file are shared efficiently. 
+ + * - ``gpfs_max_clone_depth`` = ``0`` + + - (Integer) Specifies an upper limit on the number of indirections required to reach a specific block due to snapshots or clones. A lengthy chain of copy-on-write snapshots or clones can have a negative impact on performance, but improves space utilization. 0 indicates unlimited clone depth. + + * - ``gpfs_mount_point_base`` = ``None`` + + - (String) Specifies the path of the GPFS directory where Block Storage volume and snapshot files are stored. + + * - ``gpfs_private_key`` = + + - (String) Filename of private key to use for SSH authentication. + + * - ``gpfs_sparse_volumes`` = ``True`` + + - (Boolean) Specifies that volumes are created as sparse files which initially consume no space. If set to False, the volume is created as a fully allocated file, in which case, creation may take a significantly longer time. + + * - ``gpfs_ssh_port`` = ``22`` + + - (Port number) SSH port to use. + + * - ``gpfs_storage_pool`` = ``system`` + + - (String) Specifies the storage pool that volumes are assigned to. By default, the system storage pool is used. + + * - ``gpfs_strict_host_key_policy`` = ``False`` + + - (Boolean) Option to enable strict gpfs host key checking while connecting to gpfs nodes. Default=False + + * - ``gpfs_user_login`` = ``root`` + + - (String) Username for GPFS nodes. + + * - ``gpfs_user_password`` = + + - (String) Password for GPFS node user. diff --git a/doc/source/config-reference/tables/cinder-ibm_storage.rst b/doc/source/config-reference/tables/cinder-ibm_storage.rst new file mode 100644 index 00000000000..e1f80af0901 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-ibm_storage.rst @@ -0,0 +1,36 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-ibm_storage: + +.. list-table:: Description of IBM Storage driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``ds8k_devadd_unitadd_mapping`` = + - (String) Mapping between IODevice address and unit address. + * - ``ds8k_host_type`` = ``auto`` + - (String) Set to zLinux if your OpenStack version is prior to Liberty and you're connecting to zLinux systems. Otherwise set to auto. Valid values for this parameter are: 'auto', 'AMDLinuxRHEL', 'AMDLinuxSuse', 'AppleOSX', 'Fujitsu', 'Hp', 'HpTru64', 'HpVms', 'LinuxDT', 'LinuxRF', 'LinuxRHEL', 'LinuxSuse', 'Novell', 'SGI', 'SVC', 'SanFsAIX', 'SanFsLinux', 'Sun', 'VMWare', 'Win2000', 'Win2003', 'Win2008', 'Win2012', 'iLinux', 'nSeries', 'pLinux', 'pSeries', 'pSeriesPowerswap', 'zLinux', 'iSeries'. 
+ * - ``ds8k_ssid_prefix`` = ``FF`` + - (String) Set the first two digits of SSID + * - ``proxy`` = ``cinder.volume.drivers.ibm.ibm_storage.proxy.IBMStorageProxy`` + - (String) Proxy driver that connects to the IBM Storage Array + * - ``san_clustername`` = + - (String) Cluster name to use for creating volumes + * - ``san_ip`` = + - (String) IP address of SAN controller + * - ``san_login`` = ``admin`` + - (String) Username for SAN controller + * - ``san_password`` = + - (String) Password for SAN controller diff --git a/doc/source/config-reference/tables/cinder-images.rst b/doc/source/config-reference/tables/cinder-images.rst new file mode 100644 index 00000000000..62bc2d32a09 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-images.rst @@ -0,0 +1,54 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-images: + +.. list-table:: Description of images configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``allowed_direct_url_schemes`` = + - (List) A list of url schemes that can be downloaded directly via the direct_url. Currently supported schemes: [file, cinder]. + * - ``glance_api_insecure`` = ``False`` + - (Boolean) Allow to perform insecure SSL (https) requests to glance (https will be used but cert validation will not be performed). + * - ``glance_api_servers`` = ``None`` + - (List) A list of the URLs of glance API servers available to cinder ([http[s]://][hostname|ip]:port). If protocol is not specified it defaults to http. + * - ``glance_api_ssl_compression`` = ``False`` + - (Boolean) Enables or disables negotiation of SSL layer compression. In some cases disabling compression can improve data throughput, such as when high network bandwidth is available and you use compressed image formats like qcow2. + * - ``glance_api_version`` = ``1`` + - (Integer) Version of the glance API to use + * - ``glance_ca_certificates_file`` = ``None`` + - (String) Location of ca certificates file to use for glance client requests. + * - ``glance_catalog_info`` = ``image:glance:publicURL`` + - (String) Info to match when looking for glance in the service catalog. Format is: separated values of the form <service_type>:<service_name>:<endpoint_type> - Only used if glance_api_servers are not provided. + * - ``glance_core_properties`` = ``checksum, container_format, disk_format, image_name, image_id, min_disk, min_ram, name, size`` + - (List) Default core properties of image + * - ``glance_num_retries`` = ``0`` + - (Integer) Number retries when downloading an image from glance + * - ``glance_request_timeout`` = ``None`` + - (Integer) http/https timeout value for glance operations. If no value (None) is supplied here, the glanceclient default value is used. + * - ``image_conversion_dir`` = ``$state_path/conversion`` + - (String) Directory used for temporary storage during image conversion + * - ``image_upload_use_cinder_backend`` = ``False`` + - (Boolean) If set to True, upload-to-image in raw format will create a cloned volume and register its location to the image service, instead of uploading the volume content.
The cinder backend and locations support must be enabled in the image service, and glance_api_version must be set to 2. + * - ``image_upload_use_internal_tenant`` = ``False`` + - (Boolean) If set to True, the image volume created by upload-to-image will be placed in the internal tenant. Otherwise, the image volume is created in the current context's tenant. + * - ``image_volume_cache_enabled`` = ``False`` + - (Boolean) Enable the image volume cache for this backend. + * - ``image_volume_cache_max_count`` = ``0`` + - (Integer) Max number of entries allowed in the image volume cache. 0 => unlimited. + * - ``image_volume_cache_max_size_gb`` = ``0`` + - (Integer) Max size of the image volume cache for this backend in GB. 0 => unlimited. + * - ``use_multipath_for_image_xfer`` = ``False`` + - (Boolean) Do we attach/detach volumes in cinder using multipath for volume to image and image to volume transfers? diff --git a/doc/source/config-reference/tables/cinder-infinidat.rst b/doc/source/config-reference/tables/cinder-infinidat.rst new file mode 100644 index 00000000000..c5529b2818e --- /dev/null +++ b/doc/source/config-reference/tables/cinder-infinidat.rst @@ -0,0 +1,22 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-infinidat: + +.. list-table:: Description of INFINIDAT InfiniBox volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``infinidat_pool_name`` = ``None`` + - (String) Name of the pool from which volumes are allocated diff --git a/doc/source/config-reference/tables/cinder-infortrend.rst b/doc/source/config-reference/tables/cinder-infortrend.rst new file mode 100644 index 00000000000..7319c573aca --- /dev/null +++ b/doc/source/config-reference/tables/cinder-infortrend.rst @@ -0,0 +1,36 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-infortrend: + +.. list-table:: Description of Infortrend volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``infortrend_cli_max_retries`` = ``5`` + - (Integer) Maximum retry time for cli. Default is 5. + * - ``infortrend_cli_path`` = ``/opt/bin/Infortrend/raidcmd_ESDS10.jar`` + - (String) The Infortrend CLI absolute path. By default, it is at /opt/bin/Infortrend/raidcmd_ESDS10.jar + * - ``infortrend_cli_timeout`` = ``30`` + - (Integer) Default timeout for CLI copy operations in minutes. Support: migrate volume, create cloned volume and create volume from snapshot. By Default, it is 30 minutes. + * - ``infortrend_pools_name`` = + - (String) Infortrend raid pool name list. It is separated with comma. 
+ * - ``infortrend_provisioning`` = ``full`` + - (String) Let the volume use specific provisioning. By default, it is the full provisioning. The supported options are full or thin. + * - ``infortrend_slots_a_channels_id`` = ``0,1,2,3,4,5,6,7`` + - (String) Infortrend raid channel ID list on Slot A for OpenStack usage. It is separated with comma. By default, it is the channel 0~7. + * - ``infortrend_slots_b_channels_id`` = ``0,1,2,3,4,5,6,7`` + - (String) Infortrend raid channel ID list on Slot B for OpenStack usage. It is separated with comma. By default, it is the channel 0~7. + * - ``infortrend_tiering`` = ``0`` + - (String) Let the volume use specific tiering level. By default, it is the level 0. The supported levels are 0,2,3,4. diff --git a/doc/source/config-reference/tables/cinder-kaminario.rst b/doc/source/config-reference/tables/cinder-kaminario.rst new file mode 100644 index 00000000000..321584152fe --- /dev/null +++ b/doc/source/config-reference/tables/cinder-kaminario.rst @@ -0,0 +1,22 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-kaminario: + +.. list-table:: Description of Kaminario volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``auto_calc_max_oversubscription_ratio`` = ``False`` + - (Boolean) K2 driver will calculate max_oversubscription_ratio on setting this option as True. diff --git a/doc/source/config-reference/tables/cinder-lenovo.rst b/doc/source/config-reference/tables/cinder-lenovo.rst new file mode 100644 index 00000000000..f67b0a15425 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-lenovo.rst @@ -0,0 +1,32 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-lenovo: + +.. list-table:: Description of Lenovo volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``lenovo_api_protocol`` = ``https`` + - (String) Lenovo api interface protocol. + * - ``lenovo_backend_name`` = ``A`` + - (String) Pool or Vdisk name to use for volume creation. + * - ``lenovo_backend_type`` = ``virtual`` + - (String) linear (for VDisk) or virtual (for Pool). + * - ``lenovo_iscsi_ips`` = + - (List) List of comma-separated target iSCSI IP addresses. + * - ``lenovo_verify_certificate`` = ``False`` + - (Boolean) Whether to verify Lenovo array SSL certificate. + * - ``lenovo_verify_certificate_path`` = ``None`` + - (String) Lenovo array SSL certificate path. 
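+
+As a quick reference, these options might be combined in a back-end section of ``cinder.conf`` roughly as in the following sketch. The section name ``[lenovo-iscsi]`` and the target iSCSI addresses are illustrative placeholders only; the remaining values simply repeat the documented defaults:
+
+.. code-block:: ini
+
+    [lenovo-iscsi]
+    # Illustrative values only; adjust for your array.
+    lenovo_api_protocol = https
+    lenovo_backend_name = A
+    lenovo_backend_type = virtual
+    lenovo_iscsi_ips = 10.0.0.11,10.0.0.12
+    lenovo_verify_certificate = False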
diff --git a/doc/source/config-reference/tables/cinder-lvm.rst b/doc/source/config-reference/tables/cinder-lvm.rst new file mode 100644 index 00000000000..f0fa975458d --- /dev/null +++ b/doc/source/config-reference/tables/cinder-lvm.rst @@ -0,0 +1,32 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-lvm: + +.. list-table:: Description of LVM configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``lvm_conf_file`` = ``/etc/cinder/lvm.conf`` + - (String) LVM conf file to use for the LVM driver in Cinder; this setting is ignored if the specified file does not exist (You can also specify 'None' to not use a conf file even if one exists). + * - ``lvm_max_over_subscription_ratio`` = ``1.0`` + - (Floating point) max_over_subscription_ratio setting for the LVM driver. If set, this takes precedence over the general max_over_subscription_ratio option. If None, the general option is used. + * - ``lvm_mirrors`` = ``0`` + - (Integer) If >0, create LVs with multiple mirrors. Note that this requires lvm_mirrors + 2 PVs with available space + * - ``lvm_suppress_fd_warnings`` = ``False`` + - (Boolean) Suppress leaked file descriptor warnings in LVM commands. + * - ``lvm_type`` = ``default`` + - (String) Type of LVM volumes to deploy; (default, thin, or auto). Auto defaults to thin if thin is supported. + * - ``volume_group`` = ``cinder-volumes`` + - (String) Name for the VG that will contain exported volumes diff --git a/doc/source/config-reference/tables/cinder-nas.rst b/doc/source/config-reference/tables/cinder-nas.rst new file mode 100644 index 00000000000..4dcfa2da5f2 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-nas.rst @@ -0,0 +1,40 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-nas: + +.. list-table:: Description of NAS configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``nas_host`` = + - (String) IP address or Hostname of NAS system. + * - ``nas_login`` = ``admin`` + - (String) User name to connect to NAS system. + * - ``nas_mount_options`` = ``None`` + - (String) Options used to mount the storage backend file system where Cinder volumes are stored. + * - ``nas_password`` = + - (String) Password to connect to NAS system. + * - ``nas_private_key`` = + - (String) Filename of private key to use for SSH authentication. + * - ``nas_secure_file_operations`` = ``auto`` + - (String) Allow network-attached storage systems to operate in a secure environment where root level access is not permitted. If set to False, access is as the root user and insecure. If set to True, access is not as root. 
If set to auto, a check is done to determine if this is a new installation: True is used if so, otherwise False. Default is auto. + * - ``nas_secure_file_permissions`` = ``auto`` + - (String) Set more secure file permissions on network-attached storage volume files to restrict broad other/world access. If set to False, volumes are created with open permissions. If set to True, volumes are created with permissions for the cinder user and group (660). If set to auto, a check is done to determine if this is a new installation: True is used if so, otherwise False. Default is auto. + * - ``nas_share_path`` = + - (String) Path to the share to use for storing Cinder volumes. For example: "/srv/export1" for an NFS server export available at 10.0.5.10:/srv/export1 . + * - ``nas_ssh_port`` = ``22`` + - (Port number) SSH port to use to connect to NAS system. + * - ``nas_volume_prov_type`` = ``thin`` + - (String) Provisioning type that will be used when creating volumes. diff --git a/doc/source/config-reference/tables/cinder-nec_m.rst b/doc/source/config-reference/tables/cinder-nec_m.rst new file mode 100644 index 00000000000..8cc4183f28b --- /dev/null +++ b/doc/source/config-reference/tables/cinder-nec_m.rst @@ -0,0 +1,58 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-nec_m: + +.. list-table:: Description of NEC Storage M series driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``nec_actual_free_capacity`` = ``False`` + - (Boolean) Return actual free capacity. + * - ``nec_backend_max_ld_count`` = ``1024`` + - (Integer) Maximum number of managing sessions. + * - ``nec_backup_ldname_format`` = ``LX:%s`` + - (String) M-Series Storage LD name format for snapshots. + * - ``nec_backup_pools`` = + - (List) M-Series Storage backup pool number to be used. + * - ``nec_diskarray_name`` = + - (String) Diskarray name of M-Series Storage. + * - ``nec_iscsi_portals_per_cont`` = ``1`` + - (Integer) Number of iSCSI portals. + * - ``nec_ismcli_fip`` = ``None`` + - (IP) FIP address of M-Series Storage iSMCLI. + * - ``nec_ismcli_password`` = + - (String) Password for M-Series Storage iSMCLI. + * - ``nec_ismcli_privkey`` = + - (String) Filename of RSA private key for M-Series Storage iSMCLI. + * - ``nec_ismcli_user`` = + - (String) User name for M-Series Storage iSMCLI. + * - ``nec_ismview_alloptimize`` = ``False`` + - (Boolean) Use legacy iSMCLI command with optimization. + * - ``nec_ismview_dir`` = ``/tmp/nec/cinder`` + - (String) Output path of iSMview file. + * - ``nec_ldname_format`` = ``LX:%s`` + - (String) M-Series Storage LD name format for volumes. + * - ``nec_ldset`` = + - (String) M-Series Storage LD Set name for Compute Node. + * - ``nec_ldset_for_controller_node`` = + - (String) M-Series Storage LD Set name for Controller Node. + * - ``nec_pools`` = + - (List) M-Series Storage pool numbers list to be used. + * - ``nec_queryconfig_view`` = ``False`` + - (Boolean) Use legacy iSMCLI command. + * - ``nec_ssh_pool_port_number`` = ``22`` + - (Integer) Port number of ssh pool. 
+ * - ``nec_unpairthread_timeout`` = ``3600`` + - (Integer) Timeout value of Unpairthread. diff --git a/doc/source/config-reference/tables/cinder-netapp_7mode_iscsi.rst b/doc/source/config-reference/tables/cinder-netapp_7mode_iscsi.rst new file mode 100644 index 00000000000..f1f3c79fd16 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-netapp_7mode_iscsi.rst @@ -0,0 +1,46 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-netapp_7mode_iscsi: + +.. list-table:: Description of NetApp 7-Mode iSCSI driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``netapp_login`` = ``None`` + - (String) Administrative user account name used to access the storage system or proxy server. + * - ``netapp_partner_backend_name`` = ``None`` + - (String) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC. + * - ``netapp_password`` = ``None`` + - (String) Password for the administrative user account specified in the netapp_login option. + * - ``netapp_pool_name_search_pattern`` = ``(.+)`` + - (String) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC. + * - ``netapp_replication_aggregate_map`` = ``None`` + - (Unknown) Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,... + * - ``netapp_server_hostname`` = ``None`` + - (String) The hostname (or IP address) for the storage system or proxy server. + * - ``netapp_server_port`` = ``None`` + - (Integer) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS. + * - ``netapp_size_multiplier`` = ``1.2`` + - (Floating point) The quantity to be multiplied by the requested volume size to ensure enough space is available on the virtual storage server (Vserver) to fulfill the volume creation request. Note: this option is deprecated and will be removed in favor of "reserved_percentage" in the Mitaka release. 
+ * - ``netapp_snapmirror_quiesce_timeout`` = ``3600`` + - (Integer) The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover. + * - ``netapp_storage_family`` = ``ontap_cluster`` + - (String) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series. + * - ``netapp_storage_protocol`` = ``None`` + - (String) The storage protocol to be used on the data path with the storage system. + * - ``netapp_transport_type`` = ``http`` + - (String) The transport protocol used when communicating with the storage system or proxy server. + * - ``netapp_vfiler`` = ``None`` + - (String) The vFiler unit on which provisioning of block storage volumes will be done. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode. Only use this option when utilizing the MultiStore feature on the NetApp storage system. diff --git a/doc/source/config-reference/tables/cinder-netapp_7mode_nfs.rst b/doc/source/config-reference/tables/cinder-netapp_7mode_nfs.rst new file mode 100644 index 00000000000..d9a59ac4aa6 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-netapp_7mode_nfs.rst @@ -0,0 +1,50 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-netapp_7mode_nfs: + +.. list-table:: Description of NetApp 7-Mode NFS driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``expiry_thres_minutes`` = ``720`` + - (Integer) This option specifies the threshold for last access time for images in the NFS image cache. When a cache cleaning cycle begins, images in the cache that have not been accessed in the last M minutes, where M is the value of this parameter, will be deleted from the cache to create free space on the NFS share. + * - ``netapp_login`` = ``None`` + - (String) Administrative user account name used to access the storage system or proxy server. + * - ``netapp_partner_backend_name`` = ``None`` + - (String) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC. + * - ``netapp_password`` = ``None`` + - (String) Password for the administrative user account specified in the netapp_login option. + * - ``netapp_pool_name_search_pattern`` = ``(.+)`` + - (String) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC. 
+ * - ``netapp_replication_aggregate_map`` = ``None`` + - (Unknown) Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,... + * - ``netapp_server_hostname`` = ``None`` + - (String) The hostname (or IP address) for the storage system or proxy server. + * - ``netapp_server_port`` = ``None`` + - (Integer) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS. + * - ``netapp_snapmirror_quiesce_timeout`` = ``3600`` + - (Integer) The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover. + * - ``netapp_storage_family`` = ``ontap_cluster`` + - (String) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series. + * - ``netapp_storage_protocol`` = ``None`` + - (String) The storage protocol to be used on the data path with the storage system. + * - ``netapp_transport_type`` = ``http`` + - (String) The transport protocol used when communicating with the storage system or proxy server. + * - ``netapp_vfiler`` = ``None`` + - (String) The vFiler unit on which provisioning of block storage volumes will be done. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode. Only use this option when utilizing the MultiStore feature on the NetApp storage system. + * - ``thres_avl_size_perc_start`` = ``20`` + - (Integer) If the percentage of available space for an NFS share has dropped below the value specified by this option, the NFS image cache will be cleaned. + * - ``thres_avl_size_perc_stop`` = ``60`` + - (Integer) When the percentage of available space on an NFS share has reached the percentage specified by this option, the driver will stop clearing files from the NFS image cache that have not been accessed in the last M minutes, where M is the value of the expiry_thres_minutes configuration option. diff --git a/doc/source/config-reference/tables/cinder-netapp_cdot_iscsi.rst b/doc/source/config-reference/tables/cinder-netapp_cdot_iscsi.rst new file mode 100644 index 00000000000..84d563f94ba --- /dev/null +++ b/doc/source/config-reference/tables/cinder-netapp_cdot_iscsi.rst @@ -0,0 +1,50 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-netapp_cdot_iscsi: + +.. 
list-table:: Description of NetApp cDOT iSCSI driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``netapp_login`` = ``None`` + - (String) Administrative user account name used to access the storage system or proxy server. + * - ``netapp_lun_ostype`` = ``None`` + - (String) This option defines the type of operating system that will access a LUN exported from Data ONTAP; it is assigned to the LUN at the time it is created. + * - ``netapp_lun_space_reservation`` = ``enabled`` + - (String) This option determines if storage space is reserved for LUN allocation. If enabled, LUNs are thick provisioned. If space reservation is disabled, storage space is allocated on demand. + * - ``netapp_partner_backend_name`` = ``None`` + - (String) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC. + * - ``netapp_password`` = ``None`` + - (String) Password for the administrative user account specified in the netapp_login option. + * - ``netapp_pool_name_search_pattern`` = ``(.+)`` + - (String) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC. + * - ``netapp_replication_aggregate_map`` = ``None`` + - (Unknown) Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,... + * - ``netapp_server_hostname`` = ``None`` + - (String) The hostname (or IP address) for the storage system or proxy server. + * - ``netapp_server_port`` = ``None`` + - (Integer) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS. + * - ``netapp_size_multiplier`` = ``1.2`` + - (Floating point) The quantity to be multiplied by the requested volume size to ensure enough space is available on the virtual storage server (Vserver) to fulfill the volume creation request. Note: this option is deprecated and will be removed in favor of "reserved_percentage" in the Mitaka release. + * - ``netapp_snapmirror_quiesce_timeout`` = ``3600`` + - (Integer) The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover. + * - ``netapp_storage_family`` = ``ontap_cluster`` + - (String) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series. 
+ * - ``netapp_storage_protocol`` = ``None`` + - (String) The storage protocol to be used on the data path with the storage system. + * - ``netapp_transport_type`` = ``http`` + - (String) The transport protocol used when communicating with the storage system or proxy server. + * - ``netapp_vserver`` = ``None`` + - (String) This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of block storage volumes should occur. diff --git a/doc/source/config-reference/tables/cinder-netapp_cdot_nfs.rst b/doc/source/config-reference/tables/cinder-netapp_cdot_nfs.rst new file mode 100644 index 00000000000..e7e1e145c69 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-netapp_cdot_nfs.rst @@ -0,0 +1,58 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-netapp_cdot_nfs: + +.. list-table:: Description of NetApp cDOT NFS driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``expiry_thres_minutes`` = ``720`` + - (Integer) This option specifies the threshold for last access time for images in the NFS image cache. When a cache cleaning cycle begins, images in the cache that have not been accessed in the last M minutes, where M is the value of this parameter, will be deleted from the cache to create free space on the NFS share. + * - ``netapp_copyoffload_tool_path`` = ``None`` + - (String) This option specifies the path of the NetApp copy offload tool binary. Ensure that the binary has execute permissions set which allow the effective user of the cinder-volume process to execute the file. + * - ``netapp_host_type`` = ``None`` + - (String) This option defines the type of operating system for all initiators that can access a LUN. This information is used when mapping LUNs to individual hosts or groups of hosts. + * - ``netapp_host_type`` = ``None`` + - (String) This option defines the type of operating system for all initiators that can access a LUN. This information is used when mapping LUNs to individual hosts or groups of hosts. + * - ``netapp_login`` = ``None`` + - (String) Administrative user account name used to access the storage system or proxy server. + * - ``netapp_lun_ostype`` = ``None`` + - (String) This option defines the type of operating system that will access a LUN exported from Data ONTAP; it is assigned to the LUN at the time it is created. + * - ``netapp_partner_backend_name`` = ``None`` + - (String) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC. + * - ``netapp_password`` = ``None`` + - (String) Password for the administrative user account specified in the netapp_login option. + * - ``netapp_pool_name_search_pattern`` = ``(.+)`` + - (String) This option is used to restrict provisioning to the specified pools. 
Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC. + * - ``netapp_replication_aggregate_map`` = ``None`` + - (Unknown) Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,... + * - ``netapp_server_hostname`` = ``None`` + - (String) The hostname (or IP address) for the storage system or proxy server. + * - ``netapp_server_port`` = ``None`` + - (Integer) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS. + * - ``netapp_snapmirror_quiesce_timeout`` = ``3600`` + - (Integer) The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover. + * - ``netapp_storage_family`` = ``ontap_cluster`` + - (String) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series. + * - ``netapp_storage_protocol`` = ``None`` + - (String) The storage protocol to be used on the data path with the storage system. + * - ``netapp_transport_type`` = ``http`` + - (String) The transport protocol used when communicating with the storage system or proxy server. + * - ``netapp_vserver`` = ``None`` + - (String) This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of block storage volumes should occur. + * - ``thres_avl_size_perc_start`` = ``20`` + - (Integer) If the percentage of available space for an NFS share has dropped below the value specified by this option, the NFS image cache will be cleaned. + * - ``thres_avl_size_perc_stop`` = ``60`` + - (Integer) When the percentage of available space on an NFS share has reached the percentage specified by this option, the driver will stop clearing files from the NFS image cache that have not been accessed in the last M minutes, where M is the value of the expiry_thres_minutes configuration option. diff --git a/doc/source/config-reference/tables/cinder-netapp_eseries_iscsi.rst b/doc/source/config-reference/tables/cinder-netapp_eseries_iscsi.rst new file mode 100644 index 00000000000..5325df62b8c --- /dev/null +++ b/doc/source/config-reference/tables/cinder-netapp_eseries_iscsi.rst @@ -0,0 +1,50 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. 
_cinder-netapp_eseries_iscsi: + +.. list-table:: Description of NetApp E-Series driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``netapp_controller_ips`` = ``None`` + - (String) This option is only utilized when the storage family is configured to eseries. This option is used to restrict provisioning to the specified controllers. Specify the value of this option to be a comma separated list of controller hostnames or IP addresses to be used for provisioning. + * - ``netapp_enable_multiattach`` = ``False`` + - (Boolean) This option specifies whether the driver should allow operations that require multiple attachments to a volume. An example would be live migration of servers that have volumes attached. When enabled, this backend is limited to 256 total volumes in order to guarantee volumes can be accessed by more than one host. + * - ``netapp_host_type`` = ``None`` + - (String) This option defines the type of operating system for all initiators that can access a LUN. This information is used when mapping LUNs to individual hosts or groups of hosts. + * - ``netapp_login`` = ``None`` + - (String) Administrative user account name used to access the storage system or proxy server. + * - ``netapp_partner_backend_name`` = ``None`` + - (String) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC. + * - ``netapp_password`` = ``None`` + - (String) Password for the administrative user account specified in the netapp_login option. + * - ``netapp_pool_name_search_pattern`` = ``(.+)`` + - (String) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC. + * - ``netapp_replication_aggregate_map`` = ``None`` + - (Unknown) Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,... + * - ``netapp_sa_password`` = ``None`` + - (String) Password for the NetApp E-Series storage array. + * - ``netapp_server_hostname`` = ``None`` + - (String) The hostname (or IP address) for the storage system or proxy server. + * - ``netapp_server_port`` = ``None`` + - (Integer) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS. + * - ``netapp_snapmirror_quiesce_timeout`` = ``3600`` + - (Integer) The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover. 
+ * - ``netapp_storage_family`` = ``ontap_cluster`` + - (String) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series. + * - ``netapp_transport_type`` = ``http`` + - (String) The transport protocol used when communicating with the storage system or proxy server. + * - ``netapp_webservice_path`` = ``/devmgr/v2`` + - (String) This option is used to specify the path to the E-Series proxy application on a proxy server. The value is combined with the value of the netapp_transport_type, netapp_server_hostname, and netapp_server_port options to create the URL used by the driver to connect to the proxy application. diff --git a/doc/source/config-reference/tables/cinder-nexenta.rst b/doc/source/config-reference/tables/cinder-nexenta.rst new file mode 100644 index 00000000000..4f7ed85128d --- /dev/null +++ b/doc/source/config-reference/tables/cinder-nexenta.rst @@ -0,0 +1,70 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-nexenta: + +.. list-table:: Description of Nexenta driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``nexenta_blocksize`` = ``4096`` + - (Integer) Block size for datasets + * - ``nexenta_chunksize`` = ``32768`` + - (Integer) NexentaEdge iSCSI LUN object chunk size + * - ``nexenta_client_address`` = + - (String) NexentaEdge iSCSI Gateway client address for non-VIP service + * - ``nexenta_dataset_compression`` = ``on`` + - (String) Compression value for new ZFS folders. + * - ``nexenta_dataset_dedup`` = ``off`` + - (String) Deduplication value for new ZFS folders. + * - ``nexenta_dataset_description`` = + - (String) Human-readable description for the folder. + * - ``nexenta_host`` = + - (String) IP address of Nexenta SA + * - ``nexenta_iscsi_target_portal_port`` = ``3260`` + - (Integer) Nexenta target portal port + * - ``nexenta_mount_point_base`` = ``$state_path/mnt`` + - (String) Base directory that contains NFS share mount points + * - ``nexenta_nbd_symlinks_dir`` = ``/dev/disk/by-path`` + - (String) NexentaEdge logical path of directory to store symbolic links to NBDs + * - ``nexenta_nms_cache_volroot`` = ``True`` + - (Boolean) If set True cache NexentaStor appliance volroot option value. + * - ``nexenta_password`` = ``nexenta`` + - (String) Password to connect to Nexenta SA + * - ``nexenta_rest_port`` = ``0`` + - (Integer) HTTP(S) port to connect to Nexenta REST API server. If it is equal zero, 8443 for HTTPS and 8080 for HTTP is used + * - ``nexenta_rest_protocol`` = ``auto`` + - (String) Use http or https for REST connection (default auto) + * - ``nexenta_rrmgr_compression`` = ``0`` + - (Integer) Enable stream compression, level 1..9. 1 - gives best speed; 9 - gives best compression. + * - ``nexenta_rrmgr_connections`` = ``2`` + - (Integer) Number of TCP connections. + * - ``nexenta_rrmgr_tcp_buf_size`` = ``4096`` + - (Integer) TCP Buffer size in KiloBytes. 
+ * - ``nexenta_shares_config`` = ``/etc/cinder/nfs_shares`` + - (String) File with the list of available nfs shares + * - ``nexenta_sparse`` = ``False`` + - (Boolean) Enables or disables the creation of sparse datasets + * - ``nexenta_sparsed_volumes`` = ``True`` + - (Boolean) Enables or disables the creation of volumes as sparsed files that take no space. If disabled (False), volume is created as a regular file, which takes a long time. + * - ``nexenta_target_group_prefix`` = ``cinder/`` + - (String) Prefix for iSCSI target groups on SA + * - ``nexenta_target_prefix`` = ``iqn.1986-03.com.sun:02:cinder-`` + - (String) IQN prefix for iSCSI targets + * - ``nexenta_use_https`` = ``True`` + - (Boolean) Use secure HTTP for REST connection (default True) + * - ``nexenta_user`` = ``admin`` + - (String) User name to connect to Nexenta SA + * - ``nexenta_volume`` = ``cinder`` + - (String) SA Pool that holds all volumes diff --git a/doc/source/config-reference/tables/cinder-nexenta5.rst b/doc/source/config-reference/tables/cinder-nexenta5.rst new file mode 100644 index 00000000000..24cdea38338 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-nexenta5.rst @@ -0,0 +1,50 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-nexenta5: + +.. list-table:: Description of NexentaStor 5 driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``nexenta_dataset_compression`` = ``on`` + - (String) Compression value for new ZFS folders. + * - ``nexenta_dataset_dedup`` = ``off`` + - (String) Deduplication value for new ZFS folders. + * - ``nexenta_dataset_description`` = + - (String) Human-readable description for the folder. + * - ``nexenta_host`` = + - (String) IP address of Nexenta SA + * - ``nexenta_iscsi_target_portal_port`` = ``3260`` + - (Integer) Nexenta target portal port + * - ``nexenta_mount_point_base`` = ``$state_path/mnt`` + - (String) Base directory that contains NFS share mount points + * - ``nexenta_ns5_blocksize`` = ``32`` + - (Integer) Block size for datasets + * - ``nexenta_rest_port`` = ``0`` + - (Integer) HTTP(S) port to connect to Nexenta REST API server. If it is equal zero, 8443 for HTTPS and 8080 for HTTP is used + * - ``nexenta_rest_protocol`` = ``auto`` + - (String) Use http or https for REST connection (default auto) + * - ``nexenta_sparse`` = ``False`` + - (Boolean) Enables or disables the creation of sparse datasets + * - ``nexenta_sparsed_volumes`` = ``True`` + - (Boolean) Enables or disables the creation of volumes as sparsed files that take no space. If disabled (False), volume is created as a regular file, which takes a long time. 
+ * - ``nexenta_use_https`` = ``True`` + - (Boolean) Use secure HTTP for REST connection (default True) + * - ``nexenta_user`` = ``admin`` + - (String) User name to connect to Nexenta SA + * - ``nexenta_volume`` = ``cinder`` + - (String) SA Pool that holds all volumes + * - ``nexenta_volume_group`` = ``iscsi`` + - (String) Volume group for ns5 diff --git a/doc/source/config-reference/tables/cinder-nexenta_edge.rst b/doc/source/config-reference/tables/cinder-nexenta_edge.rst new file mode 100644 index 00000000000..6aeb82799e3 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-nexenta_edge.rst @@ -0,0 +1,42 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-nexenta_edge: + +.. list-table:: Description of NexentaEdge driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``nexenta_blocksize`` = ``4096`` + - (Integer) Block size for datasets + * - ``nexenta_chunksize`` = ``32768`` + - (Integer) NexentaEdge iSCSI LUN object chunk size + * - ``nexenta_client_address`` = + - (String) NexentaEdge iSCSI Gateway client address for non-VIP service + * - ``nexenta_iscsi_service`` = + - (String) NexentaEdge iSCSI service name + * - ``nexenta_iscsi_target_portal_port`` = ``3260`` + - (Integer) Nexenta target portal port + * - ``nexenta_lun_container`` = + - (String) NexentaEdge logical path of bucket for LUNs + * - ``nexenta_rest_address`` = + - (String) IP address of NexentaEdge management REST API endpoint + * - ``nexenta_rest_password`` = ``nexenta`` + - (String) Password to connect to NexentaEdge + * - ``nexenta_rest_port`` = ``0`` + - (Integer) HTTP(S) port to connect to Nexenta REST API server. If it is equal zero, 8443 for HTTPS and 8080 for HTTP is used + * - ``nexenta_rest_protocol`` = ``auto`` + - (String) Use http or https for REST connection (default auto) + * - ``nexenta_rest_user`` = ``admin`` + - (String) User name to connect to NexentaEdge diff --git a/doc/source/config-reference/tables/cinder-nimble.rst b/doc/source/config-reference/tables/cinder-nimble.rst new file mode 100644 index 00000000000..6c97a74450b --- /dev/null +++ b/doc/source/config-reference/tables/cinder-nimble.rst @@ -0,0 +1,28 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-nimble: + +.. 
list-table:: Description of Nimble driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``nimble_pool_name`` = ``default`` + - (String) Nimble Controller pool name + * - ``nimble_subnet_label`` = ``*`` + - (String) Nimble Subnet Label + * - ``nimble_verify_cert_path`` = ``None`` + - (String) Path to Nimble Array SSL certificate + * - ``nimble_verify_certificate`` = ``False`` + - (String) Whether to verify Nimble SSL Certificate diff --git a/doc/source/config-reference/tables/cinder-osbrick.rst b/doc/source/config-reference/tables/cinder-osbrick.rst new file mode 100644 index 00000000000..0f6bd8a09c1 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-osbrick.rst @@ -0,0 +1,28 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-osbrick: + +.. list-table:: Description of os-brick configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[privsep_osbrick]** + - + * - ``capabilities`` = ``[]`` + - (Unknown) List of Linux capabilities retained by the privsep daemon. + * - ``group`` = ``None`` + - (String) Group that the privsep daemon should run as. + * - ``helper_command`` = ``None`` + - (String) Command to invoke to start the privsep daemon if not using the "fork" method. If not specified, a default is generated using "sudo privsep-helper" and arguments designed to recreate the current configuration. This command must accept suitable --privsep_context and --privsep_sock_path arguments. + * - ``user`` = ``None`` + - (String) User that the privsep daemon should run as. diff --git a/doc/source/config-reference/tables/cinder-profiler.rst b/doc/source/config-reference/tables/cinder-profiler.rst new file mode 100644 index 00000000000..05b5965d0a4 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-profiler.rst @@ -0,0 +1,60 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-profiler: + +.. list-table:: Description of profiler configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[profiler]** + - + * - ``connection_string`` = ``messaging://`` + - (String) Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. + + Examples of possible values: + + * messaging://: use oslo_messaging driver for sending notifications. + + * mongodb://127.0.0.1:27017 : use mongodb driver for sending notifications. + + * elasticsearch://127.0.0.1:9200 : use elasticsearch driver for sending notifications. + * - ``enabled`` = ``False`` + - (Boolean) Enables the profiling for all services on this node. 
Default value is False (fully disable the profiling feature). + + Possible values: + + * True: Enables the feature + + * False: Disables the feature. The profiling cannot be started via this project operations. If the profiling is triggered by another project, this project part will be empty. + * - ``es_doc_type`` = ``notification`` + - (String) Document type for notification indexing in elasticsearch. + * - ``es_scroll_size`` = ``10000`` + - (Integer) Elasticsearch splits large requests in batches. This parameter defines maximum size of each batch (for example: es_scroll_size=10000). + * - ``es_scroll_time`` = ``2m`` + - (String) This parameter is a time value parameter (for example: es_scroll_time=2m), indicating for how long the nodes that participate in the search will maintain relevant resources in order to continue and support it. + * - ``hmac_keys`` = ``SECRET_KEY`` + - (String) Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: [,,...], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. + + Both "enabled" flag and "hmac_keys" config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from client side to generate the trace, containing information from all possible resources. + * - ``sentinel_service_name`` = ``mymaster`` + - (String) Redis Sentinel uses a service name to identify a master redis service. This parameter defines the name (for example: sentinel_service_name=mymaster). + * - ``socket_timeout`` = ``0.1`` + - (Floating point) Redis Sentinel provides a timeout option on the connections. This parameter defines that timeout (for example: socket_timeout=0.1). + * - ``trace_sqlalchemy`` = ``False`` + - (Boolean) Enables SQL requests profiling in services. Default value is False (SQL requests won't be traced). + + Possible values: + + * True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed by how much time was spent for that. + + * False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way. diff --git a/doc/source/config-reference/tables/cinder-prophetstor_dpl.rst b/doc/source/config-reference/tables/cinder-prophetstor_dpl.rst new file mode 100644 index 00000000000..02f2903ec9f --- /dev/null +++ b/doc/source/config-reference/tables/cinder-prophetstor_dpl.rst @@ -0,0 +1,34 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-prophetstor_dpl: + +.. list-table:: Description of ProphetStor Fibre Channel and iSCSI drivers configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``dpl_pool`` = + - (String) DPL pool uuid in which DPL volumes are stored.
+ * - ``dpl_port`` = ``8357`` + - (Port number) DPL port number. + * - ``iscsi_port`` = ``3260`` + - (Port number) The port that the iSCSI daemon is listening on + * - ``san_ip`` = + - (String) IP address of SAN controller + * - ``san_login`` = ``admin`` + - (String) Username for SAN controller + * - ``san_password`` = + - (String) Password for SAN controller + * - ``san_thin_provision`` = ``True`` + - (Boolean) Use thin provisioning for SAN volumes? diff --git a/doc/source/config-reference/tables/cinder-pure.rst b/doc/source/config-reference/tables/cinder-pure.rst new file mode 100644 index 00000000000..5b26a99db9c --- /dev/null +++ b/doc/source/config-reference/tables/cinder-pure.rst @@ -0,0 +1,34 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-pure: + +.. list-table:: Description of Pure Storage driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``pure_api_token`` = ``None`` + - (String) REST API authorization token. + * - ``pure_automatic_max_oversubscription_ratio`` = ``True`` + - (Boolean) Automatically determine an oversubscription ratio based on the current total data reduction values. If used this calculated value will override the max_over_subscription_ratio config option. + * - ``pure_eradicate_on_delete`` = ``False`` + - (Boolean) When enabled, all Pure volumes, snapshots, and protection groups will be eradicated at the time of deletion in Cinder. Data will NOT be recoverable after a delete with this set to True! When disabled, volumes and snapshots will go into pending eradication state and can be recovered. + * - ``pure_replica_interval_default`` = ``900`` + - (Integer) Snapshot replication interval in seconds. + * - ``pure_replica_retention_long_term_default`` = ``7`` + - (Integer) Retain snapshots per day on target for this time (in days.) + * - ``pure_replica_retention_long_term_per_day_default`` = ``3`` + - (Integer) Retain how many snapshots for each day. + * - ``pure_replica_retention_short_term_default`` = ``14400`` + - (Integer) Retain all snapshots on target for this time (in seconds.) diff --git a/doc/source/config-reference/tables/cinder-qnap.rst b/doc/source/config-reference/tables/cinder-qnap.rst new file mode 100644 index 00000000000..0ac8b301208 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-qnap.rst @@ -0,0 +1,26 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-qnap: + +.. 
list-table:: Description of QNAP storage volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``qnap_management_url`` = ``None`` + - (URI) The URL to management QNAP Storage + * - ``qnap_poolname`` = ``None`` + - (String) The pool name in the QNAP Storage + * - ``qnap_storage_protocol`` = ``iscsi`` + - (String) Communication protocol to access QNAP storage diff --git a/doc/source/config-reference/tables/cinder-quobyte.rst b/doc/source/config-reference/tables/cinder-quobyte.rst new file mode 100644 index 00000000000..613f739680d --- /dev/null +++ b/doc/source/config-reference/tables/cinder-quobyte.rst @@ -0,0 +1,30 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-quobyte: + +.. list-table:: Description of Quobyte USP volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``quobyte_client_cfg`` = ``None`` + - (String) Path to a Quobyte Client configuration file. + * - ``quobyte_mount_point_base`` = ``$state_path/mnt`` + - (String) Base dir containing the mount point for the Quobyte volume. + * - ``quobyte_qcow2_volumes`` = ``True`` + - (Boolean) Create volumes as QCOW2 files rather than raw files. + * - ``quobyte_sparsed_volumes`` = ``True`` + - (Boolean) Create volumes as sparse files which take no space. If set to False, volume is created as regular file.In such case volume creation takes a lot of time. + * - ``quobyte_volume_url`` = ``None`` + - (URI) URL to the Quobyte volume e.g., quobyte:/// diff --git a/doc/source/config-reference/tables/cinder-quota.rst b/doc/source/config-reference/tables/cinder-quota.rst new file mode 100644 index 00000000000..bcf704b92f4 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-quota.rst @@ -0,0 +1,42 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-quota: + +.. 
list-table:: Description of quota configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``max_age`` = ``0`` + - (Integer) Number of seconds between subsequent usage refreshes + * - ``quota_backup_gigabytes`` = ``1000`` + - (Integer) Total amount of storage, in gigabytes, allowed for backups per project + * - ``quota_backups`` = ``10`` + - (Integer) Number of volume backups allowed per project + * - ``quota_consistencygroups`` = ``10`` + - (Integer) Number of consistencygroups allowed per project + * - ``quota_driver`` = ``cinder.quota.DbQuotaDriver`` + - (String) Default driver to use for quota checks + * - ``quota_gigabytes`` = ``1000`` + - (Integer) Total amount of storage, in gigabytes, allowed for volumes and snapshots per project + * - ``quota_groups`` = ``10`` + - (Integer) Number of groups allowed per project + * - ``quota_snapshots`` = ``10`` + - (Integer) Number of volume snapshots allowed per project + * - ``quota_volumes`` = ``10`` + - (Integer) Number of volumes allowed per project + * - ``reservation_expire`` = ``86400`` + - (Integer) Number of seconds until a reservation expires + * - ``use_default_quota_class`` = ``True`` + - (Boolean) Enables or disables use of default quota class with default quota. diff --git a/doc/source/config-reference/tables/cinder-redis.rst b/doc/source/config-reference/tables/cinder-redis.rst new file mode 100644 index 00000000000..6f2c303a768 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-redis.rst @@ -0,0 +1,36 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-redis: + +.. list-table:: Description of Redis configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[matchmaker_redis]** + - + * - ``check_timeout`` = ``20000`` + - (Integer) Time in ms to wait before the transaction is killed. + * - ``host`` = ``127.0.0.1`` + - (String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url + * - ``password`` = + - (String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url + * - ``port`` = ``6379`` + - (Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url + * - ``sentinel_group_name`` = ``oslo-messaging-zeromq`` + - (String) Redis replica set name. + * - ``sentinel_hosts`` = + - (List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode), e.g., [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url + * - ``socket_timeout`` = ``10000`` + - (Integer) Timeout in ms on blocking socket operations. + * - ``wait_timeout`` = ``2000`` + - (Integer) Time in ms to wait between connection attempts. diff --git a/doc/source/config-reference/tables/cinder-san.rst b/doc/source/config-reference/tables/cinder-san.rst new file mode 100644 index 00000000000..21e7d0566d1 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-san.rst @@ -0,0 +1,42 @@ +.. + Warning: Do not edit this file. 
It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-san: + +.. list-table:: Description of SAN configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``san_clustername`` = + - (String) Cluster name to use for creating volumes + * - ``san_ip`` = + - (String) IP address of SAN controller + * - ``san_is_local`` = ``False`` + - (Boolean) Execute commands locally instead of over SSH; use if the volume service is running on the SAN device + * - ``san_login`` = ``admin`` + - (String) Username for SAN controller + * - ``san_password`` = + - (String) Password for SAN controller + * - ``san_private_key`` = + - (String) Filename of private key to use for SSH authentication + * - ``san_ssh_port`` = ``22`` + - (Port number) SSH port to use with SAN + * - ``san_thin_provision`` = ``True`` + - (Boolean) Use thin provisioning for SAN volumes? + * - ``ssh_conn_timeout`` = ``30`` + - (Integer) SSH connection timeout in seconds + * - ``ssh_max_pool_conn`` = ``5`` + - (Integer) Maximum ssh connections in the pool + * - ``ssh_min_pool_conn`` = ``1`` + - (Integer) Minimum ssh connections in the pool diff --git a/doc/source/config-reference/tables/cinder-scality.rst b/doc/source/config-reference/tables/cinder-scality.rst new file mode 100644 index 00000000000..9234579510f --- /dev/null +++ b/doc/source/config-reference/tables/cinder-scality.rst @@ -0,0 +1,26 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-scality: + +.. list-table:: Description of Scality SOFS volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``scality_sofs_config`` = ``None`` + - (String) Path or URL to Scality SOFS configuration file + * - ``scality_sofs_mount_point`` = ``$state_path/scality`` + - (String) Base dir where Scality SOFS shall be mounted + * - ``scality_sofs_volume_dir`` = ``cinder/volumes`` + - (String) Path from Scality SOFS root to volume dir diff --git a/doc/source/config-reference/tables/cinder-scheduler.rst b/doc/source/config-reference/tables/cinder-scheduler.rst new file mode 100644 index 00000000000..14c24e6f91e --- /dev/null +++ b/doc/source/config-reference/tables/cinder-scheduler.rst @@ -0,0 +1,40 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-scheduler: + +.. 
list-table:: Description of scheduler configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``filter_function`` = ``None`` + - (String) String representation for an equation that will be used to filter hosts. Only used when the driver filter is set to be used by the Cinder scheduler. + * - ``goodness_function`` = ``None`` + - (String) String representation for an equation that will be used to determine the goodness of a host. Only used when using the goodness weigher is set to be used by the Cinder scheduler. + * - ``scheduler_default_filters`` = ``AvailabilityZoneFilter, CapacityFilter, CapabilitiesFilter`` + - (List) Which filter class names to use for filtering hosts when not specified in the request. + * - ``scheduler_default_weighers`` = ``CapacityWeigher`` + - (List) Which weigher class names to use for weighing hosts. + * - ``scheduler_driver`` = ``cinder.scheduler.filter_scheduler.FilterScheduler`` + - (String) Default scheduler driver to use + * - ``scheduler_host_manager`` = ``cinder.scheduler.host_manager.HostManager`` + - (String) The scheduler host manager class to use + * - ``scheduler_json_config_location`` = + - (String) Absolute path to scheduler configuration JSON file. + * - ``scheduler_manager`` = ``cinder.scheduler.manager.SchedulerManager`` + - (String) Full class name for the Manager for scheduler + * - ``scheduler_max_attempts`` = ``3`` + - (Integer) Maximum number of attempts to schedule a volume + * - ``scheduler_weight_handler`` = ``cinder.scheduler.weights.OrderedHostWeightHandler`` + - (String) Which handler to use for selecting the host/pool after weighing diff --git a/doc/source/config-reference/tables/cinder-scst.rst b/doc/source/config-reference/tables/cinder-scst.rst new file mode 100644 index 00000000000..a23d0bacc81 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-scst.rst @@ -0,0 +1,24 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-scst: + +.. list-table:: Description of SCST volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``scst_target_driver`` = ``iscsi`` + - (String) SCST target implementation can choose from multiple SCST target drivers. + * - ``scst_target_iqn_name`` = ``None`` + - (String) Certain ISCSI targets have predefined target names, SCST target driver uses this name. diff --git a/doc/source/config-reference/tables/cinder-sheepdog.rst b/doc/source/config-reference/tables/cinder-sheepdog.rst new file mode 100644 index 00000000000..e4465e96e94 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-sheepdog.rst @@ -0,0 +1,24 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. 
+ + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-sheepdog: + +.. list-table:: Description of Sheepdog driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``sheepdog_store_address`` = ``127.0.0.1`` + - (String) IP address of sheep daemon. + * - ``sheepdog_store_port`` = ``7000`` + - (Port number) Port of sheep daemon. diff --git a/doc/source/config-reference/tables/cinder-smbfs.rst b/doc/source/config-reference/tables/cinder-smbfs.rst new file mode 100644 index 00000000000..00869cd1dd2 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-smbfs.rst @@ -0,0 +1,36 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-smbfs: + +.. list-table:: Description of Samba volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``smbfs_allocation_info_file_path`` = ``$state_path/allocation_data`` + - (String) The path of the automatically generated file containing information about volume disk space allocation. + * - ``smbfs_default_volume_format`` = ``qcow2`` + - (String) Default format that will be used when creating volumes if no volume format is specified. + * - ``smbfs_mount_options`` = ``noperm,file_mode=0775,dir_mode=0775`` + - (String) Mount options passed to the smbfs client. See mount.cifs man page for details. + * - ``smbfs_mount_point_base`` = ``$state_path/mnt`` + - (String) Base dir containing mount points for smbfs shares. + * - ``smbfs_oversub_ratio`` = ``1.0`` + - (Floating point) This will compare the allocated to available space on the volume destination. If the ratio exceeds this number, the destination will no longer be valid. + * - ``smbfs_shares_config`` = ``/etc/cinder/smbfs_shares`` + - (String) File with the list of available smbfs shares. + * - ``smbfs_sparsed_volumes`` = ``True`` + - (Boolean) Create volumes as sparsed files which take no space rather than regular files when using raw format, in which case volume creation takes lot of time. + * - ``smbfs_used_ratio`` = ``0.95`` + - (Floating point) Percent of ACTUAL usage of the underlying volume before no new volumes can be allocated to the volume destination. diff --git a/doc/source/config-reference/tables/cinder-solidfire.rst b/doc/source/config-reference/tables/cinder-solidfire.rst new file mode 100644 index 00000000000..1ca9ef68ec6 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-solidfire.rst @@ -0,0 +1,40 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. 
+ + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-solidfire: + +.. list-table:: Description of SolidFire driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``sf_account_prefix`` = ``None`` + - (String) Create SolidFire accounts with this prefix. Any string can be used here, but the string "hostname" is special and will create a prefix using the cinder node hostname (previous default behavior). The default is NO prefix. + * - ``sf_allow_template_caching`` = ``True`` + - (Boolean) Create an internal cache of copy of images when a bootable volume is created to eliminate fetch from glance and qemu-conversion on subsequent calls. + * - ``sf_allow_tenant_qos`` = ``False`` + - (Boolean) Allow tenants to specify QOS on create + * - ``sf_api_port`` = ``443`` + - (Port number) SolidFire API port. Useful if the device api is behind a proxy on a different port. + * - ``sf_emulate_512`` = ``True`` + - (Boolean) Set 512 byte emulation on volume creation; + * - ``sf_enable_vag`` = ``False`` + - (Boolean) Utilize volume access groups on a per-tenant basis. + * - ``sf_enable_volume_mapping`` = ``True`` + - (Boolean) Create an internal mapping of volume IDs and account. Optimizes lookups and performance at the expense of memory, very large deployments may want to consider setting to False. + * - ``sf_svip`` = ``None`` + - (String) Overrides default cluster SVIP with the one specified. This is required or deployments that have implemented the use of VLANs for iSCSI networks in their cloud. + * - ``sf_template_account_name`` = ``openstack-vtemplate`` + - (String) Account name on the SolidFire Cluster to use as owner of template/cache volumes (created if does not exist). + * - ``sf_volume_prefix`` = ``UUID-`` + - (String) Create SolidFire volumes with this prefix. Volume names are of the form . The default is to use a prefix of 'UUID-'. diff --git a/doc/source/config-reference/tables/cinder-storage.rst b/doc/source/config-reference/tables/cinder-storage.rst new file mode 100644 index 00000000000..edaff6bbdd3 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-storage.rst @@ -0,0 +1,80 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-storage: + +.. list-table:: Description of storage configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``allocated_capacity_weight_multiplier`` = ``-1.0`` + - (Floating point) Multiplier used for weighing allocated capacity. Positive numbers mean to stack vs spread. + * - ``capacity_weight_multiplier`` = ``1.0`` + - (Floating point) Multiplier used for weighing free capacity. Negative numbers mean to stack vs spread. + * - ``enabled_backends`` = ``None`` + - (List) A list of backend names to use. 
These backend names should be backed by a unique [CONFIG] group with its options + * - ``iscsi_helper`` = ``tgtadm`` + - (String) iSCSI target user-land tool to use. tgtadm is default, use lioadm for LIO iSCSI support, scstadmin for SCST target support, ietadm for iSCSI Enterprise Target, iscsictl for Chelsio iSCSI Target or fake for testing. + * - ``iscsi_iotype`` = ``fileio`` + - (String) Sets the behavior of the iSCSI target to either perform blockio or fileio optionally, auto can be set and Cinder will autodetect type of backing device + * - ``iscsi_ip_address`` = ``$my_ip`` + - (String) The IP address that the iSCSI daemon is listening on + * - ``iscsi_port`` = ``3260`` + - (Port number) The port that the iSCSI daemon is listening on + * - ``iscsi_protocol`` = ``iscsi`` + - (String) Determines the iSCSI protocol for new iSCSI volumes, created with tgtadm or lioadm target helpers. In order to enable RDMA, this parameter should be set with the value "iser". The supported iSCSI protocol values are "iscsi" and "iser". + * - ``iscsi_target_flags`` = + - (String) Sets the target-specific flags for the iSCSI target. Only used for tgtadm to specify backing device flags using bsoflags option. The specified string is passed as is to the underlying tool. + * - ``iscsi_target_prefix`` = ``iqn.2010-10.org.openstack:`` + - (String) Prefix for iSCSI volumes + * - ``iscsi_write_cache`` = ``on`` + - (String) Sets the behavior of the iSCSI target to either perform write-back(on) or write-through(off). This parameter is valid if iscsi_helper is set to tgtadm. + * - ``iser_helper`` = ``tgtadm`` + - (String) The name of the iSER target user-land tool to use + * - ``iser_ip_address`` = ``$my_ip`` + - (String) The IP address that the iSER daemon is listening on + * - ``iser_port`` = ``3260`` + - (Port number) The port that the iSER daemon is listening on + * - ``iser_target_prefix`` = ``iqn.2010-10.org.openstack:`` + - (String) Prefix for iSER volumes + * - ``migration_create_volume_timeout_secs`` = ``300`` + - (Integer) Timeout for creating the volume to migrate to when performing volume migration (seconds) + * - ``num_iser_scan_tries`` = ``3`` + - (Integer) The maximum number of times to rescan iSER target to find volume + * - ``num_volume_device_scan_tries`` = ``3`` + - (Integer) The maximum number of times to rescan targets to find volume + * - ``volume_backend_name`` = ``None`` + - (String) The backend name for a given driver implementation + * - ``volume_clear`` = ``zero`` + - (String) Method used to wipe old volumes + * - ``volume_clear_ionice`` = ``None`` + - (String) The flag to pass to ionice to alter the i/o priority of the process used to zero a volume after deletion, for example "-c3" for idle only priority. + * - ``volume_clear_size`` = ``0`` + - (Integer) Size in MiB to wipe at start of old volumes. 1024 MiB at max. 0 => all + * - ``volume_copy_blkio_cgroup_name`` = ``cinder-volume-copy`` + - (String) The blkio cgroup name to be used to limit bandwidth of volume copy + * - ``volume_copy_bps_limit`` = ``0`` + - (Integer) The upper limit of bandwidth of volume copy.
0 => unlimited + * - ``volume_dd_blocksize`` = ``1M`` + - (String) The default block size used when copying/clearing volumes + * - ``volume_driver`` = ``cinder.volume.drivers.lvm.LVMVolumeDriver`` + - (String) Driver to use for volume creation + * - ``volume_manager`` = ``cinder.volume.manager.VolumeManager`` + - (String) Full class name for the Manager for volume + * - ``volume_service_inithost_offload`` = ``False`` + - (Boolean) Offload pending volume delete during volume service startup + * - ``volume_usage_audit_period`` = ``month`` + - (String) Time period for which to generate volume usages. The options are hour, day, month, or year. + * - ``volumes_dir`` = ``$state_path/volumes`` + - (String) Volume configuration file storage directory diff --git a/doc/source/config-reference/tables/cinder-storage_ceph.rst b/doc/source/config-reference/tables/cinder-storage_ceph.rst new file mode 100644 index 00000000000..15529cbce18 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-storage_ceph.rst @@ -0,0 +1,44 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-storage_ceph: + +.. list-table:: Description of Ceph storage configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``rados_connect_timeout`` = ``-1`` + - (Integer) Timeout value (in seconds) used when connecting to ceph cluster. If value < 0, no timeout is set and default librados value is used. + * - ``rados_connection_interval`` = ``5`` + - (Integer) Interval value (in seconds) between connection retries to ceph cluster. + * - ``rados_connection_retries`` = ``3`` + - (Integer) Number of retries if connection to ceph cluster failed. + * - ``rbd_ceph_conf`` = + - (String) Path to the ceph configuration file + * - ``rbd_cluster_name`` = ``ceph`` + - (String) The name of ceph cluster + * - ``rbd_flatten_volume_from_snapshot`` = ``False`` + - (Boolean) Flatten volumes created from snapshots to remove dependency from volume to snapshot + * - ``rbd_max_clone_depth`` = ``5`` + - (Integer) Maximum number of nested volume clones that are taken before a flatten occurs. Set to 0 to disable cloning. + * - ``rbd_pool`` = ``rbd`` + - (String) The RADOS pool where rbd volumes are stored + * - ``rbd_secret_uuid`` = ``None`` + - (String) The libvirt uuid of the secret for the rbd_user volumes + * - ``rbd_store_chunk_size`` = ``4`` + - (Integer) Volumes will be chunked into objects of this size (in megabytes). + * - ``rbd_user`` = ``None`` + - (String) The RADOS client name for accessing rbd volumes - only set when using cephx authentication + * - ``replication_connect_timeout`` = ``5`` + - (Integer) Timeout value (in seconds) used when connecting to ceph cluster to do a demotion/promotion of volumes. If value < 0, no timeout is set and default librados value is used. diff --git a/doc/source/config-reference/tables/cinder-storage_gpfs.rst b/doc/source/config-reference/tables/cinder-storage_gpfs.rst new file mode 100644 index 00000000000..838266de66e --- /dev/null +++ b/doc/source/config-reference/tables/cinder-storage_gpfs.rst @@ -0,0 +1,42 @@ +.. 
+ Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-storage_gpfs: + +.. list-table:: Description of GPFS storage configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``gpfs_images_dir`` = ``None`` + - (String) Specifies the path of the Image service repository in GPFS. Leave undefined if not storing images in GPFS. + * - ``gpfs_images_share_mode`` = ``None`` + - (String) Specifies the type of image copy to be used. Set this when the Image service repository also uses GPFS so that image files can be transferred efficiently from the Image service to the Block Storage service. There are two valid values: "copy" specifies that a full copy of the image is made; "copy_on_write" specifies that copy-on-write optimization strategy is used and unmodified blocks of the image file are shared efficiently. + * - ``gpfs_max_clone_depth`` = ``0`` + - (Integer) Specifies an upper limit on the number of indirections required to reach a specific block due to snapshots or clones. A lengthy chain of copy-on-write snapshots or clones can have a negative impact on performance, but improves space utilization. 0 indicates unlimited clone depth. + * - ``gpfs_mount_point_base`` = ``None`` + - (String) Specifies the path of the GPFS directory where Block Storage volume and snapshot files are stored. + * - ``gpfs_sparse_volumes`` = ``True`` + - (Boolean) Specifies that volumes are created as sparse files which initially consume no space. If set to False, the volume is created as a fully allocated file, in which case, creation may take a significantly longer time. + * - ``gpfs_storage_pool`` = ``system`` + - (String) Specifies the storage pool that volumes are assigned to. By default, the system storage pool is used. + * - ``nas_host`` = + - (String) IP address or Hostname of NAS system. + * - ``nas_login`` = ``admin`` + - (String) User name to connect to NAS system. + * - ``nas_password`` = + - (String) Password to connect to NAS system. + * - ``nas_private_key`` = + - (String) Filename of private key to use for SSH authentication. + * - ``nas_ssh_port`` = ``22`` + - (Port number) SSH port to use to connect to NAS system. diff --git a/doc/source/config-reference/tables/cinder-storage_nfs.rst b/doc/source/config-reference/tables/cinder-storage_nfs.rst new file mode 100644 index 00000000000..4f9597a67ed --- /dev/null +++ b/doc/source/config-reference/tables/cinder-storage_nfs.rst @@ -0,0 +1,34 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-storage_nfs: + +.. 
list-table:: Description of NFS storage configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``nfs_mount_attempts`` = ``3`` + - (Integer) The number of attempts to mount NFS shares before raising an error. At least one attempt will be made to mount an NFS share, regardless of the value specified. + * - ``nfs_mount_options`` = ``None`` + - (String) Mount options passed to the NFS client. See section of the NFS man page for details. + * - ``nfs_mount_point_base`` = ``$state_path/mnt`` + - (String) Base dir containing mount points for NFS shares. + * - ``nfs_qcow2_volumes`` = ``False`` + - (Boolean) Create volumes as QCOW2 files rather than raw files. + * - ``nfs_shares_config`` = ``/etc/cinder/nfs_shares`` + - (String) File with the list of available NFS shares. + * - ``nfs_snapshot_support`` = ``False`` + - (Boolean) Enable support for snapshots on the NFS driver. Platforms using libvirt <1.2.7 will encounter issues with this feature. + * - ``nfs_sparsed_volumes`` = ``True`` + - (Boolean) Create volumes as sparsed files which take no space. If set to False volume is created as regular file. In such case volume creation takes a lot of time. diff --git a/doc/source/config-reference/tables/cinder-storwize.rst b/doc/source/config-reference/tables/cinder-storwize.rst new file mode 100644 index 00000000000..64c2b34519a --- /dev/null +++ b/doc/source/config-reference/tables/cinder-storwize.rst @@ -0,0 +1,64 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-storwize: + +.. list-table:: Description of IBM Storwize driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``san_ip`` = + - (String) IP address of SAN controller + * - ``san_login`` = ``admin`` + - (String) Username for SAN controller + * - ``san_password`` = + - (String) Password for SAN controller + * - ``san_private_key`` = + - (String) Filename of private key to use for SSH authentication + * - ``san_ssh_port`` = ``22`` + - (Port number) SSH port to use with SAN + * - ``storwize_san_secondary_ip`` = ``None`` + - (String) Specifies secondary management IP or hostname to be used if san_ip is invalid or becomes inaccessible. + * - ``storwize_svc_allow_tenant_qos`` = ``False`` + - (Boolean) Allow tenants to specify QOS on create + * - ``storwize_svc_flashcopy_rate`` = ``50`` + - (Integer) Specifies the Storwize FlashCopy copy rate to be used when creating a full volume copy. The default rate is 50, and the valid rates are 1-100. + * - ``storwize_svc_flashcopy_timeout`` = ``120`` + - (Integer) Maximum number of seconds to wait for FlashCopy to be prepared. + * - ``storwize_svc_iscsi_chap_enabled`` = ``True`` + - (Boolean) Configure CHAP authentication for iSCSI connections (Default: Enabled) + * - ``storwize_svc_multihostmap_enabled`` = ``True`` + - (Boolean) DEPRECATED: This option no longer has any effect. It is deprecated and will be removed in the next release.
+ * - ``storwize_svc_multipath_enabled`` = ``False`` + - (Boolean) Connect with multipath (FC only; iSCSI multipath is controlled by Nova) + * - ``storwize_svc_stretched_cluster_partner`` = ``None`` + - (String) If operating in stretched cluster mode, specify the name of the pool in which mirrored copies are stored.Example: "pool2" + * - ``storwize_svc_vol_autoexpand`` = ``True`` + - (Boolean) Storage system autoexpand parameter for volumes (True/False) + * - ``storwize_svc_vol_compression`` = ``False`` + - (Boolean) Storage system compression option for volumes + * - ``storwize_svc_vol_easytier`` = ``True`` + - (Boolean) Enable Easy Tier for volumes + * - ``storwize_svc_vol_grainsize`` = ``256`` + - (Integer) Storage system grain size parameter for volumes (32/64/128/256) + * - ``storwize_svc_vol_iogrp`` = ``0`` + - (Integer) The I/O group in which to allocate volumes + * - ``storwize_svc_vol_nofmtdisk`` = ``False`` + - (Boolean) Specifies that the volume not be formatted during creation. + * - ``storwize_svc_vol_rsize`` = ``2`` + - (Integer) Storage system space-efficiency parameter for volumes (percentage) + * - ``storwize_svc_vol_warning`` = ``0`` + - (Integer) Storage system threshold for volume capacity warnings (percentage) + * - ``storwize_svc_volpool_name`` = ``volpool`` + - (List) Comma separated list of storage system storage pools for volumes. diff --git a/doc/source/config-reference/tables/cinder-swift.rst b/doc/source/config-reference/tables/cinder-swift.rst new file mode 100644 index 00000000000..005409f8d61 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-swift.rst @@ -0,0 +1,24 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-swift: + +.. list-table:: Description of swift configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``backup_swift_auth_insecure`` = ``False`` + - (Boolean) Bypass verification of server certificate when making SSL connection to Swift. + * - ``backup_swift_auth_url`` = ``None`` + - (URI) The URL of the Keystone endpoint diff --git a/doc/source/config-reference/tables/cinder-synology.rst b/doc/source/config-reference/tables/cinder-synology.rst new file mode 100644 index 00000000000..04925f6e3fc --- /dev/null +++ b/doc/source/config-reference/tables/cinder-synology.rst @@ -0,0 +1,34 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-synology: + +.. list-table:: Description of Synology volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``synology_admin_port`` = ``5000`` + - (Port number) Management port for Synology storage. 
+ * - ``synology_device_id`` = ``None`` + - (String) Device id for skip one time password check for logging in Synology storage if OTP is enabled. + * - ``synology_one_time_pass`` = ``None`` + - (String) One time password of administrator for logging in Synology storage if OTP is enabled. + * - ``synology_password`` = + - (String) Password of administrator for logging in Synology storage. + * - ``synology_pool_name`` = + - (String) Volume on Synology storage to be used for creating lun. + * - ``synology_ssl_verify`` = ``True`` + - (Boolean) Do certificate validation or not if $driver_use_ssl is True + * - ``synology_username`` = ``admin`` + - (String) Administrator of Synology storage. diff --git a/doc/source/config-reference/tables/cinder-tegile.rst b/doc/source/config-reference/tables/cinder-tegile.rst new file mode 100644 index 00000000000..a98feb90c57 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-tegile.rst @@ -0,0 +1,24 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-tegile: + +.. list-table:: Description of Tegile volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``tegile_default_pool`` = ``None`` + - (String) Create volumes in this pool + * - ``tegile_default_project`` = ``None`` + - (String) Create volumes in this project diff --git a/doc/source/config-reference/tables/cinder-tintri.rst b/doc/source/config-reference/tables/cinder-tintri.rst new file mode 100644 index 00000000000..19666b204b4 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-tintri.rst @@ -0,0 +1,32 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-tintri: + +.. 
list-table:: Description of Tintri volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``tintri_api_version`` = ``v310`` + - (String) API version for the storage system + * - ``tintri_image_cache_expiry_days`` = ``30`` + - (Integer) Delete unused image snapshots older than mentioned days + * - ``tintri_image_shares_config`` = ``None`` + - (String) Path to image nfs shares file + * - ``tintri_server_hostname`` = ``None`` + - (String) The hostname (or IP address) for the storage system + * - ``tintri_server_password`` = ``None`` + - (String) Password for the storage system + * - ``tintri_server_username`` = ``None`` + - (String) User name for the storage system diff --git a/doc/source/config-reference/tables/cinder-violin.rst b/doc/source/config-reference/tables/cinder-violin.rst new file mode 100644 index 00000000000..703856e6fdb --- /dev/null +++ b/doc/source/config-reference/tables/cinder-violin.rst @@ -0,0 +1,30 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-violin: + +.. list-table:: Description of Violin volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``violin_dedup_capable_pools`` = + - (List) Storage pools capable of dedup and other luns.(Comma separated list) + * - ``violin_dedup_only_pools`` = + - (List) Storage pools to be used to setup dedup luns only.(Comma separated list) + * - ``violin_iscsi_target_ips`` = + - (List) Target iSCSI addresses to use.(Comma separated list) + * - ``violin_pool_allocation_method`` = ``random`` + - (String) Method of choosing a storage pool for a lun. + * - ``violin_request_timeout`` = ``300`` + - (Integer) Global backend request timeout, in seconds. diff --git a/doc/source/config-reference/tables/cinder-vmware.rst b/doc/source/config-reference/tables/cinder-vmware.rst new file mode 100644 index 00000000000..af041a9ec73 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-vmware.rst @@ -0,0 +1,52 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-vmware: + +.. list-table:: Description of VMware configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``vmware_api_retry_count`` = ``10`` + - (Integer) Number of times VMware vCenter server API must be retried upon connection related issues. + * - ``vmware_ca_file`` = ``None`` + - (String) CA bundle file to use in verifying the vCenter server certificate. + * - ``vmware_cluster_name`` = ``None`` + - (Multi-valued) Name of a vCenter compute cluster where volumes should be created. 
+ * - ``vmware_connection_pool_size`` = ``10`` + - (Integer) Maximum number of connections in http connection pool. + * - ``vmware_host_ip`` = ``None`` + - (String) IP address for connecting to VMware vCenter server. + * - ``vmware_host_password`` = ``None`` + - (String) Password for authenticating with VMware vCenter server. + * - ``vmware_host_port`` = ``443`` + - (Port number) Port number for connecting to VMware vCenter server. + * - ``vmware_host_username`` = ``None`` + - (String) Username for authenticating with VMware vCenter server. + * - ``vmware_host_version`` = ``None`` + - (String) Optional string specifying the VMware vCenter server version. The driver attempts to retrieve the version from VMware vCenter server. Set this configuration only if you want to override the vCenter server version. + * - ``vmware_image_transfer_timeout_secs`` = ``7200`` + - (Integer) Timeout in seconds for VMDK volume transfer between Cinder and Glance. + * - ``vmware_insecure`` = ``False`` + - (Boolean) If true, the vCenter server certificate is not verified. If false, then the default CA truststore is used for verification. This option is ignored if "vmware_ca_file" is set. + * - ``vmware_max_objects_retrieval`` = ``100`` + - (Integer) Max number of objects to be retrieved per batch. Query results will be obtained in batches from the server and not in one shot. Server may still limit the count to something less than the configured value. + * - ``vmware_task_poll_interval`` = ``2.0`` + - (Floating point) The interval (in seconds) for polling remote tasks invoked on VMware vCenter server. + * - ``vmware_tmp_dir`` = ``/tmp`` + - (String) Directory where virtual disks are stored during volume backup and restore. + * - ``vmware_volume_folder`` = ``Volumes`` + - (String) Name of the vCenter inventory folder that will contain Cinder volumes. This folder will be created under "OpenStack/", where project_folder is of format "Project ()". + * - ``vmware_wsdl_location`` = ``None`` + - (String) Optional VIM service WSDL Location e.g http:///vimService.wsdl. Optional over-ride to default location for bug work-arounds. diff --git a/doc/source/config-reference/tables/cinder-vzstorage.rst b/doc/source/config-reference/tables/cinder-vzstorage.rst new file mode 100644 index 00000000000..ee11525e0e9 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-vzstorage.rst @@ -0,0 +1,32 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-vzstorage: + +.. list-table:: Description of Virtuozzo Storage volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``vzstorage_default_volume_format`` = ``raw`` + - (String) Default format that will be used when creating volumes if no volume format is specified. + * - ``vzstorage_mount_options`` = ``None`` + - (List) Mount options passed to the vzstorage client. See section of the pstorage-mount man page for details. + * - ``vzstorage_mount_point_base`` = ``$state_path/mnt`` + - (String) Base dir containing mount points for vzstorage shares. 
+ * - ``vzstorage_shares_config`` = ``/etc/cinder/vzstorage_shares`` + - (String) File with the list of available vzstorage shares. + * - ``vzstorage_sparsed_volumes`` = ``True`` + - (Boolean) Create volumes as sparsed files which take no space rather than regular files when using raw format, in which case volume creation takes lot of time. + * - ``vzstorage_used_ratio`` = ``0.95`` + - (Floating point) Percent of ACTUAL usage of the underlying volume before no new volumes can be allocated to the volume destination. diff --git a/doc/source/config-reference/tables/cinder-windows.rst b/doc/source/config-reference/tables/cinder-windows.rst new file mode 100644 index 00000000000..a263e3e3398 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-windows.rst @@ -0,0 +1,22 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-windows: + +.. list-table:: Description of Windows configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``windows_iscsi_lun_path`` = ``C:\iSCSIVirtualDisks`` + - (String) Path to store VHD backed volumes diff --git a/doc/source/config-reference/tables/cinder-xio.rst b/doc/source/config-reference/tables/cinder-xio.rst new file mode 100644 index 00000000000..0efaaf2323b --- /dev/null +++ b/doc/source/config-reference/tables/cinder-xio.rst @@ -0,0 +1,32 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-xio: + +.. list-table:: Description of X-IO volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``driver_use_ssl`` = ``False`` + - (Boolean) Tell driver to use SSL for connection to backend storage if the driver supports it. + * - ``ise_completion_retries`` = ``30`` + - (Integer) Number on retries to get completion status after issuing a command to ISE. + * - ``ise_connection_retries`` = ``5`` + - (Integer) Number of retries (per port) when establishing connection to ISE management port. + * - ``ise_raid`` = ``1`` + - (Integer) Raid level for ISE volumes. + * - ``ise_retry_interval`` = ``1`` + - (Integer) Interval (secs) between retries. + * - ``ise_storage_pool`` = ``1`` + - (Integer) Default storage pool for volumes. diff --git a/doc/source/config-reference/tables/cinder-zadara.rst b/doc/source/config-reference/tables/cinder-zadara.rst new file mode 100644 index 00000000000..23edf4f1b0a --- /dev/null +++ b/doc/source/config-reference/tables/cinder-zadara.rst @@ -0,0 +1,40 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. 
+ + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-zadara: + +.. list-table:: Description of Zadara Storage driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``zadara_default_snap_policy`` = ``False`` + - (Boolean) VPSA - Attach snapshot policy for volumes + * - ``zadara_password`` = ``None`` + - (String) VPSA - Password + * - ``zadara_use_iser`` = ``True`` + - (Boolean) VPSA - Use ISER instead of iSCSI + * - ``zadara_user`` = ``None`` + - (String) VPSA - Username + * - ``zadara_vol_encrypt`` = ``False`` + - (Boolean) VPSA - Default encryption policy for volumes + * - ``zadara_vol_name_template`` = ``OS_%s`` + - (String) VPSA - Default template for VPSA volume names + * - ``zadara_vpsa_host`` = ``None`` + - (String) VPSA - Management Host name or IP address + * - ``zadara_vpsa_poolname`` = ``None`` + - (String) VPSA - Storage Pool assigned for volumes + * - ``zadara_vpsa_port`` = ``None`` + - (Port number) VPSA - Port number + * - ``zadara_vpsa_use_ssl`` = ``False`` + - (Boolean) VPSA - Use SSL connection diff --git a/doc/source/config-reference/tables/cinder-zfssa-iscsi.rst b/doc/source/config-reference/tables/cinder-zfssa-iscsi.rst new file mode 100644 index 00000000000..a6b81d915b8 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-zfssa-iscsi.rst @@ -0,0 +1,56 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-zfssa-iscsi: + +.. list-table:: Description of ZFS Storage Appliance iSCSI driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``zfssa_initiator`` = + - (String) iSCSI initiator IQNs. (comma separated) + * - ``zfssa_initiator_config`` = + - (String) iSCSI initiators configuration. + * - ``zfssa_initiator_group`` = + - (String) iSCSI initiator group. + * - ``zfssa_initiator_password`` = + - (String) Secret of the iSCSI initiator CHAP user. + * - ``zfssa_initiator_user`` = + - (String) iSCSI initiator CHAP user (name). + * - ``zfssa_lun_compression`` = ``off`` + - (String) Data compression. + * - ``zfssa_lun_logbias`` = ``latency`` + - (String) Synchronous write bias. + * - ``zfssa_lun_sparse`` = ``False`` + - (Boolean) Flag to enable sparse (thin-provisioned): True, False. + * - ``zfssa_lun_volblocksize`` = ``8k`` + - (String) Block size. + * - ``zfssa_pool`` = ``None`` + - (String) Storage pool name. + * - ``zfssa_project`` = ``None`` + - (String) Project name. + * - ``zfssa_replication_ip`` = + - (String) IP address used for replication data. (maybe the same as data ip) + * - ``zfssa_rest_timeout`` = ``None`` + - (Integer) REST connection timeout. (seconds) + * - ``zfssa_target_group`` = ``tgt-grp`` + - (String) iSCSI target group name. 
+ * - ``zfssa_target_interfaces`` = ``None`` + - (String) Network interfaces of iSCSI targets. (comma separated) + * - ``zfssa_target_password`` = + - (String) Secret of the iSCSI target CHAP user. + * - ``zfssa_target_portal`` = ``None`` + - (String) iSCSI target portal (Data-IP:Port, w.x.y.z:3260). + * - ``zfssa_target_user`` = + - (String) iSCSI target CHAP user (name). diff --git a/doc/source/config-reference/tables/cinder-zfssa-nfs.rst b/doc/source/config-reference/tables/cinder-zfssa-nfs.rst new file mode 100644 index 00000000000..623e514ac42 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-zfssa-nfs.rst @@ -0,0 +1,46 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-zfssa-nfs: + +.. list-table:: Description of ZFS Storage Appliance NFS driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``zfssa_cache_directory`` = ``os-cinder-cache`` + - (String) Name of directory inside zfssa_nfs_share where cache volumes are stored. + * - ``zfssa_cache_project`` = ``os-cinder-cache`` + - (String) Name of ZFSSA project where cache volumes are stored. + * - ``zfssa_data_ip`` = ``None`` + - (String) Data path IP address + * - ``zfssa_enable_local_cache`` = ``True`` + - (Boolean) Flag to enable local caching: True, False. + * - ``zfssa_https_port`` = ``443`` + - (String) HTTPS port number + * - ``zfssa_manage_policy`` = ``loose`` + - (String) Driver policy for volume manage. + * - ``zfssa_nfs_mount_options`` = + - (String) Options to be passed while mounting share over nfs + * - ``zfssa_nfs_pool`` = + - (String) Storage pool name. + * - ``zfssa_nfs_project`` = ``NFSProject`` + - (String) Project name. + * - ``zfssa_nfs_share`` = ``nfs_share`` + - (String) Share name. + * - ``zfssa_nfs_share_compression`` = ``off`` + - (String) Data compression. + * - ``zfssa_nfs_share_logbias`` = ``latency`` + - (String) Synchronous write bias-latency, throughput. + * - ``zfssa_rest_timeout`` = ``None`` + - (Integer) REST connection timeout. (seconds) diff --git a/doc/source/config-reference/tables/cinder-zones.rst b/doc/source/config-reference/tables/cinder-zones.rst new file mode 100644 index 00000000000..672af8d33b2 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-zones.rst @@ -0,0 +1,22 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-zones: + +.. 
list-table:: Description of zones configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``cloned_volume_same_az`` = ``True`` + - (Boolean) Ensure that the new volumes are the same AZ as snapshot or source volume diff --git a/doc/source/config-reference/tables/cinder-zoning.rst b/doc/source/config-reference/tables/cinder-zoning.rst new file mode 100644 index 00000000000..b9d4520bf4d --- /dev/null +++ b/doc/source/config-reference/tables/cinder-zoning.rst @@ -0,0 +1,34 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-zoning: + +.. list-table:: Description of zoning configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``zoning_mode`` = ``None`` + - (String) FC Zoning mode configured + * - **[fc-zone-manager]** + - + * - ``enable_unsupported_driver`` = ``False`` + - (Boolean) Set this to True when you want to allow an unsupported zone manager driver to start. Drivers that haven't maintained a working CI system and testing are marked as unsupported until CI is working again. This also marks a driver as deprecated and may be removed in the next release. + * - ``fc_fabric_names`` = ``None`` + - (String) Comma separated list of Fibre Channel fabric names. This list of names is used to retrieve other SAN credentials for connecting to each SAN fabric + * - ``fc_san_lookup_service`` = ``cinder.zonemanager.drivers.brocade.brcd_fc_san_lookup_service.BrcdFCSanLookupService`` + - (String) FC SAN Lookup Service + * - ``zone_driver`` = ``cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver`` + - (String) FC Zone Driver responsible for zone management + * - ``zoning_policy`` = ``initiator-target`` + - (String) Zoning policy configured by user; valid values include "initiator-target" or "initiator" diff --git a/doc/source/config-reference/tables/cinder-zoning_fabric_brcd.rst b/doc/source/config-reference/tables/cinder-zoning_fabric_brcd.rst new file mode 100644 index 00000000000..79b46d8d257 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-zoning_fabric_brcd.rst @@ -0,0 +1,42 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-zoning_fabric_brcd: + +.. list-table:: Description of brocade zoning fabrics configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[BRCD_FABRIC_EXAMPLE]** + - + * - ``fc_fabric_address`` = + - (String) Management IP of fabric. + * - ``fc_fabric_password`` = + - (String) Password for user. 
+ * - ``fc_fabric_port`` = ``22`` + - (Port number) Connecting port + * - ``fc_fabric_ssh_cert_path`` = + - (String) Local SSH certificate Path. + * - ``fc_fabric_user`` = + - (String) Fabric user ID. + * - ``fc_southbound_protocol`` = ``HTTP`` + - (String) South bound connector for the fabric. + * - ``fc_virtual_fabric_id`` = ``None`` + - (String) Virtual Fabric ID. + * - ``principal_switch_wwn`` = ``None`` + - (String) DEPRECATED: Principal switch WWN of the fabric. This option is not used anymore. + * - ``zone_activate`` = ``True`` + - (Boolean) Overridden zoning activation state. + * - ``zone_name_prefix`` = ``openstack`` + - (String) Overridden zone name prefix. + * - ``zoning_policy`` = ``initiator-target`` + - (String) Overridden zoning policy. diff --git a/doc/source/config-reference/tables/cinder-zoning_fabric_cisco.rst b/doc/source/config-reference/tables/cinder-zoning_fabric_cisco.rst new file mode 100644 index 00000000000..911f66d5ca2 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-zoning_fabric_cisco.rst @@ -0,0 +1,36 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-zoning_fabric_cisco: + +.. list-table:: Description of cisco zoning fabrics configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[CISCO_FABRIC_EXAMPLE]** + - + * - ``cisco_fc_fabric_address`` = + - (String) Management IP of fabric + * - ``cisco_fc_fabric_password`` = + - (String) Password for user + * - ``cisco_fc_fabric_port`` = ``22`` + - (Port number) Connecting port + * - ``cisco_fc_fabric_user`` = + - (String) Fabric user ID + * - ``cisco_zone_activate`` = ``True`` + - (Boolean) overridden zoning activation state + * - ``cisco_zone_name_prefix`` = ``None`` + - (String) overridden zone name prefix + * - ``cisco_zoning_policy`` = ``initiator-target`` + - (String) overridden zoning policy + * - ``cisco_zoning_vsan`` = ``None`` + - (String) VSAN of the Fabric diff --git a/doc/source/config-reference/tables/cinder-zoning_manager_brcd.rst b/doc/source/config-reference/tables/cinder-zoning_manager_brcd.rst new file mode 100644 index 00000000000..22a883ebd4f --- /dev/null +++ b/doc/source/config-reference/tables/cinder-zoning_manager_brcd.rst @@ -0,0 +1,22 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-zoning_manager_brcd: + +.. 
list-table:: Description of brocade zoning manager configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[fc-zone-manager]** + - + * - ``brcd_sb_connector`` = ``HTTP`` + - (String) South bound connector for zoning operation diff --git a/doc/source/config-reference/tables/cinder-zoning_manager_cisco.rst b/doc/source/config-reference/tables/cinder-zoning_manager_cisco.rst new file mode 100644 index 00000000000..fbf0324dca5 --- /dev/null +++ b/doc/source/config-reference/tables/cinder-zoning_manager_cisco.rst @@ -0,0 +1,22 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-zoning_manager_cisco: + +.. list-table:: Description of cisco zoning manager configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[fc-zone-manager]** + - + * - ``cisco_sb_connector`` = ``cinder.zonemanager.drivers.cisco.cisco_fc_zone_client_cli.CiscoFCZoneClientCLI`` + - (String) Southbound connector for zoning operation diff --git a/doc/source/config-reference/tables/cinder-zte.rst b/doc/source/config-reference/tables/cinder-zte.rst new file mode 100644 index 00000000000..fa7f0d75ddc --- /dev/null +++ b/doc/source/config-reference/tables/cinder-zte.rst @@ -0,0 +1,52 @@ +.. + Warning: Do not edit this file. It is automatically generated from the + software project's code and your changes will be overwritten. + + The tool to generate this file lives in openstack-doc-tools repository. + + Please make any changes needed in the code, then run the + autogenerate-config-doc tool from the openstack-doc-tools repository, or + ask for help on the documentation mailing list, IRC channel or meeting. + +.. _cinder-zte: + +.. list-table:: Description of Zte volume driver configuration options + :header-rows: 1 + :class: config-ref-table + + * - Configuration option = Default value + - Description + * - **[DEFAULT]** + - + * - ``zteAheadReadSize`` = ``8`` + - (Integer) Cache readahead size. + * - ``zteCachePolicy`` = ``1`` + - (Integer) Cache policy. 0, Write Back; 1, Write Through. + * - ``zteChunkSize`` = ``4`` + - (Integer) Virtual block size of pool. Unit : KB. Valid value : 4, 8, 16, 32, 64, 128, 256, 512. + * - ``zteControllerIP0`` = ``None`` + - (IP) Main controller IP. + * - ``zteControllerIP1`` = ``None`` + - (IP) Slave controller IP. + * - ``zteLocalIP`` = ``None`` + - (IP) Local IP. + * - ``ztePoolVoAllocatedPolicy`` = ``0`` + - (Integer) Pool volume allocated policy. 0, Auto; 1, High Performance Tier First; 2, Performance Tier First; 3, Capacity Tier First. + * - ``ztePoolVolAlarmStopAllocatedFlag`` = ``0`` + - (Integer) Pool volume alarm stop allocated flag. + * - ``ztePoolVolAlarmThreshold`` = ``0`` + - (Integer) Pool volume alarm threshold. [0, 100] + * - ``ztePoolVolInitAllocatedCapacity`` = ``0`` + - (Integer) Pool volume init allocated Capacity.Unit : KB. + * - ``ztePoolVolIsThin`` = ``False`` + - (Integer) Whether it is a thin volume. 
+ * - ``ztePoolVolMovePolicy`` = ``0`` + - (Integer) Pool volume move policy.0, Auto; 1, Highest Available; 2, Lowest Available; 3, No Relocation. + * - ``zteSSDCacheSwitch`` = ``1`` + - (Integer) SSD cache switch. 0, OFF; 1, ON. + * - ``zteStoragePool`` = + - (List) Pool name list. + * - ``zteUserName`` = ``None`` + - (String) User name. + * - ``zteUserPassword`` = ``None`` + - (String) User password. diff --git a/doc/source/config-reference/tables/manual/cinder-netapp_cdot_extraspecs.rst b/doc/source/config-reference/tables/manual/cinder-netapp_cdot_extraspecs.rst new file mode 100644 index 00000000000..dbe600c03d3 --- /dev/null +++ b/doc/source/config-reference/tables/manual/cinder-netapp_cdot_extraspecs.rst @@ -0,0 +1,68 @@ +.. list-table:: Description of extra specs options for NetApp Unified Driver with Clustered Data ONTAP + :header-rows: 1 + + * - Extra spec + - Type + - Description + * - ``netapp_raid_type`` + - String + - Limit the candidate volume list based on one of the following raid + types: ``raid4, raid_dp``. + * - ``netapp_disk_type`` + - String + - Limit the candidate volume list based on one of the following disk + types: ``ATA, BSAS, EATA, FCAL, FSAS, LUN, MSATA, SAS, SATA, SCSI, XATA, + XSAS, or SSD.`` + * - ``netapp:qos_policy_group`` [1]_ + - String + - Specify the name of a QoS policy group, which defines measurable Service + Level Objectives, that should be applied to the OpenStack Block Storage + volume at the time of volume creation. Ensure that the QoS policy group + object within Data ONTAP should be defined before an OpenStack Block + Storage volume is created, and that the QoS policy group is not + associated with the destination FlexVol volume. + * - ``netapp_mirrored`` + - Boolean + - Limit the candidate volume list to only the ones that are mirrored on + the storage controller. + * - ``netapp_unmirrored`` [2]_ + - Boolean + - Limit the candidate volume list to only the ones that are not mirrored + on the storage controller. + * - ``netapp_dedup`` + - Boolean + - Limit the candidate volume list to only the ones that have deduplication + enabled on the storage controller. + * - ``netapp_nodedup`` + - Boolean + - Limit the candidate volume list to only the ones that have deduplication + disabled on the storage controller. + * - ``netapp_compression`` + - Boolean + - Limit the candidate volume list to only the ones that have compression + enabled on the storage controller. + * - ``netapp_nocompression`` + - Boolean + - Limit the candidate volume list to only the ones that have compression + disabled on the storage controller. + * - ``netapp_thin_provisioned`` + - Boolean + - Limit the candidate volume list to only the ones that support thin + provisioning on the storage controller. + * - ``netapp_thick_provisioned`` + - Boolean + - Limit the candidate volume list to only the ones that support thick + provisioning on the storage controller. + +.. [1] + Please note that this extra spec has a colon (``:``) in its name + because it is used by the driver to assign the QoS policy group to + the OpenStack Block Storage volume after it has been provisioned. + +.. [2] + In the Juno release, these negative-assertion extra specs are + formally deprecated by the NetApp unified driver. Instead of using + the deprecated negative-assertion extra specs (for example, + ``netapp_unmirrored``) with a value of ``true``, use the + corresponding positive-assertion extra spec (for example, + ``netapp_mirrored``) with a value of ``false``. 
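+
+These extra specs are attached to a Block Storage volume type rather than set
+in ``cinder.conf``. As a minimal sketch using the ``openstack`` client (the
+type name ``netapp-mirrored``, the volume name, and the size are illustrative
+only, not values required by the driver), a type that limits scheduling to
+mirrored NetApp back ends might be set up as follows:
+
+.. code-block:: console
+
+   $ openstack volume type create netapp-mirrored
+   $ openstack volume type set --property netapp_mirrored=true netapp-mirrored
+   $ openstack volume create --size 10 --type netapp-mirrored demo-volume
+
+Per the note above, when the opposite behavior is wanted, prefer the
+positive-assertion form with a value of ``false`` (for example,
+``netapp_mirrored=false``) over the deprecated negative-assertion extra specs.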
diff --git a/doc/source/index.rst b/doc/source/index.rst index 58ac485fbaa..cab54487433 100644 --- a/doc/source/index.rst +++ b/doc/source/index.rst @@ -88,12 +88,13 @@ API Extensions Go to http://api.openstack.org for information about Cinder API extensions. -Sample Configuration File -========================= +Configuration Reference +======================= .. toctree:: :maxdepth: 1 + config-reference/block-storage sample_config Indices and tables