Multi-attach has been supported by the InfiniBox for a long time now.
As it is now supported by Cinder, this commit enables this capability
for the driver for attachments done either through Fibre Channel or
through iSCSI.
Change-Id: Ic84eb3d88cc2130192434b3b49e0e53c2717c6b0
The RBD driver and almost all backup drivers rely heavily on eventlet's
tpool, whose default of 20 threads can be too low.
Currently the only way to change this is the EVENTLET_THREADPOOL_SIZE
environment variable, which isn't very clean for OpenStack services.
This patch adds the possibility of setting specific values for each
backend and for the backup service, and increases the default for the
backup service from 20 threads to 60.
The backup service can be configured under [DEFAULT] section with option
backup_native_threads_pool_size, and the backends under their specific
sections with backend_native_threads_pool_size.
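As an illustration of the new options (section name and values below are
only examples, not recommendations):

```ini
[DEFAULT]
# Native thread pool size for the backup service (new default: 60).
backup_native_threads_pool_size = 60

[rbd-backend]
# Per-backend native thread pool size; overrides the eventlet default.
backend_native_threads_pool_size = 100
```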
Change-Id: I5d7c1e8d7f1c6592ded1f74eea42d76ab523df92
Closes-Bug: #1754354
Backups can be cancelled by force-deleting the in-progress backup, but
there is no mechanism to cancel a backup that is being restored.
We will now monitor the status of the backup and, on any status change,
abort the restore and set the volume status to error.
The reason to diverge from the backup creation cancellation
implementation (deleting the new resource) is that the restore may be
done into an already existing volume, and deleting that volume is not
feasible for the user, for example when an instance is booted from a
Cinder volume and there's a license linked to the System ID.
Cinder backup drivers may now raise BackupRestoreCancel in the
`restore` method when a restore operation is cancelled.
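A rough sketch of the cancellation contract described above; only the
BackupRestoreCancel exception name comes from this change, all other
names are hypothetical:

```python
# Minimal sketch, not the actual Cinder driver code: a chunk-restore
# loop that re-reads the backup status and bails out with a stand-in
# BackupRestoreCancel when the operation is cancelled.

class BackupRestoreCancel(Exception):
    """Stand-in for cinder.exception.BackupRestoreCancel."""


def restore_chunks(chunks, get_backup_status, write_chunk):
    """Restore chunks, aborting if the backup leaves 'restoring' state."""
    restored = 0
    for chunk in chunks:
        # Any status change away from 'restoring' means cancellation.
        if get_backup_status() != 'restoring':
            raise BackupRestoreCancel()
        write_chunk(chunk)
        restored += 1
    return restored
```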
Change-Id: If2f143d7ba56ae2d74b3bb571237cc053f63054e
When a Cinder volume is created, the ONTAP backend fits the LUN to the
best geometry. This makes an image uploaded from the volume larger than
expected, so a volume of larger size must be created from that image.
With the use-exact-size parameter the backend does not fit the LUN to
the best geometry: an image uploaded from the volume has the same size,
and a volume of the same size may be created from that image.
Note, however, that this parameter is only available in Data ONTAP 9.1
(ONTAPI version 1.110) and later.
Closes-Bug: #1731474
Change-Id: I0e21cbcb6effa1e72999580564099976511ca4a9
Added dell_api_async_rest_timeout and dell_api_sync_rest_timeout to
allow setting the async and sync timeouts for the Dell EMC SC REST API.
Users should generally not set these; they should only be changed when
instructed to by support.
Fixed a couple of comments.
Updated documentation.
Change-Id: Id8fd27d83e2f97070f67523c9c2d8c59f66e6caa
One of our release notes was accidentally put into
a releasenotes/notes directory under releasenotes/notes.
This moves it to the right location and removes the
extra directories.
Change-Id: Idd7bd9ae499a477c08a06ed38e2bbe922ec0c955
Collecting stats for provisioned_capacity_gb takes a long time, since
we have to query each individual image for its provisioned size. If the
pool is used exclusively by Cinder, and/or we are willing to accept a
potential deviation in Cinder's stats, we can skip retrieving this
information and instead calculate it from the DB information for the
volumes.
This patch adds configuration option `rbd_exclusive_cinder_pool` that
allows us to disable the size collection and thus improve the stats
reporting speed.
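A minimal cinder.conf sketch enabling the new option (the backend
section name is illustrative):

```ini
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
# Only set this if the pool is used exclusively by this Cinder backend:
# provisioned_capacity_gb is then derived from Cinder's DB instead of
# being queried image by image.
rbd_exclusive_cinder_pool = true
```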
Change-Id: I32c7746fa9149bce6cdec96ee9aa87b303de4271
Closes-Bug: #1704106
This allows new values in QoS specs:
- read_bytes_sec_per_gb
- write_bytes_sec_per_gb
- total_bytes_sec_per_gb
The bytes value specified in the QoS spec is multiplied
by the size of the volume in initialize_connection,
and the result is passed along like a standard <x>_bytes_sec
QoS value.
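A minimal sketch of the scaling step, assuming a dict of QoS specs and
the volume size in GB (the function and key handling are illustrative,
not Cinder's actual code):

```python
# Per-GB QoS keys introduced by this change; other specs pass through.
PER_GB_KEYS = ('read_bytes_sec_per_gb',
               'write_bytes_sec_per_gb',
               'total_bytes_sec_per_gb')


def scale_qos_specs(specs, volume_size_gb):
    """Return specs with *_per_gb keys scaled to absolute *_bytes_sec."""
    scaled = {}
    for key, value in specs.items():
        if key in PER_GB_KEYS:
            # Multiply by the volume size and drop the '_per_gb' suffix.
            scaled[key[:-len('_per_gb')]] = int(value) * volume_size_gb
        else:
            scaled[key] = value
    return scaled
```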
Change-Id: Iafc22d37c4c50515d9f6cf1144ea25847c90f75d
When we migrate a volume between availability zones, or retype it with
migration, we set the AZ of the migrated volume to the same AZ as the
source volume, so we end up with a volume whose AZ does not match the
backend's AZ.
This patch fixes this issue for the generic migration code as well as
the optimized driver migration.
Change-Id: Ia1bf3ae0b7cd6d209354024848aef6029179d8e4
Closes-Bug: #1747949
This was added a long long long long time ago, but it was never
fully implemented and is not used anywhere. It might be worth
resurrecting and having, but it should probably be a first class
API rather than an extension, and if we do want this, it would be good
to create a spec and have a real plan for its implementation and usage.
Bottom line, it's not used anywhere and the implementation is not
complete. We could probably remove it safely, but let's deprecate
it and fast track removal and possible replacement next cycle.
Change-Id: I1a1920d141c8c32a8fb30bc6f73e955a1a1c5150
This patch makes a few small changes that are required in order to
have the Cinder Backup service working on Windows.
- all physical disks must be opened in byte mode; 'rb+' must be used
when writing.
- reading past the disk size boundary does not return an empty
string; it raises an IOError instead. For this reason, we avoid
doing so.
- we ensure that the chunk size is a multiple of the sector size.
- the chmod command is not available on Windows. Although changing
owners is possible, it is not needed. For this reason, the
'temporary_chown' helper will be a noop on Windows. It's easier to
do it here rather than do platform checks wherever this gets called.
- when connecting the volumes, we pass the 'expect_raw_disk' argument,
which provides a hint to the connector about what we expect. This
allows the SMBFS connector to return a mounted raw disk path instead
of a virtual image path.
- when the driver provides temporary snapshots to be used during
the backup process, the API is bypassed. For this reason, we need to
ensure that the snapshot state and progress gets updated accordingly.
Otherwise, this breaks the nova assisted snapshot workflow.
We're doing platform checks, ensuring that we don't break/change
the current workflow.
The Swift and Posix backup drivers are known to be working on Windows.
Implements: blueprint windows-smb-backup
Depends-On: #I20f791482fb0912772fa62d2949fa5becaec5675
Change-Id: I8769a135974240fdf7cebd4b6d74aaa439ba1f27
When aborting a backup on any chunked driver we leave chunks in the
backend without Cinder being aware of them and with no way to delete
them through Cinder. In this case the only way to delete them is to go
to the storage itself and delete them manually.
Another issue that will happen if we are using a temporary resource for
the backup, be it a volume or a snapshot, is that it will not be cleaned
up and will be left for us to manually issue the delete through the
Cinder API.
The first issue is caused by the chunked driver's assumption that the
`refresh` method of an OVO will ignore the context's `read_deleted`
setting and always read the record, which is not true. Since the
refresh doesn't work once the record is deleted, there will be
leftovers if the status of the backup transitions to deleted during the
processing of a chunk.
The second issue is caused by the same thing, but in this case is when
the backup manager refreshes the backup OVO to know the temporary
resource it needs to clean up.
This patch fixes the incorrect behavior of the backup abort mechanism
to prevent leaving things behind.
Closes-Bug: #1746559
Change-Id: Idcfdbf815f404982d26618710a291054f19be736
This change will allow snapshots created by the SMBFS driver to be
attached in read-only mode. This will be used in order to backup
in-use volumes.
Note that writing to a snapshot would corrupt the differencing image
chain.
We'll need to locate snapshot backing files. The issue is that
in some cases, in-place locks may prevent us from querying the top
image from the chain. For this reason, we're going to store the
backing file information in the DB using the snapshot object
metadata field, at the same time preserving backwards compatibility.
Fake volume/snapshot object usage throughout the SMBFS driver unit
tests had to be updated due to the object changes performed by this
patch.
Partial-Implements: blueprint windows-smb-backup
Change-Id: Ideaacbf9d160f400bef53825103b671127252789
As described in the launchpad bug [1], backup operations must take care
to ensure encryption key ID resources aren't lost, and that restored
volumes always have a unique encryption key ID.
[1] https://bugs.launchpad.net/cinder/+bug/1745180
This patch adds an 'encryption_key_id' column to the backups table. Now,
when a backup is created and the source volume's encryption key is
cloned, the cloned key ID is stored in the table. This makes it possible
to delete the cloned key ID when the backup is deleted. The code that
clones the volume's encryption key has been relocated from the common
backup driver layer to the backup manager. The backup manager now has
full responsibility for managing encryption key IDs.
When restoring a backup of an encrypted volume, the backup manager now
does this:
1) If the restored volume's encryption key ID has changed, delete the
key ID it had prior to the restore operation. This ensures no key IDs
are leaked.
2) If the 'encryption_key_id' field in the backup table is empty, glean
the backup's cloned key ID from the backup's "volume base metadata."
This helps populate the 'encryption_key_id' column for backup table
entries created prior to when the column existed.
3) Re-clone the backup's key ID to ensure the restored volume's key ID
is always unique.
Closes-Bug: #1745180
Change-Id: I6cadcbf839d146b2fd57d7019f73dce303f9e10b
This is part of the effort to improve Cinder's Thin provisioning
support. As some operators have been facing problems determining the
best value for max_over_subscription_ratio, this patch adds a mechanism
to automatically calculate it.
The formula used for calculation is:
  if provisioned_capacity_gb == 0:
      max_over_subscription_ratio = 20
  else:
      max_over_subscription_ratio = 1 + (provisioned_capacity_gb /
          (total_capacity_gb - free_capacity_gb + 1))
Using this formula, the scheduler will allow the creation of a much
bigger number of volumes at the beginning of the pool's life, and start
to restrict creation as the free space approaches 0 or the reserved
limit.
Drivers can now set max_over_subscription_ratio = 'auto' and benefit
from the change. Drivers that use max_over_subscription_ratio
internally for any kind of calculation are incompatible with this new
feature and should be fixed in order to use it.
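The auto calculation above can be sketched as runnable Python
(illustrative, not the exact scheduler code; 20 is Cinder's historical
default ratio):

```python
def auto_max_over_subscription_ratio(total_capacity_gb,
                                     free_capacity_gb,
                                     provisioned_capacity_gb):
    """Compute the ratio per the formula in this change.

    With nothing provisioned yet, fall back to the default of 20;
    otherwise the ratio shrinks toward 1 as free space is consumed.
    """
    if provisioned_capacity_gb == 0:
        return 20.0
    return 1 + (provisioned_capacity_gb /
                (total_capacity_gb - free_capacity_gb + 1))
```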
Implements: bp provisioning-improvements
Change-Id: If30bb6276f58532c0f78ac268544a8804008770e
The policy rules pointed out in this release note were using variable
names from the code, not the actual policy rule names that are used in
code and would be overridden in the policy file.
Change-Id: I00ea8702327f5ad5083f97182098346093dd00ee
When creating an encrypted RBD volume, initialize
LUKS on the volume using the volume's encryption key.
This is required because os-brick only handles this
step for volumes that attach via block devices.
This requires qemu-img 2.10.
Co-Authored-By: Lee Yarwood <lyarwood@redhat.com>
Related-Bug: #1463525
Implements: blueprint libvirt-qemu-native-luks
Change-Id: Id02130e9af8bdf90a712968916017d05c3213c32
This was the first Cinder Volume driver available on Windows,
for which reason it was simply called 'WindowsDriver'.
As we've added another driver available on Windows, the SMB driver,
this has caused quite some confusion.
For this reason, we're now renaming it to 'WindowsISCSIDriver'.
The new location will be:
cinder.volume.drivers.windows.iscsi.WindowsISCSIDriver
Change-Id: I3877491463dce3d46f7ac0e194ffdf46a0e7c84c
This patch adds the policy for creating and retyping volumes
with volume types that indicate multi attach capabilities.
There are two policies being added:
1. MULTIATTACH_POLICY
General policy to disallow creating volumes of type multiattach
as well as retyping volumes to/from a multiattach type.
2. MULTIATTACH_BOOTABLE_VOLUME_POLICY
Specific policy to disallow creating multiple attachments for
any volume that is `bootable`.
We use policy to control the use of this particular type, and we also
limit the ability to retype a volume's multiattach settings to volumes
that are in available status. We need to do this because multiattach
has implications for things on the Compute side (i.e. libvirt) that
would require a detach/reattach to keep things synced correctly.
Currently no backend reports the `multiattach=True` capability; that
will come in the next patch of this series.
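As a hedged illustration, an operator override of the two new rules in
the policy file might look like this (rule names assumed to match the
MULTIATTACH_POLICY and MULTIATTACH_BOOTABLE_VOLUME_POLICY constants;
check the generated sample policy for the exact names):

```json
{
    "volume:multiattach": "rule:admin_or_owner",
    "volume:multiattach_bootable_volume": "rule:admin_api"
}
```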
Change-Id: I3fd8afe9cbae3c733a6530dce7be6fef8d53cfa6
blueprint: multi-attach-v3-attach