e1d93531b9
Currently, when Cinder failover is invoked because the primary storage backend is down, the driver cannot create a new volume with synchronous replication. Non-replicated and asynchronously replicated volumes can still be created in this scenario, although this is not recommended due to potential issues after failback. A synchronously replicated volume can be safely created during failover, as the Pure Storage architecture allows this. When the failed array becomes available again, any new sync replication volumes created during the outage are automatically recovered by the backend's own internal systems.

This patch updates the driver to check, during volume creation, whether the backend is in failover mode and, if so, allow sync volumes to be created correctly even though the primary array may be inaccessible. Sync volume attachment is also allowed to continue should one of the backend replica pair arrays be down. Creating the different replication volume types has been tested in both failover and failback scenarios in Pure's labs, and this patch has proved to work as expected.

Additionally included is work from abandoned change I7ed3ebd7fec389870edad0c1cc07ac553854dd8a, which resolves replication issues in A/A deployments. Also fixes a bug where a deleted replication pod could cause the driver to fail on restart.

Closes-Bug: #2035404
Change-Id: I58f0f10b63431896e7532b16b561683cd242e9ee
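For illustration, the shape of the failover check described above is roughly as follows. This is a minimal sketch under assumed names (BackendState, pick_target_array, primary_array, and secondary_array are hypothetical placeholders), not the driver's actual implementation.

    # Minimal illustrative sketch, not the actual Pure Storage driver code.
    # BackendState, pick_target_array, primary_array and secondary_array are
    # hypothetical names used only to show the shape of the failover check.
    from dataclasses import dataclass
    from typing import Optional


    @dataclass
    class BackendState:
        failed_over: bool        # True once cinder failover has been invoked
        primary_array: str       # normal target for new volumes
        secondary_array: str     # surviving member of the sync replica pair


    def pick_target_array(state: BackendState,
                          replication_type: Optional[str]) -> str:
        """Choose the array a new volume should be created on.

        Non-replicated and async-replicated volumes could already be created
        while failed over (though not recommended).  The change described
        above additionally allows a synchronously replicated volume to be
        created on the surviving array; the backend re-syncs it automatically
        once the failed array returns.
        """
        if state.failed_over:
            # The primary array may be unreachable during failover, so the
            # volume is created against the surviving replica-pair member.
            return state.secondary_array
        return state.primary_array


    # Example: a sync volume requested while the backend is failed over.
    state = BackendState(failed_over=True,
                         primary_array="array-1",
                         secondary_array="array-2")
    assert pick_target_array(state, "sync") == "array-2"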
---
features:
  - |
    Pure Storage driver: Allow synchronously replicated volumes
    to be created during a replication failover event. These will
    remain viable volumes when the replication is failed back to
    its original state.
fixes:
  - |
    [Pure Storage] `Bug #2035404 <https://bugs.launchpad.net/cinder/+bug/2035404>`_:
    Fixed issue with missing replication pod causing driver to fail on restart.