IBM Storwize drivers are currently reporting
allocated_capacity_gb. Drivers should not report
this and let Cinder core take care of this value.
Closes-Bug: #1746223
Change-Id: I1c1600edb7c1323e8ad7396b3e5dc4652b04f9f2
The base target driver class is missing the extend_target method which
is called from the LVM driver.
Since it was missing there, some drivers, such as the nvmeof base
class, also omitted it when they were implemented.
This patch adds a base implementation that does nothing, since this is
usually the right thing to do for most drivers.
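A minimal sketch of what such a no-op base implementation looks like (class and signature simplified; the real Cinder base class has many more members):

```python
class Target(object):
    """Simplified stand-in for Cinder's base target driver class."""

    def extend_target(self, volume):
        """React to a volume being extended.

        Doing nothing is the right behavior for most target drivers,
        so the base class provides an empty implementation instead of
        leaving the method undefined.
        """
        pass


class FakeNVMeOFTarget(Target):
    """A driver that does not override extend_target inherits the no-op."""
```

Drivers that do need to act on an extend (such as the LVM driver's targets) simply override the method.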
Change-Id: If009af0a385d9203bd74e42e822dca299a119ca7
The nvmet target driver only supports single portals, which was all that
was available back when it was originally implemented, but now that it
supports the new connection information format it can provide
multiple portals.
This patch adds support for providing multiple portals when attaching a
new volume, so that os-brick can try the different portals when connecting
a volume until it finds one that works, making the connection more robust.
Thanks to this feature, multipathing will also be enabled automatically
(without additional changes) once the NVMe-oF os-brick connector
supports it.
Since the new connection information format is necessary to pass
multiple portals it requires that the configuration option
``nvmeof_conn_info_version`` is set to ``2``.
The patch also deprecates the ``iscsi_secondary_ip_addresses``
configuration option in favor of the new
``target_secondary_ip_addresses``. This is something we already did a
while back for ``iscsi_ip_address`` which was renamed in the same way to
``target_ip_address``.
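For illustration, a backend section in ``cinder.conf`` that opts in to the new format and the renamed option might look like the following (section name and addresses are examples only):

```ini
[nvmet-backend]
target_helper = nvmet
# Required for multiple portals: use the new connection info format.
nvmeof_conn_info_version = 2
# Renamed from the deprecated iscsi_secondary_ip_addresses option.
target_secondary_ip_addresses = 192.168.1.2,192.168.1.3
```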
Change-Id: Iccfbe62406b6202446e974487e0f91465a5d0fa3
This patch adds Global-Active Device ("GAD") (*) volume support to the
Hitachi VSP driver.
New properties are added to the configuration:
Setting hbsd:topology to "active_active_mirror_volume" specifies a GAD
volume.
The hitachi_mirror_xxx parameters specify the secondary storage for a
GAD volume.
(*) GAD is a Hitachi storage product feature.
It uses volume replication to provide an HA environment for hosts
across systems and sites.
Implements: blueprint hitachi-gad-support
Change-Id: I4543cd036897b4db8b04011b808dd5af34439153
This patch adds a configuration option that specifies the host group
(or iSCSI target) name format.
Implements: blueprint hitachi-vsp-add-hostgroup-name-format-option
Change-Id: Icf3c8dc4ba2fd96cda01d778e3a49406fec3b9db
This patch adds a new field for the `get pools` command to show
what replication capability the backend array currently has.
This is based on the status of current array connections in the
backend array.
The response will be `async`, `sync`, or `trisync`.
`trisync` implies support for `sync` and `async`.
`sync` implies support for `async`.
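The implication rules above can be sketched as a small helper (the helper itself is hypothetical; the actual patch only reports the field):

```python
# Capability implication described above: trisync implies sync and
# async, and sync implies async.
REPLICATION_IMPLIES = {
    'trisync': ('trisync', 'sync', 'async'),
    'sync': ('sync', 'async'),
    'async': ('async',),
}


def replication_supports(reported, wanted):
    """Return True if a backend reporting `reported` can satisfy `wanted`."""
    return wanted in REPLICATION_IMPLIES.get(reported, ())
```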
Change-Id: I46cbb986ed73335270d6dd4ad197197648b55506
This patch updates the retype operation to support retyping to a
different pool using storage-assisted migration.
Storage-assisted migration is also used when retyping a volume that has
no snapshots to a different pool.
Implements: blueprint hitachi-vsp-update-retype
Change-Id: I1f992f7986652098656662bf129b1dd8427ac694
[Spectrum Virtualize family] As part of the FlashCopy 2.0 implementation,
added the configuration parameter 'storwize_volume_group' to support the
volume group feature.
Implements: blueprint ibm-svf-volumegroup
Change-Id: If9ee94815bb257fb24bfcfca2bee9e64dd499636
The IBM branding name 'Spectrum Virtualize family' has been changed
to 'Storage Virtualize family'. Made the branding name change in all
places in the documentation.
Also corrected the hyperlink for 'IBM Documentation' by removing
"en" from the link. Instead of making English the default, the link
will redirect based on the browser's regional language settings.
Change-Id: Icd668ee471e6b2e33b20f6abcfb7510a4e5e79bb
This release of tooz contains
54448e9d Replace md5 with oslo version
which is needed for FIPS support.
Change-Id: I506968a245afa2a17be343d1e923d21f0b298cd7
This is a followup to 78f8a7bbe6eb12cd2b249eec456a00642181d8af
to correct typo of 'externals' to 'external'.
Change-Id: I88acb611479cbcca541d62e7e16e0b82e159ecd6
Our current migration unit tests are NOT running the migrations against
the DB they should.
Migrations are being run against whatever was last used for the normal
cinder UTs, which is SQLite.
This creates some issues such as:
- Migrations can not be run independently, because no "normal cinder UT"
has been run before, so there is no DB that the migration can use.
This is easy to test:
$ . .tox/py310/bin/activate
$ stestr run -n cinder.tests.unit.db.test_migrations
That will fail with oslo_db.exception.DBNonExistentDatabase because no
previous test has created the sqlite DB.
If we were to run any other UT before them they would work.
- Migrations that run conditional code based on the DB engine will fail,
even though the condition should make them safe.
For example, a migration that uses `index_exists` in a way that only
works on MySQL will fail even when it is guarded to run only on MySQL,
because it will actually be run against SQLite:
is_mysql = engine.dialect.name == 'mysql'
idx_name = f'{table}_deleted_project_id_idx'
if is_mysql and utils.index_exists(engine, table, idx_name):
This patch fixes this by making sure the get_engine method in Cinder
returns the same engine that the migration code is using.
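A toy illustration of the second issue (FakeEngine is purely illustrative, not SQLAlchemy): a dialect-conditional migration step only behaves correctly when the engine the condition inspects is the engine the migration actually runs against.

```python
class FakeDialect:
    def __init__(self, name):
        self.name = name


class FakeEngine:
    """Stand-in for a SQLAlchemy engine; only exposes dialect.name."""

    def __init__(self, dialect_name):
        self.dialect = FakeDialect(dialect_name)


def upgrade(engine):
    """Run the MySQL-only step only when the engine really is MySQL.

    If the test harness silently hands the migration a different
    engine than the one the condition was written for, the guard
    no longer protects anything.
    """
    steps = []
    if engine.dialect.name == 'mysql':
        steps.append('drop_deleted_project_id_idx')
    return steps
```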
Change-Id: I15f6e4bd180e9a5af82c76d61658a3cb1eac22c8
LVM target drivers usually only support unique subsystems/targets, so a
specific subsystem/target is created for each volume.
While this is good from a deployment point of view, it is insufficient
from a testing perspective, since it limits the code paths that can be
tested in os-brick.
Being able to test these 2 different paths in os-brick is very
important, because the shared case usually presents very particular
issues: leftover devices caused by race conditions between nova and
cinder, premature subsystem/target disconnection, failure to disconnect
the subsystem/target, etc.
Thanks to this patch we'll be able to increase the testing possibilities
of the NVMe-oF os-brick connector to cover combinations of:
- Different connection properties formats: old & new
- Different target sharing: shared & non shared
Change-Id: I396db66f72fbf1f31f279d4431c64c9004a1a665
The LVM driver assumes that all connecting hosts will have the iSCSI
initiator installed and configured. If they don't, then there won't be
an "initiator" key in the connector properties dictionary and the call
to terminate connection will always fail with a KeyError exception on
the 'initiator' key.
This is the case if we don't have iSCSI configured on the computes
because we are only using NVMe-oF volumes with the nvmet target.
This patch starts using the dictionary ``get`` method so there is no
failure even when the keys don't exist, and it also differentiates by
target type so each targets the identifier it cares about:
``initiator`` for iSCSI and ``nqn`` for NVMe-oF.
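The safer lookup can be sketched like this (the function name is hypothetical; the real fix lives inside the target driver classes):

```python
def connector_identifier(connector, protocol):
    """Fetch the identifier a target type cares about without raising.

    dict.get() returns None for missing keys instead of raising
    KeyError, so a host without an iSCSI initiator configured no
    longer breaks terminate_connection.
    """
    key = 'nqn' if protocol == 'nvmeof' else 'initiator'
    return connector.get(key)
```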
Closes-Bug: #1966513
Related-Bug: #1786327
Change-Id: Ie967a42188bd020178cb7af527e3dd3ab8975a3d
The os-brick nvmeof connector supports 2 different connection
properties formats, the original one and a later one that added support
to multiple portals as well as replicated devices for a software RAID.
Targets that inherit from the nvmeof base target (spdk and nvmet)
supported the original connection properties format, but it was decided
that new features, such as multipathing, would only be added to os-brick
in the new format code path.
This patch adds support to the nvmeof target class (and specifically to
the nvmet target, though it should work for others as well) for the new
connection properties format to enable support for future features.
Support for the old connection properties has been maintained, and is
still the default, since we need an easy way to exert the old code path
in os-brick to ensure that the code still works.
The ``nvmeof_conn_info_version`` configuration option is used to select
which version of the connection properties the nvmet target should use,
with version 1 being the old format and version 2 the new one. It
defaults to the old format to preserve existing behavior.
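For example, opting in to the new format for an nvmet backend would look like this in ``cinder.conf`` (the section name is an example):

```ini
[nvmet-backend]
target_helper = nvmet
# 1 (default) = original format; 2 = new format that enables
# multi-portal support and future features.
nvmeof_conn_info_version = 2
```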
Change-Id: If3f7f66a5cd23604cc81a6973304db9f9018fdb3
In change Icae9802713867fa148bc041c86beb010086dacc9 we changed from
using the nvmet CLI interface to using it as a Python library.
In that change we incorrectly wrote the ``setup`` methods' signatures:
they are all missing the ``err_func`` parameter. It does not fail
because that is on the non-privileged side of things, and the
privileged side forcefully adds the parameter on the call to the
actual library.
This patch adds the missing parameter and handles it on the
non-privileged side.
Change-Id: I615497616d87dfc1683977feafcfbfb9fab8e248