This change adds basic deployment based on the Podman
container manager as an alternative to Docker.
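For example, the engine can be selected in ``globals.yml`` (a minimal
sketch; this assumes the selection is driven by the
``kolla_container_engine`` variable, which defaults to Docker)::

    kolla_container_engine: "podman"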
Signed-off-by: Ivan Halomi <i.halomi@partner.samsung.com>
Signed-off-by: Martin Hiner <m.hiner@partner.samsung.com>
Signed-off-by: Petr Tuma <p.tuma@partner.samsung.com>
Change-Id: I2b52964906ba8b19b8b1098717b9423ab954fa3d
Depends-On: Ie4b4c1cf8fe6e7ce41eaa703b423dedcb41e3afc
Use case: exposing a single external HTTPS frontend and
load balancing services using FQDNs.
Support different ports for internal and external endpoints.
Introduced the ``kolla_url`` filter (usage sketched below) to normalize URLs like:
- https://magnum.external:443/v1
- http://magnum.external:80/v1
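A hypothetical usage sketch (the filter name comes from this change; the
argument order and the surrounding variable names are assumptions)::

    # illustrative only
    magnum_public_endpoint: "{{ magnum_external_fqdn | kolla_url(public_protocol, magnum_api_public_port, '/v1') }}"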
Change-Id: I9fb03fe1cebce5c7198d523e015280c69f139cd0
Co-Authored-By: Jakub Darmach <jakub@stackhpc.com>
This patch adds an option to copy different Ceph
configuration files and corresponding keyrings for the cinder,
glance, manila, gnocchi and nova services.
This is especially useful when the deployment uses availability
zones, as in the examples below (an illustrative layout follows
the list).
- An individual compute node can read/write to an individual Ceph
cluster in the same AZ.
- Cinder can write to several ceph clusters in several AZs.
- Glance can use multistore and upload images to
several ceph clusters in several AZs at once.
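An illustrative node custom config layout under these assumptions
(directory and file names are hypothetical, shown only to convey the
idea of one Ceph cluster per AZ)::

    /etc/kolla/config/cinder/cinder-volume/ceph-az1.conf
    /etc/kolla/config/cinder/cinder-volume/ceph-az1.client.cinder.keyring
    /etc/kolla/config/cinder/cinder-volume/ceph-az2.conf
    /etc/kolla/config/cinder/cinder-volume/ceph-az2.client.cinder.keyring
    /etc/kolla/config/glance/ceph-az1.conf
    /etc/kolla/config/glance/ceph-az2.conf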
Change-Id: Ie4d8ab5a3df748137835cae1c943b9180cd10eb1
As of I3629b84d3255a8fe9d8a7cea8c6131d7c40899e8, nova
now requires the ``[service_user]`` section to be configured
to address CVE-2023-2088. This change adds
the ``[service_user]`` section to the nova.conf template in
the nova and nova-cell roles.
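The rendered section looks roughly like the sketch below; the option
names are the standard oslo/keystoneauth ``[service_user]`` options,
while the Jinja variable names shown are assumptions::

    [service_user]
    send_service_user_token = true
    auth_url = {{ keystone_internal_url }}
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = service
    username = nova
    password = {{ nova_keystone_password }}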
Related-Bug: #2004555
Signed-off-by: Sven Kieske <kieske@osism.tech>
Change-Id: I2189dafca070accfd8efcd4b8cc4221c6decdc9f
(cherry picked from commit a77ea13ef1991543df29b7eea14b1f91ef26f858)
(cherry picked from commit 03c12abbcc107bfec451f4558bc97d14facae01c)
(cherry picked from commit cb105dc293ff1cdb11ab63fa3e3bf39fd17e0ee0)
(cherry picked from commit efe6650d09441b02cf93738a94a59723d84c5b19)
This allows services to work with etcd when coordination is enabled
for TLS internal deployments. Without this fix, we fail to connect to
etcd with the coordination backend and the service itself crashes.
Change-Id: I0c1d6b87e663e48c15a846a2774b0a4531a3ca68
A combination of durable queues and classic queue mirroring can be used
to provide high availability of RabbitMQ. However, these options should
only be used together, otherwise the system will become unstable. Using
the flag ``om_enable_rabbitmq_high_availability`` will either enable
both options at once, or neither of them.
There are some queues that should not be mirrored:
* ``reply`` queues (these have a single consumer and TTL policy)
* ``fanout`` queues (these have a TTL policy)
* ``amq`` queues (these are auto-delete queues, with a single consumer)
An exclusionary pattern is used in the classic mirroring policy. This
pattern is ``^(?!(amq\\.)|(.*_fanout_)|(reply_)).*``
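To opt in, the flag is set in ``globals.yml``::

    om_enable_rabbitmq_high_availability: true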
Change-Id: I51c8023b260eb40b2eaa91bd276b46890c215c25
When running in check mode, some prechecks previously failed because
they use the ``command`` module, which is silently not run in check mode.
Other prechecks were not running correctly in check mode because, for
example, they looked for a string in empty command output or did not
query which containers are running.
This change fixes these issues.
Closes-Bug: #2002657
Change-Id: I5219cb42c48d5444943a2d48106dc338aa08fa7c
The ``[oslo_messaging_rabbit] heartbeat_in_pthread`` config option
is set to ``true`` for wsgi applications to allow the RabbitMQ
heartbeats to function. For non-wsgi applications it is set to ``false``
as it may otherwise break the service [1].
[1] https://docs.openstack.org/releasenotes/oslo.messaging/zed.html#upgrade-notes
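For a wsgi application the rendered configuration therefore contains::

    [oslo_messaging_rabbit]
    heartbeat_in_pthread = true

Non-wsgi services get ``heartbeat_in_pthread = false`` instead.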
Change-Id: Id89bd6158aff42d59040674308a8672c358ccb3c
We regularly experience issues in Kolla Ansible deployments because
wrong options in OpenStack configuration files go unnoticed, as
OpenStack services ignore unknown options. We also need to keep on top
of deprecated options that may be removed in the future. Integrating
oslo-config-validator into Kolla Ansible will greatly help.
Adds a shared role to run oslo-config-validator on each service. Takes
into account that services have multiple containers, and these may also
use multiple config files. Service roles are extended to use this shared
role. Executed with the new command ``kolla-ansible validate-config``.
Change-Id: Ic10b410fc115646d96d2ce39d9618e7c46cb3fbc
From OpenStack Zed, the Pure Storage Cinder driver supports
NVMe-RoCE as a dataplane protocol. This patch adds support
for this new driver type.
Also amends a couple of documentation formatting typos.
Change-Id: Ic1eed7d19e9b583e22419625c92ac3507ea4614d
Second part of the patchset:
https://review.opendev.org/c/openstack/kolla-ansible/+/799229/
in which it was suggested to split the patch into smaller ones.
This change adds ``container_engine`` to the module parameters,
so that when we introduce Podman, ``kolla_toolbox`` can be used
with both engines.
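A sketch of a task passing the new parameter (the ``module_name`` and
``module_args`` parameters are the module's existing interface; the
surrounding task and variable usage are illustrative)::

    - name: Create example database
      kolla_toolbox:
        container_engine: "{{ kolla_container_engine }}"
        module_name: mysql_db
        module_args:
          name: example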
Signed-off-by: Ivan Halomi <i.halomi@partner.samsung.com>
Co-authored-by: Martin Hiner <m.hiner@partner.samsung.com>
Change-Id: Ic2093aa9341a0cb36df8f340cf290d62437504ad
Second part of the patchset:
https://review.opendev.org/c/openstack/kolla-ansible/+/799229/
in which it was suggested to split the patch into smaller ones.
This change adds the ``container_engine`` variable to the
``kolla_container_facts`` module, preparing it to be used with both
Docker and Podman without further changes in roles.
Signed-off-by: Ivan Halomi <i.halomi@partner.samsung.com>
Co-authored-by: Martin Hiner <m.hiner@partner.samsung.com>
Change-Id: I9e8fa30646844ab4a288555f3aafdda345b3a118
This change resets ``image_upload_use_cinder_backend`` to its upstream default.
When uploading a volume to a Glance image, cinder looks at the backend's
``image_upload_use_cinder_backend`` config knob to decide whether to try to
link the Glance image to a cloned volume made by cinder, i.e. to do all the
work locally and only update Glance's locations for the image (when the knob
is set to True). However, after [1], [2] and [3], which have been in place
since Victoria, this option requires further configuration from the user: a
volume type with the ``image_service:store_id`` property (aka extra spec) set
to the desired Glance store (even if there is only one cinder store configured).
Please read the bug report for why removing the option is the
best choice (TL;DR it is the most compatible approach).
[1] https://review.opendev.org/c/openstack/kolla-ansible/+/708114
[2] https://review.opendev.org/c/openstack/glance_store/+/746556
[3] https://review.opendev.org/c/openstack/cinder/+/661676
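Operators who still want the optimized path can opt back in via a cinder
config override, for example (the backend section name is illustrative)::

    [rbd-1]
    image_upload_use_cinder_backend = true

together with a volume type whose ``image_service:store_id`` extra spec
points at the desired Glance cinder store.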
Closes-Bug: #1991516
Change-Id: Ife87ee0241d907a0c407eb21811a354ed1734408
This allows you to use a more descriptive name if you wish.
For example, when using cinder with multiple Ceph backends, ``rbd-1``
doesn't convey much information. You could include the location, disk
technology, etc. in the name.
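A sketch in ``globals.yml``, assuming the name is exposed via a
``cinder_backend_ceph_name`` variable (the variable name is an assumption)::

    cinder_backend_ceph_name: "rbd-az1-ssd"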
Change-Id: Icfdc2e5726fec8b645d6c2c63391a13c31f2ce9a
This patch adds the loadbalancer-config role,
which is a "wrapper" around the haproxy-config role
and the proxysql-config role that will be added
in follow-up patches.
Change-Id: I64d41507317081e1860a94b9481a85c8d400797d
This reverts commit 73fc230fe3f1d159b5bb9d62a6e15f93cecb6e7c.
Reason for revert: CI jobs failing with "msg": "{{ s3_url }}: 's3_url' is undefined"
Change-Id: Iba7099988cea0c0d8254b9e202309cd9c82a984d
Added options to configure the S3 cinder backup driver, so that cinder
backup can store its backups in S3.
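A minimal ``globals.yml`` sketch (the variable names are assumptions
following the usual ``cinder_backup_*`` pattern)::

    cinder_backup_driver: "s3"
    cinder_backup_s3_url: "https://s3.example.com"
    cinder_backup_s3_bucket: "cinder-backups"
    cinder_backup_s3_access_key: "my-access-key"
    cinder_backup_s3_secret_key: "my-secret-key"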
Change-Id: Id6ff6206714581555baacecebfb6d8dd53bed8ac
Rendering ``{{ openstack_service_workers }}`` for the workers
of each OpenStack service is not enough. Several services have
to have more workers because more requests are sent to them.
This patch just adds a default value for the workers of
each service and sets ``{{ openstack_service_workers }}`` as that
default, so the value can be overridden in hostvars per server.
Nothing changes for a normal user.
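For example, assuming the per-service variable follows a
``<service>_workers`` pattern such as ``nova_api_workers`` (the exact
name is an assumption), a busy host could be tuned via hostvars::

    # host_vars/control01.yml (illustrative)
    nova_api_workers: 10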
Change-Id: Ifa5863f8ec865bbf8e39c9b2add42c92abe40616
Fixes an issue where access rules failed to validate:
Cannot validate request with restricted access rules. Set
service_type in [keystone_authtoken] to allow access rule validation
I've used the values from the endpoint. This was mostly a
straightforward copy and paste, except:
- versioned endpoints, e.g. cinderv3, where I stripped the version
- monasca has multiple endpoints associated with a single service. For
this, I concatenated logging and monitoring to be logging-monitoring.
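For example, the monasca case described above renders roughly as::

    [keystone_authtoken]
    service_type = logging-monitoring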
Closes-Bug: #1965111
Change-Id: Ic4b3ab60abad8c3dd96cd4923a67f2a8f9d195d7
This patch removes API-related configuration
from the services' config files, as we are using
Apache mod_wsgi and this configuration is not
used.
Change-Id: I69a1542a6f24214fbf6e703782aefb566de4fb26
Following up on [1].
The three variables only introduce noise now that we have removed
the reliance on Keystone's admin port.
[1] I5099b08953789b280c915a6b7a22bdd4e3404076
Change-Id: I3f9dab93042799eda9174257e604fd1844684c1c
"Smoke tests" for barbican, cinder, glance and keystone have been removed as discussed in PTG April 2022.
Signed-off-by: Tim Beermann <beermann@osism.tech>
Change-Id: I613287a31e0ea6aede070e7e9c519ab2f5f182bd
Add ``enable_cinder_backend_pure_iscsi`` and
``enable_cinder_backend_pure_fc`` options to etc/kolla/globals.yml
to enable use of the FlashArray backend.
Update the documentation to include a section on configuring
Cinder with the FlashArray.
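For example, in ``etc/kolla/globals.yml``::

    enable_cinder_backend_pure_iscsi: "yes"
    # or, for Fibre Channel:
    # enable_cinder_backend_pure_fc: "yes"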
Implements: blueprint pure-cinder-driver
Change-Id: I464733f1322237321ed1ffff8636cf30bd1cbb38
Consistently use template instead of copy. This has the added
advantage of allowing variables inside ceph conf files and keyrings.
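For example, a ``ceph.conf`` placed in the node custom config can now
contain Jinja expressions (a hypothetical sketch; the variable names
are operator-defined)::

    [global]
    fsid = {{ ceph_cluster_fsid }}
    mon host = {{ ceph_mon_hosts | join(',') }}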
Closes-Bug: 1959565
Signed-off-by: Imran Hussain <ih@imranh.co.uk>
Change-Id: Ibd0ff2641a54267ff06d3c89a26915a455dff1c1
An FCD, also known as an Improved Virtual Disk (IVD) or
Managed Virtual Disk, is a named virtual disk independent of
a virtual machine. Using FCDs for Cinder volumes eliminates
the need for shadow virtual machines.
This patch adds Kolla support.
Change-Id: Ic0b66269e6d32762e786c95cf6da78cb201d2765
Role vars have a higher precedence than role defaults. This allows
importing default vars from another role via vars_files without overriding
project_name (see the related bug for details).
Change-Id: I3d919736e53d6f3e1a70d1267cf42c8d2c0ad221
Related-Bug: #1951785
The admin interface for endpoints never had any real use; the
functionality was the same as for the public or internal endpoints,
except for Keystone. Even for Keystone with API v3 it is no longer
really needed, but it is still required by some libraries that
cannot be changed in order to stay backwards compatible.
Signed-off-by: Dr. Jens Harbott <harbott@osism.tech>
Change-Id: Icf3bf08deab2c445361f0a0124d87ad8b0e4e9d9