When OpenStack is deployed with Kolla-Ansible, by default there
are no durable queues or exchanges created by the OpenStack
services in RabbitMQ. In RabbitMQ terminology, a queue that is not
durable is referred to as `transient`, meaning it is generally held
only in memory.
Whether OpenStack services create durable or transient queues is
traditionally controlled by the oslo.messaging config option
`amqp_durable_queues`. In Kolla-Ansible, this remains set to
the default of `False` in all services. The only `durable`
objects are the `amq*` exchanges, which are internal to RabbitMQ.
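For illustration, enabling durable queues for a service would amount
to an oslo.messaging override along these lines (a sketch only, not
something this change introduces):

  [oslo_messaging_rabbit]
  amqp_durable_queues = true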
More recently, oslo.messaging has introduced support for
Quorum queues [7]. These are a successor to durable classic
queues; however, it is not yet clear whether they are a good fit
for OpenStack in general [8].
For clustered RabbitMQ deployments, Kolla-Ansible configures all
queues as `replicated` [1]. Replication occurs over all nodes
in the cluster. RabbitMQ refers to this as 'mirroring of classic
queues'.
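For context, classic queue mirroring is driven by a RabbitMQ policy;
the effect of this configuration is roughly equivalent to the
following (illustrative only):

  rabbitmqctl set_policy ha-all ".*" '{"ha-mode":"all"}'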
In summary, this means that a multi-node Kolla-Ansible deployment
will end up with a large number of transient, mirrored queues
and exchanges. However, the RabbitMQ documentation warns against
this, stating that 'For replicated queues, the only reasonable
option is to use durable queues' [2]. This is discussed
further in the following bug report: [3].
Whilst we could try enabling the `amqp_durable_queues` option
for each service (as suggested in [4]), there are a number of
complexities with this approach, including but not limited to:
1) RabbitMQ is planning to remove classic queue mirroring in
favor of 'Quorum queues' in a forthcoming release [5].
2) Durable queues will be written to disk, which may cause
performance problems at scale. Note that this includes
Quorum queues which are always durable.
3) Potential for race conditions and other complexity
discussed recently on the mailing list under:
`[ops] [kolla] RabbitMQ High Availability`
The remaining option, proposed here, is to use classic
non-mirrored queues everywhere, and rely on services to recover
if the node hosting a queue or exchange they are using fails.
There is some discussion of this approach in [6]. The downside
of potential message loss needs to be weighed against the real
upsides of increasing the performance of RabbitMQ, and moving
to a configuration which is officially supported and hopefully
more stable. In the future, we can then consider promoting
specific queues to quorum queues, in cases where message loss
can result in failure states which are hard to recover from.
[1] https://www.rabbitmq.com/ha.html
[2] https://www.rabbitmq.com/queues.html
[3] https://github.com/rabbitmq/rabbitmq-server/issues/2045
[4] https://wiki.openstack.org/wiki/Large_Scale_Configuration_Rabbit
[5] https://blog.rabbitmq.com/posts/2021/08/4.0-deprecation-announcements/
[6] https://fuel-ccp.readthedocs.io/en/latest/design/ref_arch_1000_nodes.html#replication
[7] https://bugs.launchpad.net/oslo.messaging/+bug/1942933
[8] https://www.rabbitmq.com/quorum-queues.html#use-cases
Partial-Bug: #1954925
Change-Id: I91d0e23b22319cf3fdb7603f5401d24e3b76a56e
Without this configuration, all mount points are reporting the same
utilisation metrics [1]. With the rslave option, all root mounts from
the host are visible in the container, so we can remove the bind mounts
for /proc and /sys.
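For reference, the upstream node_exporter documentation [1] suggests
bind mounting the host root filesystem read-only with rslave
propagation, along these lines (abridged):

  docker run -d --net="host" --pid="host" \
    -v "/:/host:ro,rslave" \
    quay.io/prometheus/node-exporter:latest \
    --path.rootfs=/host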
[1] https://github.com/prometheus/node_exporter#docker
Change-Id: I4087dc81f9d1fa5daa24b9df6daf1f9e1ccd702f
Closes-Bug: #1961438
An FCD (First Class Disk), also known as an Improved Virtual Disk
(IVD) or Managed Virtual Disk, is a named virtual disk independent of
a virtual machine. Using FCDs for Cinder volumes eliminates
the need for shadow virtual machines.
This patch adds Kolla support.
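As an illustration only (exact option names and driver path may
differ), a Cinder backend using FCDs would be configured along these
lines in cinder.conf:

  [vmware_fcd]
  volume_driver = cinder.volume.drivers.vmware.fcd.VMwareVStorageObjectDriver
  vmware_host_ip = <vcenter host>
  vmware_host_username = <vcenter user>
  vmware_host_password = <vcenter password>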
Change-Id: Ic0b66269e6d32762e786c95cf6da78cb201d2765
NSXP is the OpenStack support for the NSX Policy platform.
It has been supported by Neutron since the Stein release. This patch
adds Kolla support.
This adds a new neutron_plugin_agent type 'vmware_nsxp'. The plugin
does not run any neutron agents.
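Operators select it via globals.yml, for example:

  neutron_plugin_agent: "vmware_nsxp"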
Change-Id: I9e9d8f07e586bdc143d293e572031368af7f3fca
Allow operators to set the HAProxy socket to admin level.
This is done via the flag `haproxy_socket_level_admin`, which
is set to "no" by default.
Closes-Bug: 1960215
Signed-off-by: Imran Hussain <ih@imranh.co.uk>
Change-Id: Ia0da89288d68f5803ace1934c013053f12343195
apparmor_parser does not actually remove the profile file or create
the symlink in '/etc/apparmor.d/disable' itself, so the next run of
the baremetal role fails with the error "Unable to remove "libvirtd".
Moreover, after a reboot the profile is still active. We need to
disable the profile completely ourselves. This change fixes the
idempotency of the baremetal role.
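A sketch of the manual steps this corresponds to, assuming Ubuntu's
usr.sbin.libvirtd profile (exact paths may differ per distribution):

  ln -s /etc/apparmor.d/usr.sbin.libvirtd /etc/apparmor.d/disable/usr.sbin.libvirtd
  apparmor_parser -R /etc/apparmor.d/usr.sbin.libvirtd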
Closes-Bug: #1960302
Change-Id: I162e417387393e806886b1c9ea8053b89778b4d1
Signed-off-by: Maksim Malchuk <maksim.malchuk@gmail.com>
The default configuration was changed to use the advanced cache pool in
keystonemiddleware 9.3.0 (Xena release) [1].
This reverts commit 5a52d8e4a0c5d4c246deb8851ef893df63ee0847 (except the
release note).
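For reference, the keystonemiddleware option whose default changed is
memcache_use_advanced_pool; setting it explicitly would look roughly
like:

  [keystone_authtoken]
  memcache_use_advanced_pool = true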
[1] https://review.opendev.org/c/openstack/keystonemiddleware/+/773939
Change-Id: I290d0a81c57c189b6eb62fc3eee3ed19f441671b
parted hangs waiting for user input (see examples below)
on Debian and Ubuntu nodes which have created a cinder
volume on lvm, causing POST_FAILURE of the entire CI job.
Zun (Cinder iSCSI LVM) jobs are affected.
parted seemingly tries to interpret contents of the created
volume and fails miserably.
Since there is no reason why we would need to see the output
of parted specifically, this patch is switching to use
lsblk to simply list visible block devices.
Along with the rest of the commands, this should be just
the right level of detail.
And we avoid having parted interpret internals of otherwise
opaque block devices.
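For instance, a plain listing such as the following (the exact
invocation in the job may differ):

  lsblk -o NAME,SIZE,TYPE,MOUNTPOINT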
Example issues:
Warning: Not all of the space available to
/dev/mapper/cinder--volumes-cinder--volumes--pool appears to be used, you can
fix the GPT to use all of the space (an extra 9732096 blocks) or continue with
the current setting?
Fix/Ignore?
Warning: Not all of the space available to
/dev/mapper/cinder--volumes-cinder--volumes--pool-tpool appears to be used, you
can fix the GPT to use all of the space (an extra 9732096 blocks) or continue
with the current setting?
Fix/Ignore?
Warning: Not all of the space available to
/dev/mapper/cinder--volumes-cinder--volumes--pool_tdata appears to be used, you
can fix the GPT to use all of the space (an extra 9732096 blocks) or continue
with the current setting?
Fix/Ignore?
Change-Id: I7beecf2dd6c49c8934722cf22efa74e920ecb060
Enable libvirt TLS in CI jobs with TLS enabled.
Uses the new functionality of the certificates command to generate
certificates for both libvirt client and server (added in
I1bde9fa018f66037aec82dc74c61ad1f477a7c12).
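In the affected jobs this is roughly equivalent to setting, in
globals.yml:

  libvirt_tls: "yes"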
Change-Id: Ica304685b043f699799ccee6c9c2fbcf968888db
Adds support to the 'kolla-ansible certificates' command for generating
certificates for libvirt TLS, when libvirt_tls is true. The same
certificate and key are used for the libvirt client and server.
The certificates use the same root CA as the other generated
certificates, and are written to
{{ node_custom_config }}/nova/nova-libvirt/, ready to be picked up by
nova-libvirt and nova-compute.
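Usage sketch: with libvirt_tls set to true, running

  kolla-ansible certificates

additionally generates the libvirt client/server certificate and key
under {{ node_custom_config }}/nova/nova-libvirt/.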
Change-Id: I1bde9fa018f66037aec82dc74c61ad1f477a7c12
Enables Zun to access Cinder volumes when Cinder is configured to use
external Ceph.
Copies the Ceph config file and the Ceph Cinder keyring to /etc/ceph
in the zun_compute container.
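Inside the container this results in files along the lines of (the
keyring filename shown is illustrative and depends on the configured
Ceph user, assumed here to be 'cinder'):

  /etc/ceph/ceph.conf
  /etc/ceph/ceph.client.cinder.keyring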
Closes-Bug: 1848934
Change-Id: Ie56868d5e9ed37a9274b8cbe65895f3634b895c8
This fixes a bug in registering identity providers.
The bug was caused by a missing `=` in the openstack command.
Add the missing `=` after `--os-user-domain-name`.
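Schematically, the corrected option takes the form:

  --os-user-domain-name=<user domain name>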
Closes-Bug: #1959022
Change-Id: I73f80cd2c81a3944de0933e60f5768956a1a3b70