The dib_env_vars variable in Bifrost's dib.yml file can contain the
DIB_BLOCK_DEVICE_CONFIG environment variable, which is always
multiline YAML data. By default, the formatting of this data is not
preserved while the configuration is merged and saved for the
bifrost-deploy container.
This is because Ansible uses the PyYAML library, which wraps emitted
strings at 80 characters by default. The official Ansible
documentation [1] recommends using the to_yaml or to_nice_yaml
filters with the width parameter.
This change adds the same ability to the merge_yaml Ansible plugin.
1. https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#formatting-data-yaml-and-json
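As a rough illustration (the block-device layout below is made up and
not taken from any real dib.yml), rendering the variable with an
explicit width preserves the multiline value:

  dib_env_vars:
    DIB_BLOCK_DEVICE_CONFIG: |
      - local_loop:
          name: image0
      - partitioning:
          base: image0
          label: mbr
          partitions:
            - name: root
              flags: [boot, primary]
              size: 100%

  # e.g. "{{ dib_env_vars | to_nice_yaml(width=131072) }}" avoids the
  # 80-character wrapping applied by the default PyYAML settings.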
A related change for diskimage-builder, which solves the issue with
the incorrect data provided by Kolla-Ansible, is also provided:
I3b74ede69eb064ad813a9108ec68a228e549e8bb
Closes-Bug: #2014980
Related-Bug: #2014981
Change-Id: Id79445c0311916ac6c1beb3986e14f652ee5a63c
Signed-off-by: Maksim Malchuk <maksim.malchuk@gmail.com>
Puts the RabbitMQ node into maintenance mode before restarting the
container. This will make the node shutdown less disruptive. For details
on what maintenance mode does, see:
https://www.rabbitmq.com/upgrade.html#maintenance-mode
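A minimal sketch of what this looks like, assuming the rabbitmq-upgrade
tool shipped with RabbitMQ 3.8+ and a container named rabbitmq (this is
not necessarily the exact task used by the role):

  - name: Put RabbitMQ node into maintenance mode before restart
    ansible.builtin.command: docker exec rabbitmq rabbitmq-upgrade drain
    become: true

  # ... restart the container, then bring the node back:
  - name: Take RabbitMQ node out of maintenance mode
    ansible.builtin.command: docker exec rabbitmq rabbitmq-upgrade revive
    become: true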
Change-Id: Ia61573f3fb95fe8fcde6b789ca77ef5b45fe0a65
Since CVE-2022-29404 was fixed [1,2], the default value for the
LimitRequestBody directive in the Apache HTTP Server has been changed
from 0 (unlimited) to 1 GiB. This limits the size of images (for
example) uploaded in Horizon. This change adds the ability to
configure the limit.
1. https://access.redhat.com/articles/6975397
2. https://ubuntu.com/security/CVE-2022-29404
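As a hedged sketch (the variable name below is illustrative and may not
match the one introduced by this change), the limit could be overridden
in globals.yml, for example to restore the old unlimited behaviour:

  # 0 means unlimited; any other value is a size in bytes
  horizon_httpd_limitrequestbody: 0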
Closes-Bug: #2012588
Change-Id: I4cd9dd088cbcf38ff6f8d188ebcc56be7d9ea1c9
Signed-off-by: Maksim Malchuk <maksim.malchuk@gmail.com>
Following ideas here:
https://wiki.openstack.org/wiki/Large_Scale_Configuration_Rabbit
Make sure old messages with no consumer are dropped after the message
TTL of 10 minutes, which is longer than the 1 minute RPC timeout.
Also ensure queues expire after an hour of inactivity, so queues from
removed or renamed nodes don't grow over time.
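For illustration, the resulting RabbitMQ policy arguments look roughly
like this (the policy name and queue pattern are omitted, as they
depend on the deployment):

  definition:
    message-ttl: 600000   # 10 minutes, in milliseconds
    expires: 3600000      # delete queues after 1 hour of inactivity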
Change-Id: Ifb28ac68b6328adb604a7474d01e5f7a47b2e788
Adds two new flags to alter behaviour in RabbitMQ:
* `rabbitmq_message_ttl_ms`, which lets you set a TTL on messages.
* `rabbitmq_queue_expiry_ms`, which lets you set an expiry time on queues.
See https://www.rabbitmq.com/ttl.html for more information on both.
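For example, to match the values described above, the following could
be set in globals.yml (values are illustrative):

  rabbitmq_message_ttl_ms: 600000    # drop unconsumed messages after 10 minutes
  rabbitmq_queue_expiry_ms: 3600000  # expire idle queues after 1 hour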
Change-Id: I51ca37ffbb1bb5c07f2d39873f0f33ca20263f2a
Changes the default value of `rabbitmq_ha_promote_on_shutdown` to
`"always"`.
We are seeing issues with RabbitMQ failing to recover automatically
when nodes are restarted: https://www.rabbitmq.com/ha.html#cluster-shutdown
Rather than waiting for operator intervention, it is better to allow
recovery to happen, even if that means we may lose some messages.
A few failed and timed-out operations are better than a totally broken
cloud. This is achieved using ha-promote-on-shutdown=always.
Note that when a node failure is detected, this is already the default
behaviour from 3.7.5 onwards:
https://www.rabbitmq.com/ha.html#promoting-unsynchronised-mirrors
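Operators who prefer to avoid any chance of losing messages can revert
to the previous behaviour via the `rabbitmq_ha_promote_on_shutdown`
flag described elsewhere in this log, for example in globals.yml:

  rabbitmq_ha_promote_on_shutdown: "when-synced"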
Related-Bug: #1954925
Change-Id: I484a81163f703fa27112df22473d657e2a9ab964
This allows services to work with etcd when coordination is enabled
for TLS internal deployments. Without this fix, we fail to connect to
etcd with the coordination backend and the service itself crashes.
Change-Id: I0c1d6b87e663e48c15a846a2774b0a4531a3ca68
For Masakari and HACluster to work properly, the hostnames used
in HACluster need to match the hostnames used in Nova.
Change-Id: Iac917ef4471905caab591cd64eab379e150a8524
Previously, when running one of the following commands:
kolla-ansible deploy --check
kolla-ansible genconfig --check
deployment or configuration generation failed for various reasons:
MariaDB failed to look up the existing cluster, Keystone failed to
generate the cron config, and Nova-cell failed to get the cell
settings.
Closes-Bug: #2002661
Change-Id: I5e765f498ae86d213d0a4379ca5d473db1499962
Currently we do not follow the RabbitMQ advice on replicas here:
https://www.rabbitmq.com/ha.html#replication-factor
Here we reduce the number of replicas to n // 2 + 1 as advised
above. The hope is that this helps speed up recovery from RabbitMQ
issues.
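For example, with 3 RabbitMQ nodes this gives 3 // 2 + 1 = 2 replicas
per queue. A hedged Jinja sketch of the formula, assuming the usual
`rabbitmq` inventory group name:

  rabbitmq_ha_replica_count: "{{ (groups['rabbitmq'] | length) // 2 + 1 }}"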
Related-Bug: #1954925
Change-Id: Ib6bcb26c499c9884faa4a0cd51abaec00cacb096
Adds the flag `rabbitmq_ha_replica_count` to change how many different
nodes a queue should be mirrored across. If the value is not set, then
it defaults to "ha-mode":"all". This value is unset by default to avoid
any unexpected changes to the RabbitMQ definitions.json file, as that
would trigger an unexpected restart of RabbitMQ during the next deploy.
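For instance, to mirror each queue across two nodes instead of all of
them, one could set the following in globals.yml:

  rabbitmq_ha_replica_count: 2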
Change-Id: Iee98cd937197a73a3b04aa8501fa325e8ecfff24
etcd-compatible tooz drivers do not support multiple endpoints via
backend_url. We can put a loadbalancer in front of etcd and configure
backend_url to use the VIP instead. The issue with hard-coding the first
host is that we break coordination if we take that host offline. In the
case of Cinder, we would not be able to perform any volume-related
operations.
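A hedged sketch of the resulting URL (the variable names follow common
kolla-ansible conventions and are shown for illustration only):

  # Illustrative: coordination backend_url pointing at the internal VIP
  # rather than the first etcd host (rendered into e.g. cinder.conf):
  coordination_backend_url: "etcd3+{{ internal_protocol }}://{{ kolla_internal_fqdn }}:{{ etcd_client_port }}"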
Co-Authored-By: Mark Goddard <mark@stackhpc.com>
Change-Id: Ib684501ba03c386dc5ac71e5cbea05c99f191665
By default, ha-promote-on-shutdown=when-synced. However, we are seeing
issues with RabbitMQ failing to recover automatically when nodes are
restarted: https://www.rabbitmq.com/ha.html#cluster-shutdown
Rather than waiting for operator intervention, it is better to allow
recovery to happen, even if that means we may lose some messages.
A few failed and timed-out operations are better than a totally broken
cloud. This is achieved using ha-promote-on-shutdown=always.
Note that when a node failure is detected, this is already the default
behaviour from 3.7.5 onwards:
https://www.rabbitmq.com/ha.html#promoting-unsynchronised-mirrors
This patch adds the option to change the ha-promote-on-shutdown
definition, using the flag `rabbitmq_ha_promote_on_shutdown`. This
value is unset by default to avoid any unexpected changes to the
RabbitMQ definitions.json file, as that would trigger an unexpected
restart of RabbitMQ during the next deploy.
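For example, to opt in to the behaviour described above, set the flag
in globals.yml:

  rabbitmq_ha_promote_on_shutdown: "always"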
Related-Bug: #1954925
Change-Id: I2146bda2c72ddac2c9923c6941b0596395fd9ab5