deployments
This allows services to work with etcd when coordination is enabled
for TLS internal deployments. Without this fix, we fail to connect to
etcd with the coordination backend and the service itself crashes.
Change-Id: I0c1d6b87e663e48c15a846a2774b0a4531a3ca68
Previously, when running one of the following commands:
kolla-ansible deploy --check
kolla-ansible genconfig --check
deployment or configuration generation would fail for various reasons:
MariaDB failed to look up the existing cluster.
Keystone failed to generate cron config.
Nova-cell failed to get the cell settings.
Closes-Bug: #2002661
Change-Id: I5e765f498ae86d213d0a4379ca5d473db1499962
Currently we do not follow the RabbitMQ advice on replicas here:
https://www.rabbitmq.com/ha.html#replication-factor
Here we reduce the number of replicas to n // 2 + 1, as advised
above. The hope is that this helps speed up recovery from RabbitMQ
issues.
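For example, a three-node cluster gets 3 // 2 + 1 = 2 replicas, and a
five-node cluster gets 5 // 2 + 1 = 3, i.e. a majority of the nodes in
each case.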
Related-Bug: #1954925
Change-Id: Ib6bcb26c499c9884faa4a0cd51abaec00cacb096
Adds the flag `rabbitmq_ha_replica_count` to change how many different
nodes a queue should be mirrored across. If the value is not set, then
it defaults to "ha-mode":"all". This value is unset by default to avoid
any unexpected changes to the RabbitMQ definitions.json file, as that
would trigger an unexpected restart of RabbitMQ during the next deploy.
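For example, to mirror each queue across three nodes, set the
following in /etc/kolla/globals.yml (the value 3 is only illustrative):

rabbitmq_ha_replica_count: 3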
Change-Id: Iee98cd937197a73a3b04aa8501fa325e8ecfff24
etcd-compatible tooz drivers do not support multiple endpoints via
backend_url. We can put a loadbalancer in front of etcd and configure
backend_url to use the VIP instead. The issue with hard-coding the
first host is that we break coordination if we take this host offline. In the
case of cinder, we would not be able to perform any volume related
operations.
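As a sketch, with a VIP in front of etcd the rendered [coordination]
section in e.g. cinder.conf would look like this (address and port are
placeholders; the exact scheme depends on the tooz driver and on
whether TLS is enabled):

[coordination]
backend_url = etcd3+http://192.0.2.10:2379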
Co-Authored-By: Mark Goddard <mark@stackhpc.com>
Change-Id: Ib684501ba03c386dc5ac71e5cbea05c99f191665
By default, ha-promote-on-shutdown=when-synced. However, we are seeing
issues with RabbitMQ automatically recovering when nodes are restarted.
https://www.rabbitmq.com/ha.html#cluster-shutdown
Rather than waiting for operator intervention, it is better to allow
recovery to happen, even if that means we may lose some messages.
A few failed and timed-out operations are better than a totally broken
cloud. This is achieved using ha-promote-on-shutdown=always.
Note, when a node failure is detected, this is already the default
behaviour from 3.7.5 onwards:
https://www.rabbitmq.com/ha.html#promoting-unsynchronised-mirrors
This patch adds the option to change the ha-promote-on-shutdown
definition, using the flag `rabbitmq_ha_promote_on_shutdown`. This
value is unset by default to avoid any unexpected changes to the
RabbitMQ definitions.json file, as that would trigger an unexpected
restart of RabbitMQ during the next deploy.
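To opt in to the behaviour described above, set the flag in
/etc/kolla/globals.yml:

rabbitmq_ha_promote_on_shutdown: "always"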
Related-Bug: #1954925
Change-Id: I2146bda2c72ddac2c9923c6941b0596395fd9ab5
This patch fixes the kolla_docker module, which did not take the
common_options parameter into account. From the patchset it is
visible that the module's default values were always used, even if
the user overrode some parameter in the common_options dict.
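For reference, a typical call site passes overrides via
common_options, which the module should respect over its own defaults
(a simplified sketch; the task, container and image names are
illustrative):

- name: Start an example container
  kolla_docker:
    action: "start_container"
    common_options: "{{ docker_common_options }}"
    name: "example_container"
    image: "example_image:latest"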
Closes-Bug: #2003079
Change-Id: I677fde708dd004decaff4bd39f2173d8d81052fb
This patch adds connection: local for the above-mentioned task, as
kolla-ansible can be executed in a Docker container, as in my case.
Without connection: local, Ansible tries to connect to localhost via
SSH, where the specified Python script is not available. With
connection: local, everything works as expected, as the file is found
inside the container.
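A minimal sketch of the fix, assuming a task that runs a Python
script (task name and script path are illustrative):

- name: Run Python helper on the deploy host
  connection: local
  command: python3 /path/inside/container/helper.py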
Closes-Bug: #2004224
Change-Id: I219a958b4f101efb71a2935e6d910dae5c65f0be
neutron_tls_proxy and glance_tls_proxy use the haproxy container
image. Pin them to haproxy_tag directly.
Change-Id: I73142db48ebe6641520d21b560f16de892e07c34
This change serialises the neutron l3 agent restart process and adds a
user configurable delay between restarts. This can prevent connectivity
loss due to all agents being restarted at the same time.
A large number of routers increases the recovery time, making this
issue more prevalent.
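As an illustration, the delay could be tuned in
/etc/kolla/globals.yml (the variable name below is hypothetical;
check the neutron role defaults for the exact name):

# hypothetical variable name; see the neutron role defaults
neutron_l3_agent_restart_delay: 30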
Change-Id: I3be0ebfa12965e6ae32d1b5f13f8fd23c3f52b8c
This commit adds SystemdWorker class to kolla_docker ansible module.
It is used to manage container state via systemd calls.
Change-Id: I20e65a6771ebeee462a3aaaabaa5f0596bdd0581
Signed-off-by: Ivan Halomi <i.halomi@partner.samsung.com>
Signed-off-by: Martin Hiner <m.hiner@partner.samsung.com>
As RabbitMQ's configuration file is not an INI or YAML file, there is
no way to extend the configuration with new config options via
merge_configs or merge_yaml. This patch moves the config options to a
dictionary so that they can be overridden in /etc/kolla/globals.yml.
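For example, an operator could then extend the RabbitMQ configuration
from /etc/kolla/globals.yml along these lines (the dictionary name is
illustrative; see the rabbitmq role defaults for the exact variable):

rabbitmq_extra_config:
  cluster_partition_handling: "pause_minority"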
Change-Id: I5cd772f4fb80a0e200fb24d67be735ca81e3fdeb
Makes sure the facts required to generate octavia.conf are available
when using genconfig.
This change also ensures that the necessary tasks run when using Ansible
check mode.
Closes-Bug: #1987299
Change-Id: Ib8fbee2d3abdcfd2eae0f9b3e9b69eeb0e3086e0
A combination of durable queues and classic queue mirroring can be used
to provide high availability of RabbitMQ. However, these options should
only be used together, otherwise the system will become unstable. Using
the flag ``om_enable_rabbitmq_high_availability`` will either enable
both options at once, or neither of them.
There are some queues that should not be mirrored:
* ``reply`` queues (these have a single consumer and TTL policy)
* ``fanout`` queues (these have a TTL policy)
* ``amq`` queues (these are auto-delete queues, with a single consumer)
An exclusionary pattern is used in the classic mirroring policy. This
pattern is ``^(?!(amq\\.)|(.*_fanout_)|(reply_)).*``
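To opt in, set the flag in /etc/kolla/globals.yml:

om_enable_rabbitmq_high_availability: true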
Change-Id: I51c8023b260eb40b2eaa91bd276b46890c215c25
When running in check mode, some prechecks previously failed because
they use the command module which is silently not run in check mode.
Other prechecks were not running correctly in check mode due to e.g.
looking for a string in empty command output or not querying which
containers are running.
This change fixes these issues.
Closes-Bug: #2002657
Change-Id: I5219cb42c48d5444943a2d48106dc338aa08fa7c
Prevent the haproxy-config role from attempting to modify firewalld when
running kolla-ansible genconfig.
Closes-Bug: #2002522
Change-Id: Ie8a524cc944aa8cb9cf0999b1b8da79f30b40092
The ``[oslo_messaging_rabbit] heartbeat_in_pthread`` config option
is set to ``true`` for wsgi applications to allow the RabbitMQ
heartbeats to function. For non-wsgi applications it is set to ``false``
as it may otherwise break the service [1].
[1] https://docs.openstack.org/releasenotes/oslo.messaging/zed.html#upgrade-notes
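Concretely, for a wsgi application the rendered oslo.messaging
configuration ends up as:

[oslo_messaging_rabbit]
heartbeat_in_pthread = true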
Change-Id: Id89bd6158aff42d59040674308a8672c358ccb3c
Setting ovn-monitor-all to 'true' will configure ovn-controller to
monitor all southbound database records unconditionally. That will
free up some CPU resources on the OVN southbound database server,
but will increase the number of events coming to ovn-controller.
The default value is 'false'.
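For illustration, the equivalent manual setting on a node would be
the standard ovn-controller knob (the kolla-ansible variable driving
it is not shown here):

ovs-vsctl set Open_vSwitch . external_ids:ovn-monitor-all=true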
Change-Id: I291e166013d8c88f00e84ceaf308251c352c9a79
ovn-controller should be deployed first, according to the OVN upgrade
guide. Since we are getting newer OVN/OVS versions from RDO/Ubuntu
within a cycle, let's apply that ordering to deployment as well.
Closes-Bug: #1979329
Change-Id: I017aec611a057db1634cfc2634164b21cb210193
Regularly, we experience issues in Kolla Ansible deployments because we
use wrong options in OpenStack configuration files. This is because
OpenStack services ignore unknown options. We also need to keep on top
of deprecated options that may be removed in the future. Integrating
oslo-config-validator into Kolla Ansible will greatly help.
Adds a shared role to run oslo-config-validator on each service. Takes
into account that services have multiple containers, and these may also
use multiple config files. Service roles are extended to use this shared
role. Executed with the new command ``kolla-ansible validate-config``.
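For example, to validate the configuration of a single service, scope
the run with tags as usual:

kolla-ansible validate-config --tags nova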
Change-Id: Ic10b410fc115646d96d2ce39d9618e7c46cb3fbc
Add file to the reno documentation build to show release notes for
stable/zed.
Use pbr instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/zed.
Sem-Ver: feature
Change-Id: I8f24a2318b5bd5ff60a235c093db022344dec644