It turned out the previous fix ([1]) was incomplete. Additionally, it
seems we have to limit the Tacker server to one instance, co-located
with the conductor.
[1] https://review.opendev.org/684275
commit b96ade3cf01009d822f85744efee523127f2674c
Change-Id: I9ce27d5f68f32ef59e245960e23336ae5c5db905
Closes-bug: #1853715
Related-bug: #1845142
The [placement].os_interface option was replaced by
[placement].valid_interfaces in Queens and was removed in Rocky.
Change-Id: I306c57305b9088159dd18af4aa373bbc39a8b881
Closes-Bug: #1853621
As part of the effort to implement Ansible code linting in CI
(using ansible-lint) - we need to implement recommendations from
ansible-lint output [1].
One of them is to stop using local_action in favor of delegate_to -
to increase readability and match the style of typical Ansible tasks.
[1]: https://review.opendev.org/694779/
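For illustration, a local_action task can typically be rewritten with
delegate_to like this (a generic sketch, not one of the project's actual
tasks; names and content are made up):

  # Before: local_action hides where the task runs
  - name: Write an example file on the deploy host
    local_action:
      module: copy
      content: "{{ example_content }}"
      dest: /tmp/example

  # After: delegate_to reads like any other task
  - name: Write an example file on the deploy host
    copy:
      content: "{{ example_content }}"
      dest: /tmp/example
    delegate_to: localhost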
Partially implements: blueprint ansible-lint
Change-Id: I46c259ddad5a6aaf9c7301e6c44cd8a1d5c457d3
The "os_region" param is missing in the ironic_neutron_agent.ini.j2
file. Without specifying the region, the neutron service will randomly
pick a region for the ironic-neutron-agent. Therefore, a list of
incorrect agents might be created in the neutron database "agents"
table for nodes from other regions. To list all neutron agents, use
'openstack network agent list'.
Change-Id: Idec265230d0ab63b7559d94690c059608dc2617e
Closes-bug: #1853464
Qinling could not be deployed due to use of an undefined variable
(you guessed it, it was a typo).
Change-Id: Iadbf269e66decc0a4c6b24b3d828ac560adeb7a7
Closes-bug: #1853201
1. Adjust the order of src and dest for the template module.
2. Remove the double quotes from the task's name for consistency with
   the others.
3. Add a space after "|".
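A hypothetical before/after sketch of the style being applied (the task,
file and group names are illustrative, not taken from the repository):

  # Before
  - name: "Copying over example.conf"
    template:
      dest: "{{ node_config_directory }}/example/example.conf"
      src: "example.conf.j2"
    when: inventory_hostname in groups['example']|default([])

  # After
  - name: Copying over example.conf
    template:
      src: "example.conf.j2"
      dest: "{{ node_config_directory }}/example/example.conf"
    when: inventory_hostname in groups['example'] | default([])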
Change-Id: I580517d5b95dcaa34841def70ec6f57a5cbe0229
Steps to reproduce:
* Deploy services using kolla-ansible deploy
* Reconfigure the image for one or more services to use an invalid
  config
* Deploy/reconfigure services using kolla-ansible reconfigure
The invalid config could be a wrong docker registry, wrong image name,
wrong tag, etc.
Expected results:
The restart handler for the service fails, and the old container is
left running.
Actual results:
The restart handler for the service fails, and the old container is
stopped and removed. This leaves the service in a broken state.
This change fixes the issue by pulling the image if necessary prior to
stopping and removing the container.
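Roughly, the idea is equivalent to the following sketch (shown with the
generic docker_image module for clarity; the real handlers use
kolla-ansible's kolla_docker module, and the variable name is an
assumption):

  - name: Ensure the new image is pullable before touching the container
    docker_image:
      name: "{{ service_image_full }}"  # assumed variable; a bad registry/name/tag fails here
      source: pull

If the pull fails, the handler fails before the old container has been
stopped or removed, so the service keeps running on the old image.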
Change-Id: I85b2a1b224d4c4d85c32c4922a2cd2c41171a1dc
Closes-Bug: #1852572
This variable was removed in the Train cycle, and a precheck added for
its use. This precheck can now be removed.
Change-Id: I6d9f0b577631ff9443deecf8ef9d94ca217674c5
Support for deploying neutron-lbaas was removed in the Train release. We
no longer need the task to remove the container in the upgrade process.
Change-Id: Ie336f68c710616de29f34dd4011e137ec056973b
During the Stein release the default storage backend for cloudkitty was
switched to influxdb. To aid this transition we added creation of the
influxdb database during upgrade. Now that this transition is complete
we can remove it.
Change-Id: Ieb247f36af932d3a357504c7419ead44b10d1301
Now that the stable/train branch has been cut, we can set the previous
release to Train. This is done in kolla-ansible for rolling upgrades,
and in CI configuration for upgrade tests.
Change-Id: I9d903543936e59aeeee939b32afce3e63b8c4394
Allow users to create/override HAProxy service configuration by copying
over '*.cfg' files from {{ node_custom_config }}/haproxy/services.d/
Ex: /etc/kolla/config/haproxy/services.d/radosgw.cfg
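A rough sketch of how such overrides could be picked up (the actual task
in the haproxy role may differ):

  - name: Copying over custom haproxy services configuration
    template:
      src: "{{ item }}"
      dest: "{{ node_config_directory }}/haproxy/services.d/{{ item | basename }}"
    with_fileglob:
      - "{{ node_custom_config }}/haproxy/services.d/*.cfg"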
Change-Id: Id84e3b6e62e544582d6917047534e846e026798d
Signed-off-by: Keith Plant <kplantjr@gmail.com>
If you do the following:
* Install legacy Docker (1.12.0) using kolla-ansible bootstrap-servers
with the Rocky release or earlier.
* Update to Docker CE, using kolla-ansible bootstrap-servers with the
Stein release or later.
The package is upgraded, but docker is stopped. This prevents the 'Wait
for Docker to start' task from completing, since Docker will not start.
Seen on CentOS 7.6, Docker CE 19.03.4.
This was tested and working previously; perhaps something changed with
the Docker package.
This change fixes the issue by starting and enabling Docker after the
upgrade.
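The fix amounts to something along these lines (a sketch; the task
actually added may differ in naming and placement):

  - name: Ensure docker is running and enabled after the upgrade
    service:
      name: docker
      state: started
      enabled: yes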
Change-Id: If6e9c91f3e8d0ec366eea7ca506c6d10dbf11c3a
Closes-Bug: #1852066
After performing a recovery of MariaDB, the mariadb containers are left
without a restart policy. This leaves them unable to recover from the
crash of a single galera node. A second issue is that the 'master'
'master' node is left in a bootstrap configuration, with the
--wsrep-new-cluster argument configured as BOOTSTRAP_ARGS.
This change fixes these issues by removing the restart policy of 'no'
from the 'slave' containers, and recreating the master container without
the restart policy or bootstrap arguments.
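For illustration only, the desired end state for the master is roughly
the following (shown with the generic docker_container module; the real
tasks use kolla_docker, and the image variable name is an assumption):

  - name: Recreate the mariadb container without bootstrap arguments
    docker_container:
      name: mariadb
      image: "{{ mariadb_image_full }}"  # assumed variable name
      restart_policy: unless-stopped
      recreate: yes
      state: started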
Change-Id: I36c875611931163ca2c29ae93b71d3af64cb197c
Closes-Bug: #1851594
In source images, keystone-manage is installed to a virtualenv in
/var/lib/kolla/venv. This is not in the PATH for cron jobs, which always
use PATH=/usr/bin:/bin. This results in the following error:
/usr/bin/fernet-rotate.sh: line 3: keystone-manage: command not found
However, this error is not typically visible, since cron logs to syslog
and we do not configure fluentd to collect these logs.
This change configures the PATH in the fernet-rotate.sh script for
source images.
Change-Id: Ib49ea586d36ae32d01b9610a48b13798db4a4cd5
Closes-Bug: #1850711