Including tasks has a performance penalty when compared with importing
tasks. If the include has a condition associated with it, then the
overhead of the include may be lower than the overhead of skipping all
imported tasks. In the case of the check-containers.yml include, the
included file only has a single task, so the overhead of skipping this
task will not be greater than the overhead of the task import. It
therefore makes sense to switch to using import_tasks there.
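For illustration, a minimal sketch of the switch (the surrounding role
context is omitted):

    # before: dynamic include, paid on every run
    - include_tasks: check-containers.yml

    # after: static import, cheaper for a single-task file
    - import_tasks: check-containers.yml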
Partially-Implements: blueprint performance-improvements
Change-Id: I65d911670649960708b9f6a4c110d1a7df1ad8f7
A non-root user has no permission to create directories under the /opt
directory. Use "become: true" to resolve this.
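A minimal sketch, assuming a directory is being created under /opt (the
path shown is hypothetical):

    - name: Ensure the source directory exists
      file:
        path: /opt/example  # hypothetical path
        state: directory
      become: true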
Change-Id: I155efc4b1e0691da0aaf6ef19ca709e9dc2d9168
Refactor service configuration to use the copy certificates task. This
reduces code duplication and simplifies implementing encrypting backend
HAProxy traffic for individual services.
Change-Id: I0474324b60a5f792ef5210ab336639edf7a8cd9e
This is a follow-up to I001defc75d1f1e6caa9b1e11246abc6ce17c775b. To
maintain previous behaviour, and ensure we catch any host configuration
changes, we should perform host configuration during upgrade.
Change-Id: I79fcbf1efb02b7187406d3c3fccea6f200bcea69
Related-Bug: #1860161
Currently there are a few services that perform host configuration
tasks. This is done in config.yml. This means that these changes are
performed during 'kolla-ansible genconfig', when we might expect not to
be making any changes to the remote system.
This change separates out these host configuration tasks into a
config-host.yml file, which is included directly from deploy.yml.
One change in behaviour is that this prevents these tasks from running
during an upgrade or genconfig. This is probably what we want, but we
should be careful when any of these host configuration tasks are
changed, to ensure they are applied during an upgrade if necessary.
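A rough sketch of the resulting layout (file names as described above,
the exact role context is omitted):

    # ansible/roles/*/tasks/deploy.yml
    - include_tasks: config-host.yml

    - include_tasks: config.yml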
Change-Id: I001defc75d1f1e6caa9b1e11246abc6ce17c775b
Closes-Bug: #1860161
When the certificate files under /etc/kolla/certificates/ are changed,
the certificates inside the running containers are not updated.
Running 'kolla-ansible deploy' when a certificate has changed should
therefore restart the affected containers.
Partially-Implements: blueprint custom-cacerts
Change-Id: Iaac6f37e85ffdc0352e8062ae5049cc9a6b3db26
Signed-off-by: yj.bai <bai.yongjun@99cloud.net>
Both include_role and import_role expect the role's name to be given
via the "name" param instead of "role".
This worked but caused errors with ansible-lint.
See: https://review.opendev.org/694779
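For illustration (the role name is arbitrary):

    # flagged by ansible-lint
    - import_role:
        role: example-role

    # preferred
    - import_role:
        name: example-role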
Change-Id: I388d4ae27111e430d38df1abcb6c6127d90a06e0
We assume that all groups are present in the inventory, and quite
obscure errors can result if any are not.
This change adds a precheck that checks for the presence of all expected
groups in the inventory for each service. It also introduces a common
service-precheck role that we can use for other common prechecks.
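A minimal sketch of such a precheck, assuming the role is handed the
list of groups to verify (variable and group names are illustrative):

    - name: Fail if a service group is missing from the inventory
      fail:
        msg: "Group '{{ item }}' not found in the inventory"
      when: item not in groups
      with_items: "{{ service_groups }}"  # hypothetical variable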
Change-Id: Ia0af1e7df4fff7f07cd6530e5b017db8fba530b3
Partially-Implements: blueprint improve-prechecks
When kolla_copy_ca_into_containers is set to "yes", the Certificate
Authority in /etc/kolla/certificates will be copied into service
containers to enable trust for that CA. This is especially useful when
the CA is self-signed, and would not be trusted by default.
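For example, in globals.yml:

    kolla_copy_ca_into_containers: "yes"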
Partially-Implements: blueprint custom-cacerts
Change-Id: I4368f8994147580460ebe7533850cf63a419d0b4
As part of the effort to implement Ansible code linting in CI
(using ansible-lint), we need to implement recommendations from
ansible-lint output [1].
One of them is to stop using local_action in favor of delegate_to -
to increase readability and match the style of typical ansible
tasks.
[1]: https://review.opendev.org/694779/
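A minimal sketch of the substitution (the task itself is illustrative):

    # before
    - name: Run a task on the deploy host
      local_action: command echo hello

    # after
    - name: Run a task on the deploy host
      command: echo hello
      delegate_to: localhost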
Partially implements: blueprint ansible-lint
Change-Id: I46c259ddad5a6aaf9c7301e6c44cd8a1d5c457d3
Support for deploying neutron-lbaas was removed in the Train release. We
no longer need the task to remove the container in the upgrade process.
Change-Id: Ie336f68c710616de29f34dd4011e137ec056973b
Sometimes, as cloud admins, we want to only update the code that is
running in a cloud, without doing anything else. Make an action in
kolla-ansible that allows us to do that.
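For example, assuming the action is exposed as a 'deploy-containers'
subcommand (the inventory path is illustrative):

    kolla-ansible -i ./multinode deploy-containers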
Change-Id: I904f595c69f7276e71692696471e32fd1f88e6e8
Implements: blueprint deploy-containers-action
Use upstream Ansible modules for registration of services, endpoints,
users, projects, roles, and role grants.
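As a rough illustration of the kind of task this enables (names and
values are placeholders):

    - name: Register the example service in Keystone
      os_keystone_service:
        name: example
        service_type: example
        description: Example service
        state: present
      run_once: true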
Change-Id: I7c9138d422cc91c177fd8992347176bb54156b5a
This commit adds the functionality for an operator to specify
their own trusted CA certificate file for interacting with the
Keystone API.
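For example, in globals.yml, assuming the variable introduced is named
openstack_cacert (the path shown is illustrative):

    openstack_cacert: "/etc/pki/tls/certs/ca-bundle.crt"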
Implements: blueprint support-trusted-ca-certificate-file
Change-Id: I84f9897cc8e107658701fb309ec318c0f805883b
According to Docker upstream release notes [1] MountFlags should be
empty.
1. https://docs.docker.com/engine/release-notes/#18091
"Important notes about this release
In Docker versions prior to 18.09, containerd was managed by the Docker
engine daemon. In Docker Engine 18.09, containerd is managed by systemd.
Since containerd is managed by systemd, any custom configuration to the
docker.service systemd configuration which changes mount settings (for
example, MountFlags=slave) breaks interactions between the Docker Engine
daemon and containerd, and you will not be able to start containers.
Run the following command to get the current value of the MountFlags
property for the docker.service:
sudo systemctl show --property=MountFlags docker.service
MountFlags=
Update your configuration if this command prints a non-empty value for
MountFlags, and restart the docker service."
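One way to keep MountFlags empty is a systemd drop-in; a hedged sketch
as an Ansible task (the drop-in path is illustrative):

    - name: Ensure docker.service leaves MountFlags empty
      copy:
        dest: /etc/systemd/system/docker.service.d/kolla.conf
        content: |
          [Service]
          MountFlags=
      become: true
      # a 'systemctl daemon-reload' and a docker restart are needed afterwards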
Closes-bug: #1833835
Change-Id: I4f4cbb09df752d00073a606463c62f0a6ca6c067
Docker has no restart policy named 'never'. It has 'no'.
This has bitten us already (see [1]) and might bite us again whenever
we want to change the restart policy to 'no'.
This patch makes our docker integration honor all valid restart policies
and only valid restart policies.
All relevant docker restart policy usages are patched as well.
I added some FIXMEs in the relevant places of the kolla-ansible Docker
integration. They are not fixed here so as not to alter behaviour.
[1] https://review.opendev.org/667363
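For reference, the valid Docker restart policies are 'no',
'on-failure', 'always' and 'unless-stopped'. A sketch of the usual
globals.yml knob, assuming the standard docker_restart_policy variable:

    docker_restart_policy: "unless-stopped"  # or "no", "on-failure", "always"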
Change-Id: I1c9764fb9bbda08a71186091aced67433ad4e3d6
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
A common class of problems goes like this:
* kolla-ansible deploy
* Hit a problem, often in ansible/roles/*/tasks/bootstrap.yml
* Re-run kolla-ansible deploy
* Service fails to start
This happens because the DB is created during the first run, but for some
reason we fail before performing the DB sync. This means that on the second run
we don't include ansible/roles/*/tasks/bootstrap_service.yml because the DB
already exists, and therefore still don't perform the DB sync. However this
time, the command may complete without apparent error.
We should be less selective about when we perform the DB sync, and do
it whenever it may be necessary. There is an argument for not doing the
sync during a 'reconfigure' command, although we will not change that
here.
With this change, the DB sync is always performed, but only during the
'deploy' and 'reconfigure' commands.
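A rough sketch of the intended ordering (file names follow the pattern
mentioned above; any conditions are omitted):

    # ansible/roles/*/tasks/deploy.yml
    - include_tasks: bootstrap.yml          # creates the DB only if missing

    - include_tasks: bootstrap_service.yml  # runs the DB sync every time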
Change-Id: I82d30f3fcf325a3fdff3c59f19a1f88055b566cc
Closes-Bug: #1823766
Closes-Bug: #1797814
Currently, we have a lot of logic for checking if a handler should run,
depending on whether config files have changed and whether the
container configuration has changed. As rm_work pointed out during
the recent haproxy refactor, these conditionals are typically
unnecessary - we can rely on Ansible's handler notification system
to only trigger handlers when they need to run. This removes a lot
of error-prone code.
This patch removes conditional handler logic for all services. It is
important to ensure that we no longer notify handlers unnecessarily,
because without these checks in place a notification will always
trigger a restart of the containers.
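A minimal sketch of the pattern now relied upon (names are
illustrative):

    - name: Copying over example.conf
      template:
        src: example.conf.j2
        dest: "{{ node_config_directory }}/example/example.conf"
      notify:
        - Restart example container

    # handlers/main.yml
    - name: Restart example container
      kolla_docker:
        action: recreate_or_restart_container
        name: example
        image: "{{ example_image_full }}"  # illustrative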
Implements: blueprint simplify-handlers
Change-Id: I4f1aa03e9a9faaf8aecd556dfeafdb834042e4cd
The project has been retired and there will be no Train release [1].
This patch removes Neutron LBaaS support in Kolla.
[1] https://review.opendev.org/#/c/658494/
Change-Id: Ic0d3da02b9556a34d8c27ca21a1ebb3af1f5d34c
Many tasks that use Docker already specify 'become', but not all. This
change ensures that all tasks using the following modules have 'become'
set:
* kolla_docker
* kolla_ceph_keyring
* kolla_toolbox
* kolla_container_facts
It also adds 'become' for 'command' tasks that use the docker CLI.
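For example (the container name is illustrative):

    - name: Remove example container
      become: true
      command: docker rm -f example_container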
Change-Id: I4a5ebcedaccb9261dbc958ec67e8077d7980e496
Add the possibility to mount sources as volumes into containers, in a
"more than documentation" way. That will let us use kolla as a
replacement for devstack.
Partially implements: blueprint mount-sources
Change-Id: I4868ed6829bd037e1012d1f40c4a1d1b9995bf95
Several config file permissions are incorrect on the host. In general,
files should be 0660, and directories and executables 0770.
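A minimal sketch of the intended modes (paths are illustrative):

    - name: Ensure config directory permissions
      file:
        path: "{{ node_config_directory }}/example"
        state: directory
        mode: "0770"
      become: true

    - name: Ensure config file permissions
      file:
        path: "{{ node_config_directory }}/example/example.conf"
        mode: "0660"
      become: true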
Change-Id: Id276ac1864f280554e98b937f2845bb424d521de
Closes-Bug: #1821579
Services were being passed as a JSON list, then iterated over in the
neutron-server container's extend_start.sh script like this:
['neutron-server'
'neutron-fwaas'
'neutron-vpnaas']
I'm not actually sure why we have to specify services explicitly; it
seems liable to break if we have other plugins that need migrating.
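One hedged way to avoid the quoting problem is to render the list as a
plain space-separated string before handing it to the shell loop (the
variable name below is hypothetical):

    NEUTRON_MIGRATION_SERVICES: "{{ ['neutron-server', 'neutron-fwaas', 'neutron-vpnaas'] | join(' ') }}"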
Change-Id: Ic8ce595793cbe0772e44c041246d5af3a9471d44
When adding the rolling upgrade support, some upgrade procedures were
modified to pull images explicitly. This is done inconsistently between
services, and is a change in behaviour from Rocky and earlier releases.
This change removes all image pulling from upgrade tasks.
Change-Id: Id0fed17714235e1daed60b83b1f30620f097eb97
With newer Docker versions `systemctl show docker` returns:
MountFlags=shared
Instead of:
MountFlags=1048576
This fix accepts either value as valid to ensure the check does not
fail erroneously.
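A rough sketch of the adjusted check (task and variable names are
illustrative):

    - name: Get MountFlags for docker.service
      command: systemctl show --property=MountFlags docker.service
      register: docker_mount_flags
      changed_when: false

    - name: Fail if MountFlags is not shared
      fail:
        msg: "MountFlags must be set to shared for docker.service"
      when: docker_mount_flags.stdout not in ['MountFlags=shared', 'MountFlags=1048576']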
Closes-Bug: #1791365
Change-Id: I2bd626466d6a0e189e0d85877b2be8f2b4bb37f4
This allows neutron service endpoints to use custom hostnames, and adds the
following variables:
* neutron_internal_fqdn
* neutron_external_fqdn
These default to the old values of kolla_internal_fqdn or
kolla_external_fqdn.
This also adds a neutron_server_listen_port option, which defaults to
neutron_server_port for backward compatibility.
This option allows the user to differentiate between the port the
service listens on, and the port the service is reachable on. This is
useful for external load balancers which live on the same host as the
service itself.
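For example, in globals.yml (values are illustrative):

    neutron_internal_fqdn: "neutron.internal.example.com"
    neutron_external_fqdn: "neutron.example.com"
    neutron_server_listen_port: "9696"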
Change-Id: I87d7387326b6eaa6adae1600b48d480319d10676
Implements: blueprint service-hostnames
The neutron containers were not being restarted if only the ml2_conf.ini
file is changed. This is because the XenAPI ml2_conf.ini config task
registers a variable with the same name as the one registered by the
task that generates ml2_conf.ini for other services. Since the XenAPI
service is typically
not running, the tasks show as not changed, and the handler skips
restarting the container.
This change adds a second variable for XenAPI to avoid this shadowing.
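A minimal sketch of the fix (destination paths and variable names are
illustrative):

    - name: Copying over ml2_conf.ini
      template:
        src: ml2_conf.ini.j2
        dest: "{{ node_config_directory }}/neutron-server/ml2_conf.ini"
      register: neutron_ml2_conf_ini

    - name: Copying over ml2_conf.ini for XenAPI
      template:
        src: ml2_conf.ini.j2
        dest: "{{ node_config_directory }}/neutron-server/ml2_conf_xenapi.ini"
      register: neutron_ml2_conf_ini_xenapi  # distinct name, no shadowing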
Change-Id: I77819ed8defb8a7653e1e5aec92013b1d40fbf02
Closes-Bug: #1783268
With this change, an operator can stop a service container without
stopping all services on a host.
This change is the starting point for fast-forward upgrade support.
In later changes, new flags will be introduced to disable stopping
data plane services during upgrades.
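For example, assuming the action is exposed as a 'stop' subcommand (the
inventory path is illustrative):

    kolla-ansible -i ./multinode stop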
Change-Id: Ifde7a39d7d8596ef0d7405ecf1ac1d49a459d9ef
Implements: blueprint support-stop-containers