The /var/lib/docker-puppet directory is deprecated; its contents can
now be found under /var/lib/container-puppet. We don't have Docker
anymore, so we want to avoid confusion in the directory names. The old
directory still exists, but a README file points to the right
directory.
Change-Id: Ie3d05d18e2471d25c0c4ddaba4feece840b34196
With this change we add an ansible variable called
'tripleo_minor_update', set to true only during the update_steps_playbook
which gets run during a minor update.
Then, inside common/deploy-steps-tasks, when starting containers with
paunch, we export this 'tripleo_minor_update' ansible variable and
push it into the 'TRIPLEO_MINOR_UPDATE' environment variable.
In change Id1d671506d3ec827bc311b47d9363952e1239ce3 we will then
use the env variable and export it to the restart_bundles in order
to detect whether we're inside a minor update workflow (as opposed to
a redeploy, aka stack update). The testing that has been done is
described in that change.
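The environment variable could be consumed on the container side
roughly like this (a minimal sketch; `is_minor_update` is a
hypothetical helper name, not code from this change):

```python
import os


def is_minor_update():
    """Return True when running inside a minor update workflow.

    Illustrative only: the real consumer of TRIPLEO_MINOR_UPDATE lives
    in the restart_bundles change referenced above. Anything other than
    the string "true" counts as a regular redeploy.
    """
    return os.environ.get("TRIPLEO_MINOR_UPDATE", "false").lower() == "true"
```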
Co-Authored-By: Damien Ciabrini <dciabrin@redhat.com>
Change-Id: Ib3562adbd83f7162c2aeb450329b7cc4ab200fc2
This is used to tell podman where it must push its logs.
Two scripts use it:
- docker-puppet.py
- paunch (near future - see https://review.openstack.org/#/c/635438/)
This will allow us to get the stdout for all containers, even when they
are removed before we can actually run "podman logs container_name".
Related-Bug: #1814897
Change-Id: Idc220047d56ce0eb41ac43903877177c4f7b75c2
Currently, the docker daemon runtime has a default --log-driver set
to journald.
Podman's lack of a daemon prevents such a global setting, meaning
we have to set that driver for each and every container when we
either create or run them.
Notes:
- podman only supports "json-file", and its output is not even valid
json.
- docker's json-file driver doesn't support the "path" option, making
its output unusable in the end: logs end up in
/var/lib/docker/containers/ID/ID-json.log
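As a sketch, the per-container flags described above have to be
repeated on every create/run; treat the exact flag spelling as an
assumption derived from this commit text, not a verified CLI reference:

```python
def podman_run_args(name, image, log_path):
    """Build a podman run command line with an explicit log driver.

    Illustrative sketch only: with no daemon there is no global
    --log-driver default, so each container gets the flags explicitly.
    """
    return [
        "podman", "run", "--detach", "--name", name,
        "--log-driver", "json-file",               # the driver podman supports
        "--log-opt", "path={}".format(log_path),   # docker's json-file lacks this
        image,
    ]
```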
Related-Bug: #1814897
Change-Id: Ia613fc3812aa34376c3fe64c21abfed51cfc9cab
Currently this assumes all tasks will run on the primary controller,
but because of composable roles, that may not be the case.
For example, if you deploy keystone on any role other than the
role tagged primary (e.g. Controller by default), we don't create
any of the users/endpoints because the tasks aren't written to
the role unless keystone actually runs there.
Closes-Bug: #1792613
Change-Id: Ib6efd03584c95ed4ab997f614aa3178b01877b8c
Implicit defaults hide issues with overriding ansible variables as we
pass values in from deploy-steps.j2.
Make no implicit defaults for variables passed into deploy steps via
ansible vars. Only expect those to take the values defined in the
calling deploy-steps.j2 playbook template. Add the missing params and
vars for templates to propagate ansible values for the external
deploy/upgrade, upgrade/update and post upgrade steps playbooks.
Make DockerPuppetDebug a boolean to align with the other booleans we
pass into deploy steps via ansible vars. Fix its processing in
docker-puppet.py: the default of DockerPuppetDebug: '' is converted
into the string 'false' in the deploy steps tasks playbook, and that
non-empty string then always evaluates as True in docker-puppet.py.
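The truthiness bug described above can be reproduced in a few lines of
Python (variable names chosen for illustration):

```python
# The heat default '' is rewritten to the string 'false' by the deploy
# steps tasks playbook, and bool() on any non-empty string is True.
docker_puppet_debug = ''            # heat default for DockerPuppetDebug
if not docker_puppet_debug:
    docker_puppet_debug = 'false'   # what the playbook substitutes
broken = bool(docker_puppet_debug)  # True, regardless of the user setting
# A string-aware parse (roughly what the fix amounts to) behaves correctly:
fixed = str(docker_puppet_debug).lower() in ('true', 'yes', '1')
```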
Related-Bug: #1799914
Change-Id: Ia630f08f553bd53656c76e5c8059f15d314a17c0
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
In docker-puppet.py, we only create docker-puppet.sh script if it
doesn't exist yet. It's not useful to re-create it and it can be
dangerous to regenerate the script while docker-puppet.py is running,
since we bind mount the script to the containers.
It's possible that during a multi-process task, the script changes and
then the entrypoint fails to run correctly if the interpreter is not
present in the script.
This patch makes sure that we create the script only when needed, and
also that we remove it before running docker-puppet.py, which will be
useful when doing clean deployments or upgrades.
Context: https://github.com/containers/libpod/issues/1844
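A minimal sketch of the create-only-if-missing behavior (function name
hypothetical, not the actual code in docker-puppet.py):

```python
import os


def ensure_docker_puppet_sh(path, contents):
    """Write the script only if it is absent; return True if written.

    The script is bind mounted into running containers, so rewriting it
    while docker-puppet.py is running could leave a container entrypoint
    reading a half-written file, without its interpreter line.
    """
    if os.path.exists(path):
        return False
    with open(path, "w") as f:
        f.write(contents)
    os.chmod(path, 0o755)
    return True
```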
Change-Id: I0ac69adb47f59a9ca82764b5537532014a782913
With the current configuration, HAProxy logs go to the host journal.
This isn't really friendly when you want to debug issues with this
service.
This patch ensures HAProxy logs go to a dedicated file, using the
syslog facility set in its configuration.
Depends-On: I8fee040287940188f6bc6bc35bdbdaf6c234cbfd
Change-Id: Ia615ac07d0c559deb65e307bb6254127e989794d
This can be used to control whether puppet modules are consumed
from the baremetal host or from the container. Our default
is to consume them from the host so that deployment
archive tarballs can be used to extract puppet modules from
the host.
Since I61e35d8118c1de4c2976f496e8a6c9c529f3d91f we've had
puppet-tripleo in our containers as well, however, so using that
location would also be possible.
Change-Id: I73026e66bcfafd1c582916141b5b1cf0ce0dc36c
Allow running docker-puppet.py in
debug mode, depending on the value
of the ansible variable docker_puppet_debug.
This variable takes its value from DockerPuppetDebug,
which is set to true in the env file
environments/config-debug.yaml.
Change-Id: I7c88aa22dce3396c6a79843ac13db479ed987f9d
Now that we are running this on Fedora 28 with python3, we need to use
python3 to run python scripts in playbooks.
Depends-On: I2c471724374da44eeddc4680b268bc362572ee27
Closes-Bug: #1802531
Change-Id: I42b18b228bfe361d19b580a853328c1a6c896257
These tasks should have check_mode: no set so that they still run
during check mode, as the variables they register are used by later
tasks. Otherwise, ansible in check mode fails with undefined variable
errors.
Also, some tasks may fail because not all of their requirements are
available, since those requirements were not created by previous tasks
that were also run in check mode.
This adds ignore_errors to these tasks, setting the value to the
boolean ansible_check_mode, which is provided by ansible and set based
on whether or not --check was passed on the ansible command line.
Change-Id: I84bc3c14ede37959a4078fd14ce4661b7bd23f84
With this patch, we're able to deploy a "standalone" stack using
podman on a fully-enabled SELinux system.
Change-Id: I4bfa2e1d3fe6c968c4d4a2ee1c2d4fb00a1667a1
Adds initial check mode support for the paunch container startup
configuration and kolla config files. This cleans up the formatting of
the generated files so that the diff shown during check mode with
--diff is useful.
We can't actually run paunch during check mode as it doesn't yet have
any support for a dry run mode.
Change-Id: I9add7b9fda50847c111e91735bd55a1ddf32f696
Adds check mode support for docker_puppet_tasks.
Since it's not possible to reliably determine what these tasks do, we
can't actually run them to get an idea of what might be changed. We can
however show the diff of the json file to get an idea of what would be
run.
Change-Id: I19e8bc9eb93d8acc8ee7d737770f9cc7e63f7a27
Adds check mode support for docker_puppet. The updated json file is
written to /var/lib/docker-puppet/check-mode/docker-puppet.json
during check mode and then diffed with the existing version at
/var/lib/docker-puppet/docker-puppet.json.
When docker-puppet.py is run during check mode, the updated json file
under the check-mode directory is passed to the command. All generated
config files are then written under /var/lib/config-data/check-mode,
which is then recursively diffed with the existing config under just
/var/lib/config-data to report on all changed config files.
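The recursive diff step could be sketched with the standard library's
filecmp module (directory roles follow the commit message; the actual
implementation may differ):

```python
import filecmp


def changed_config_files(existing_dir, check_mode_dir):
    """Collect names of config files that differ between the existing
    tree and the check-mode tree, or that only exist in the latter.

    Sketch only: filecmp.dircmp reports basenames, not full paths, and
    compares files shallowly (by os.stat signature) by default.
    """
    changed = []

    def walk(cmp):
        changed.extend(cmp.diff_files)   # present in both, but different
        changed.extend(cmp.right_only)   # new in the check-mode tree
        for sub in cmp.subdirs.values():
            walk(sub)

    walk(filecmp.dircmp(existing_dir, check_mode_dir))
    return changed
```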
Change-Id: I5c831e9546f8b6edaf3b0fda6c9fbef86c825a4c
Adds check mode support for puppet host tasks.
This works by writing the new puppet host manifest under
/var/lib/tripleo-config/check-mode, and diffing it against the existing
version of the manifest.
Puppet is also run with --noop, so that it only reports on what changes
would have been made.
It also uses the check mode hiera configuration at
/etc/puppet/check-mode/hiera.yaml if it exists so that the updated hiera
data is also accounted for when puppet runs with --noop.
Depends-On: Ibe0c2ab79c35f04ce51e7a1ade0e8ff72b430163
Change-Id: I112b63096c8dce05176b0939a7678bec02987294
with_dict is replaced by ansible's loop:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_loops.html#with-dict
This migrates tasks using with_dict over to use loop instead.
Additionally, when using loop (or with_dict), the entire loop item is
logged by default. This makes these tasks very verbose since we're
looping over large json/yaml files. Instead, use loop_control and label
to only log the item key. The entire data structure already exists in
the config-download directory anyway, so there's no need to log the
whole thing to the console.
Change-Id: I1fc7431dfc662212b6ca64f4f738760f25b0c30b
Adds the following tags to relevant tasks in deploy-steps-tasks.yaml
that are common to all roles:
- host_config
- container_config
- container_config_tasks
- container_config_scripts
- container_startup_configs
The tags are tool agnostic, so hopefully they won't have to be updated
over time. They allow users to run only specific parts of the common
tasks.
Change-Id: Ia7476da222218411caddae887f99c029b4bccf23
This commit removes the "when failed" from the task title to eliminate
confusion.
These tasks always run to show the debug output for the previous task,
regardless of whether the previous task failed or not. They will
show the debug output as long as the previous task finished (success or
failure).
Change-Id: I4e52bdc18885f13793550e5032fb1316a13b248c
As Podman doesn't create host locations for bind mounts, we have to
ensure the directories actually exist.
SELinux labels are also important, since Podman has SELinux enabled by
default, and there is currently no way to disable it like in Docker.
Change-Id: Ic1bede203e8199a296944273cb334027dab940fe
Create a new parameter in TripleO: ContainerCli.
The default is set to 'docker' for backward compatibility, but it can
also be set to 'podman'.
When podman is selected, the right commands will be run so docker-puppet
can configure the containers when Podman is the selected container
library backend.
It removes the tripleo_logs:/var/log/tripleo/ mount that was used
by tripleo-ui, but we shouldn't do that here. We'll create a bind mount
in the tripleo-ui container later.
It runs puppet with FACTER_hostname only if NET_HOST is disabled.
Change-Id: I240b15663b720d6bd994d5114d43d51fa26d76cc
Co-Authored-by: Martin André <m.andre@redhat.com>
There is a CI blocker LP 1782598 to deal with ASAP.
Then, we can fix this in the scope of
https://review.openstack.org/#/c/584119/
This reverts commit 915c1ebdd79fecb57a0719997a56c34685307431.
Change-Id: I8f03d8a588e58202c3628c72144a232729041c89
If an operator has non-paunch managed containers (ceph/openshift), we
may not want to fail the deployment if those are unhealthy.
Change-Id: Ifd3e67a66b3224d0ed5f7ef12ba27b06f78c8556
After starting the containers, we should make sure they are healthy
before continuing. If any containers are unhealthy we should fail
quickly and provide output showing which container is unhealthy.
Change-Id: I785ddb45779b6699fc839fdddb9c804dd1b1da5d
The ansible command generated in ansible-playbook-command.sh has
"--become" in it by default.
This commit removes "become: true" where it is used, to avoid confusion
in the deploy steps. Today we explicitly set "become: false" in
deploy-steps.j2 for certain actions, so there is no point in also
having "become: true" for the other ones.
We have a release note [1] that explains why the "become" was
introduced, but maybe we can revisit it.
[1] releasenotes/notes/use-become-true-in-deploy-steps-playbook-01decb18d895879f.yaml
Change-Id: Ic666b4ecaecf0591dd8bb0406f239649b20b9623
- do not use set_fact when a lookup can be done directly in the task
- use multi-line YAML for easier legibility
- ignore errors in file lookup plugin when file does not exist and set defaults
Change-Id: I832a2ec34f4ed4a87e30d0c88f4c60bcf2f4c151
"role_name" is internal to Ansible, we should not use it.
This patch uses the new variable set in the inventory to use a specific
TripleO var: tripleo_role_name which is the TripleO role name and not
the Ansible role names, both things are very different.
Depends-On: I57c4eac87e2f96dfe5490b111cd2508505715d56
Change-Id: Iecaf6f1b830e65be2f9e2e44431054fe46f9f565
Related-Bug: #1771171
set_fact logs the fact value. In the case of reading the role_data_*
files, this is very verbose as the files can be large. Use no_log: True
to make these tasks less verbose. The content is saved in the
config-download output already, so no useful info is lost.
Change-Id: Ie6f75113194961628a0c9bdfbfbf5d88a18059eb
Closes-Bug: #1760996
Add blank lines between the Ansible tasks and plays in the stack
outputs. This is an improvement in readability for the user.
Change-Id: I52ebd9081cacf213ac29f1d24e73db6ea5cfe33f
This wires in a heat parameter that can be used to disable the
baremetal (Puppet) deployment tasks. Useful for testing
some lightweight/containers only deployments.
Change-Id: I376418c618616b7755fafefa80fea8150cf16b99
Without the extra bool cast, this when block gets evaluated as a
string. Given that the string is always non-empty, this means
enable_debug has been enabled regardless of the end user's setting.
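The failure mode is easy to demonstrate in Python, whose truthiness
rules apply to the rendered value here:

```python
# Without the |bool filter, the conditional sees a string, and any
# non-empty string is truthy.
enable_debug = 'false'              # value rendered into the variable
without_cast = bool(enable_debug)   # True: the string is non-empty
# Roughly what `enable_debug|bool` does instead:
with_cast = enable_debug.strip().lower() in ('true', 'yes', '1')
```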
Change-Id: I9f53f3bca4a6862966e558ea20fe001eabda7bcf
Closes-bug: #1754481
In https://review.openstack.org/#/c/525260/, we moved the creation of
various RoleData driven config files to deploy-steps-tasks.yaml, and to
consume the values from various role_data_* variables that were written in
the inventory (see https://review.openstack.org/#/c/528354/).
However, we were already downloading and saving the RoleData to separate
files via config download. We should consume from those files instead of
the inventory. That has the advantage that one can quickly modify and
iterate on the local files, and have those changes applied. That is
harder to do when these values are in the inventory, and not possible to
do when using dynamic inventory.
Since the tasks will fail trying to read from the files when not using
config-download, conditional local_action tasks that use the stat module
first verify the existence of the files before attempting to read their
contents. If they don't exist, the values fall back to whatever has been
defined by the ansible variable.
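The fallback logic amounts to the following sketch (helper name and
JSON file format are assumptions for illustration; the real tasks use
the stat module and local_action):

```python
import json
import os


def load_role_data(path, ansible_default):
    """Prefer the file written by config-download if it exists on disk;
    otherwise fall back to the value already defined by the ansible
    variable."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return ansible_default
```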
Change-Id: Idfdce6f0a778b0a7f2fed17ff56d1a3e451868ab
Closes-Bug: #1749784
This makes it clearer that the previous task failed, which isn't
immediately evident from the ansible task output due to the failed_when
on those tasks.
Change-Id: I765208d5865f6e5a292e5b52c572e2e79540c663
Closes-Bug: #1748443
Add the {{step}} var to a couple of task names from
deploy-steps-tasks.yaml where it was missing. Makes the output a bit
more consistent and user friendly.
Change-Id: I0a1b3f7f62543107b2f82ee57d75e65ecc7e02d4
This moves the writing of various files that are consumed by the
tasks in deploy-steps-tasks.yaml, hopefully this is clearer, and
it also means we can drive the creation of these files via ansible
directly using https://review.openstack.org/528354
Change-Id: I173d22ebcbc986cefdef47f81298abe10ce8591b
In the event a step has no services defined, we must still write the
config, as this is needed if services are disabled on update such that
a step becomes empty - we must run paunch on every step or the cleanup
of the "old" services does not happen.
Closes-Bug: 1742915
Change-Id: Iee01002f56b5311560557f2bf6f053601b9d43d7