Implicit defaults hide issues with overriding ansible variables as we
pass values in from deploy-steps.j2.
Remove implicit defaults for variables passed into deploy steps via
ansible vars and expect them to take only the values defined in the
calling deploy-steps.j2 playbook template. Add the missing params and
vars needed for templates to propagate ansible values to the external
deploy/upgrade, upgrade/update and post upgrade steps playbooks.
Make DockerPuppetDebug a boolean to align with the other booleans we
pass into deploy steps via ansible vars. Fix its processing in
docker-puppet.py: the default of '' for DockerPuppetDebug was converted
into the string 'false' in the deploy steps tasks playbook, which then
always evaluated as True in docker-puppet.py.
Related-Bug: #1799914
Change-Id: Ia630f08f553bd53656c76e5c8059f15d314a17c0
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
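A minimal Ansible sketch of the intended pattern, with variable and play
names assumed for illustration only: the calling template defines the
value once as a real boolean and the consuming task uses it without an
implicit default.

    - hosts: overcloud
      vars:
        docker_puppet_debug: false   # defined once by the caller, as a real boolean
      tasks:
        - name: Show the effective debug flag
          debug:
            msg: "docker-puppet debug is {{ docker_puppet_debug | bool }}"
          # No default() filter here: if the caller stops passing the value,
          # the task fails loudly instead of silently masking the override.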
When docker was used, its "create host directory tree" feature was
used. It apparently created directories with the "container_var_lib_t"
type, which prevents podman containers from accessing the content and
results in AVC errors (permission denied).
The following patch ensures we run a recursive chcon.
We're using the "command" module instead of the "file" module because
ansible doesn't like broken symlinks (in fact, they are symlinks with
relative paths that resolve within the containers).
Change-Id: I20d00c79fc898b0c4e535662ee6a70472e075b36
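A hedged sketch of the relabeling approach described above; the target
path and SELinux type below are assumptions for illustration, not the
exact values used by the patch.

    - hosts: overcloud
      become: true
      tasks:
        - name: Relabel the config directory so podman containers can read it
          command: chcon -R -t container_file_t /var/lib/config-data
          # The "file" module with recurse would choke on the broken
          # (container-relative) symlinks, hence the plain command here.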
When a role count is 0, we can create the deployment resources
conditionally.
Closes-Bug: #1671859
Change-Id: I467b9ded1a1b33d520cb69aa86b253a0552643f7
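A hedged Heat sketch of guarding a resource on a role count, using an
invented role name and a placeholder resource type rather than the real
deployment resources.

    heat_template_version: rocky
    parameters:
      ComputeCount:
        type: number
        default: 0
    conditions:
      compute_count_is_zero:
        equals: [{get_param: ComputeCount}, 0]
    resources:
      # Placeholder resource; the real templates would guard the role's
      # deployment resources with the same condition.
      ComputeDeploymentMarker:
        type: OS::Heat::Value
        condition: {not: compute_count_is_zero}
        properties:
          value: compute-deployments-created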
We use the update_identifier ansible variable to check if we need to
re-run deployment tasks. Though there is no actual bug, as we use the
DeployIdentifier heat param for it, it's a little confusing
(UpdateIdentifier was meant for package updates).
This also removes usage of UpdateIdentifier/update_identifier in
all_nodes_config.j2.yaml. We can deprecate/remove the heat param in a
subsequent patch.
Change-Id: I36ed62ae605a2d8f8f139b50646144b143d5e5f4
Because we call ansible to run heat, which in turn executes ansible for
HostPrep and RoleConfig, we need to be able to pass the
ansible_python_interpreter to be used for the ansible-playbook execution
via the ansible heat hook. This change adds a PythonInterpreter
heat parameter that can be used to change it from the default
/usr/bin/python to something like /usr/bin/python3.
Change-Id: Idfefe1959e5b95b7d54ce8cb5c2a569225d50847
Related-Blueprint: python3-support
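A hedged usage sketch as an environment file, assuming the parameter is
set via parameter_defaults.

    parameter_defaults:
      # Interpreter used for the nested ansible-playbook execution run
      # via the ansible heat hook.
      PythonInterpreter: /usr/bin/python3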
HostPrepConfig was using the old way (Heat) to run Ansible. We don't
need it anymore thanks to config-download.
It removes some technical debt and reduces the number of interfaces we
use to configure software.
Change-Id: I2041e6892de76b0ed04d7497e3f9064bfaf58270
This can be used to control whether puppet modules are consumed
from the baremetal host or from the container. Our default
is to consume them from the host so that deployment
archive tarballs can be used to extract puppet modules from
the host.
Since I61e35d8118c1de4c2976f496e8a6c9c529f3d91f we have had
puppet-tripleo in our containers as well, however, so using that
location would also be possible.
Change-Id: I73026e66bcfafd1c582916141b5b1cf0ce0dc36c
There was also a special flag for FFU that triggered repo setup only
on the bootstrap node, so switch this to use the per-service bootstrap
name instead.
Change-Id: I32f963a002399af4911acbf507312f378aac3599
Partial-Bug: #1792613
When we were upgrading multiple nodes at the same time,
e.g. controllers, and a task on one of the nodes failed, the other
nodes would keep upgrading. This is undesirable and can be fixed by
adding any_errors_fatal to the Ansible plays.
Change-Id: Iad2b5e32e955da41af4d2b8dd8ad8aa1eb5dffa9
Closes-Bug: #1804468
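A minimal sketch of the play-level setting; the host group and the
placeholder task are illustrative.

    - hosts: Controller
      any_errors_fatal: true   # a failure on any node ends the play for all nodes
      tasks:
        - name: Placeholder for the upgrade step tasks
          debug:
            msg: "upgrading {{ inventory_hostname }}"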
To continue the work that was done in
I711dbb00a9c34dbd96ef179ef41bff281b0001d1, we also need to skip the common
deploy tasks if --skip-deploy-identifier is passed by the operator.
When using --skip-deploy-identifier, the UpdateIdentifier is set to
None.
Ansible doesn't see None as "", so we really need to test whether the
variable is defined or not. This patch changes the logic to test that.
We also support the case where the variable is set to "" and consider
it as empty, which means we want to skip the deploy/update tasks.
The same is done for the update playbooks, which include tasks from
the common deploy.
It does not replicate the exact condition used in deploy_steps_playbook:
there is no need to also check whether the
/var/lib/docker-container-startup-configs.json file exists, because it
was created during the initial deployment.
This fixes the bug where --skip-deploy-identifier wasn't honored during
stack updates.
Co-Authored-By: Thomas Hervé <therve@redhat.com>
Co-Authored-By: Sam Doran <sdoran@redhat.com>
Change-Id: Ibab17dcaeebea65135fca4f40562109c90f36c27
Related-Bug: #1796924
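A minimal sketch of the condition described above; the variable name and
the placeholder task are illustrative, not the exact ones in the
playbooks.

    - name: Run the common deploy tasks unless the deploy identifier is skipped
      debug:
        msg: "common deploy tasks would run here"
      # Covers both --skip-deploy-identifier outcomes: the variable rendered
      # as undefined/None, and the variable rendered as "".
      when:
        - update_identifier is defined
        - update_identifier != ""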
container_cli will be used later by update, upgrade and post upgrade tasks.
This patch is separated from the actual tasks so that we can iterate
quickly across multiple patches.
Change-Id: I1ed7dec0019113f1259bce986f354723237f6a25
We should pass in the common vars to all the common plays in
deploy-steps.j2 so that tasks will have them available. Some of these
parameter-driven variables were never actually wired up, so they didn't
work to begin with (such as enable_puppet/enable_debug).
Change-Id: I830e1ae21fe3e278a5f7591065d066c0a6883a9a
Closes-Bug: #1785635
To match the previous functionality when not using config-download, the
common deploy step tasks should be skipped for already deployed nodes
when using --skip-deploy-identifier.
This patch adds a task to check if one of the json configuration files
created by the common tasks already exists. If it does, and
--skip-deploy-identifier has caused an empty DeployIdentifier parameter
value, the tasks will be skipped for that node.
Change-Id: I711dbb00a9c34dbd96ef179ef41bff281b0001d1
Closes-Bug: #1796924
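A hedged sketch of the check described above; the registered variable
name and the placeholder task are assumptions, while the file path is
the one referenced in the related notes.

    - hosts: overcloud
      tasks:
        - name: Check whether the common tasks already configured this node
          stat:
            path: /var/lib/docker-container-startup-configs.json
          register: startup_configs_stat

        - name: Run the common deploy step tasks
          debug:
            msg: "common deploy step tasks would run here"
          when: >
            (deploy_identifier is defined and deploy_identifier != "")
            or not startup_configs_stat.stat.exists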
So far the tasks for external update/upgrade were not using the step
mechanism like other tasks do; we had a single step. As external
deploy/update/upgrade tasks are being used for more things nowadays,
it's likely that we'll need to go towards a similar model like we have
for deploy/update/upgrade tasks -- proper usage of steps.
For now we have just 2:
* Step 0 for setting global facts, and performing validations.
* Step 1 for actual update/upgrade tasks. (There's an upcoming change
to run online data migrations in step 1).
Change-Id: I1933bd0eedab71caab56c0e5d93ba7927fb7c20f
Partial-Bug: #1793332
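A hedged sketch of how the two external steps could be iterated in a
play; the host group and the placeholder task are illustrative.

    - hosts: Undercloud
      tasks:
        # Step 0: set global facts and run validations.
        # Step 1: run the actual update/upgrade tasks.
        - name: Run the external update tasks for each step
          debug:
            msg: "external update/upgrade tasks for step {{ item }}"
          loop: [0, 1]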
This adds a tag step[1-5] to each of the plays within the jinja2 loop to
create our 5 deployment steps. Using these tags, it's possible to run
these plays individually if desired.
Change-Id: Ic705afbf174b4597d98c2b83041ff88dd8d6664c
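A minimal sketch of the tagging for one of the generated plays, with a
placeholder task. An individual step could then be re-run with
something like: ansible-playbook deploy_steps_playbook.yaml --tags step3

    - hosts: overcloud
      tags:
        - step3
      tasks:
        - name: Deploy step 3 tasks
          debug:
            msg: "running deploy step 3 on {{ inventory_hostname }}"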
Create a new parameter in TripleO: ContainerCli.
The default is set to 'docker' for backward compatibility, but it also
allows setting 'podman'.
When podman is selected, the right commands will be run so that
docker-puppet can configure the containers with Podman as the container
backend.
It removes the tripleo_logs:/var/log/tripleo/ mount that was used
by tripleo-ui; we shouldn't do that here, and we'll create a bind mount
in the tripleo-ui container later.
It runs puppet with FACTER_hostname only if NET_HOST is disabled.
Change-Id: I240b15663b720d6bd994d5114d43d51fa26d76cc
Co-Authored-by: Martin André <m.andre@redhat.com>
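A minimal usage sketch as an environment file.

    parameter_defaults:
      # Default stays 'docker' for backward compatibility.
      ContainerCli: podman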
When blacklisting all servers from the primary role, the yaql expression
to get the bootstrap_server_id value fails as it tries to index the list
at the 0th element. In this case, default the bootstrap_server_id value
to a constant string which won't match any actual server ids.
Change-Id: Ibb26245156675f64709bab075875ce4b498b4db6
Closes-Bug: #1785665
Not all vars were getting passed to deploy-steps-tasks.yaml when using
config-download. This didn't cause any issue because all the vars have
default values, but the user-specified values should be honored as well.
Change-Id: I5972e1c674cf9008366c2bb10b54eb975ab8cb93
Closes-Bug: #1785635
Update the play for the server pre and post steps so that the tasks run
in parallel across all roles, instead of doing one role at a time. By
not using the "when" attribute, and relying on the tripleo_role_name var
for the list of deployments, we can force these tasks to run in parallel
across all roles.
Change-Id: I83a4deaa68d5788edb5ab13652bb30c762f337d8
Running `openstack overcloud external-update run` will update all
external services. This commit adds possibility of running `openstack
overcloud external-update run --tags ceph` to specifically update just
Ceph. It works analogously for upgrades.
Change-Id: Ic1786b6dbfa54516bfb836b450fc35452dca8cb5
Partial-Bug: #1783949
Composable service templates can now define external_update_tasks and
external_upgrade_tasks. They are meant for update/upgrade logic of
services deployed via external_deploy_tasks. The external update
playbook first executes external_update_tasks and then
external_deploy_tasks, the procedure for upgrades works
analogously. All happens within a single playbook, so variables or
fact overrides exported from the update/upgrade tasks will be
available to the deploy tasks during the update/upgrade procedure.
Partial-Bug: #1783949
Change-Id: Ib2474e8f69711cd6610a78884d5032ffd19ad249
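A hedged sketch of the ordering described above; the host group, fact
name and placeholder tasks are illustrative.

    - hosts: Undercloud
      tasks:
        - name: External update tasks run first
          set_fact:
            service_needs_restart: true   # example of an override exported here
        - name: External deploy tasks run next, in the same play
          debug:
            msg: "restart needed: {{ service_needs_restart }}"
          # Facts set by the update tasks above are visible here because
          # both task sets execute within a single playbook run.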
"undercloud" host is too opinionated and hostnames can change. We should
rather apply the tasks to the Undercloud HostGroup, which contains one
host for now: the actual undercloud hostname.
So this patch switches "undercloud" to "Undercloud", so that even when
the hostname isn't "undercloud", the external tasks will run correctly
on this host.
Change-Id: I7200f930387406e6cc8e6fee6d5278768074c892
Closes-Bug: #1784910
host_prep_tasks are run from deploy_steps_playbook.yaml, so there's no
need to also run them as part of the {{role}}HostPrepDeployment
resources.
Change-Id: If1bf6dda19e6e0b875463c421f9504efab85251b
Problem: RHEL and CentOS 8 will deprecate the usage of Yum.
From DNF release note:
DNF is the next upcoming major version of yum, a package
manager for RPM-based Linux distributions.
It roughly maintains CLI compatibility with YUM and defines a strict API for
extensions.
Solution: Use "package" Ansible module instead of "yum".
"package" module is smarter when it comes to detect with package manager
runs on the system. The goal of this patch is to support both yum/dnf
(dnf will be the default in rhel/centos 8) from a single ansible module.
Change-Id: I8e67d6f053e8790fdd0eb52a42035dca3051999e
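A minimal sketch of the module swap; the package name is only an
example.

    - hosts: overcloud
      become: true
      tasks:
        - name: Install a package with whichever manager the system provides
          package:
            name: jq          # "package" picks yum or dnf as appropriate
            state: present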
Deploy steps run the docker puppet steps with a max of
3 processes. This takes about 30 minutes to finish the
containers' configuration for a typical overcloud (in CI).
Double the number to allow more puppet runs to finish their
tasks sooner.
Change-Id: Id0b0371e7f21f56528027921732ade786525d659
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
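A hedged sketch of raising the concurrency from an environment file,
assuming the knob is exposed as a DockerPuppetProcessCount parameter.

    parameter_defaults:
      # Number of concurrent docker-puppet processes used during the
      # deploy steps; parameter name assumed from the description above.
      DockerPuppetProcessCount: 6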
The include module is deprecated. The alternative is to use import_tasks
for static task file inclusion and include_tasks for dynamic task file
inclusion (like using with_items).
Change-Id: I8b3bf3ba3d7c2cfbe1187218c51f619e65efe0e5
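A minimal tasks-file sketch of the two replacements; the file names and
the loop variable are placeholders.

    # Static inclusion, processed at playbook parse time:
    - import_tasks: common_tasks.yaml          # placeholder file name

    # Dynamic inclusion, processed at run time (e.g. when looping):
    - include_tasks: "{{ item }}"
      with_items: "{{ extra_task_files | default([]) }}"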
We drop the post_update_steps_playbook, and execute post_update_tasks
as part of the update_steps_playbook. This will ensure that
post_update_tasks are executed, and they're executed in accordance
with the `serial: 1` ordering that update_steps_playbook is using. (We
want to avoid an issue similar to what we've had in bug #1776206).
Change-Id: I15a984172cd5532bc966269d8c68f27b5703733e
Closes-Bug: #1778471
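A hedged sketch of the resulting play layout; the host group and the
placeholder tasks are illustrative.

    - hosts: overcloud
      serial: 1      # one node at a time, as the update playbook already does
      tasks:
        - name: Update tasks for this node
          debug:
            msg: "update tasks on {{ inventory_hostname }}"
        - name: Post-update tasks, now inside the same serialized play
          debug:
            msg: "post_update tasks on {{ inventory_hostname }}"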
The ansible command generated in ansible-playbook-command.sh has
"--become" in it by default.
This commit removes "become: true" where it is used, to avoid confusion
in the deploy steps. Today we explicitly set "become: false" in
deploy-steps.j2 for certain actions, so there is no point in also having
"become: true" for the other ones.
We have a release note [1] that explains why the "become" was
introduced, but maybe we can revisit it.
[1] releasenotes/notes/use-become-true-in-deploy-steps-playbook-01decb18d895879f.yaml
Change-Id: Ic666b4ecaecf0591dd8bb0406f239649b20b9623
We should re-run host_prep_tasks as part of the minor update, to make
sure the host is ready for starting the updated containers. The right
place for them is between update tasks and deployment tasks.
This is important in case we deliver changes to host_prep_tasks during
minor update, or if update_tasks do something that would partially
undo the host preparation, e.g. clear/delete some directories on the
host to get rid of previous state.
Change-Id: Ic0a905a8c4691cbe75113131bd84e8a39dea046d
Related-Bug: #1776206
In order to make the deployment more flexible, we should allow for the
ansible hosts to be configurable from the old undercloud/overcloud
concepts. Rather than assume 'undercloud'/'overcloud', we should allow
for these to include the same set of hosts. This change introduces
'deployment_source_hosts' and 'deployment_target_hosts' variables that
can be used to control where the tasks are run.
Change-Id: I249cc7e179bc1423788aab967c4b2e3f9ffc81d4
Related-Blueprint: all-in-one
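A hedged sketch of the configurable host groups; the default group
names and the placeholder tasks are assumptions for illustration.

    - hosts: "{{ deployment_source_hosts | default('Undercloud') }}"
      tasks:
        - name: Tasks that run from the deployment source
          debug:
            msg: "external deploy tasks would run from {{ inventory_hostname }}"

    - hosts: "{{ deployment_target_hosts | default('overcloud') }}"
      tasks:
        - name: Tasks that run on the deployment targets
          debug:
            msg: "deploy step tasks would run on {{ inventory_hostname }}"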