The master branch now points to rocky, so HOT templates should
specify that they might contain features from the Rocky release [1].
This change also updates the yaml validation to use only the latest
heat_version alias. There are cases in which we will need to set the
version for specific templates (i.e. mixed versions), so a variable
is added to assign specific templates to specific heat_version
aliases, avoiding the introduction of errors from bulk replacing the
old version in new releases.
[1]: https://docs.openstack.org/heat/latest/template_guide/hot_spec.html#rocky
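A minimal sketch of the expected template header (the description
text is illustrative):

  heat_template_version: rocky

  description: >
    Example template pinned to the rocky heat_template_version alias.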
Change-Id: Ib17526d9cc453516d99d4659ee5fa51a5aa7fb4b
Instead of using host_prep_tasks (which are part of the deployment
tasks), we'll use the upgrade tasks that are now well known and were
tested in previous releases, when we containerized the overcloud.
Depends-On: Id25e6280b4b4f060d5e3f78a50ff83aaca9e6b1a
Change-Id: Ic199c7d431e155e2d37996acd0d7b924d14af2b7
The Ansible yum module installs all packages available in the repo
if you use an asterisk. Instead, we will use yum -y update name*.
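A rough before/after sketch (the package glob is illustrative):

  # Problematic: with a glob, state=latest pulls in every matching
  # package from the repo, even ones that are not installed
  - name: Update cinder packages
    yum: name=openstack-cinder* state=latest

  # Instead: only update packages that are already installed
  - name: Update cinder packages
    shell: yum -y update openstack-cinder*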
Change-Id: I8e71367ae91faa06313711c6a954c61af705fd8f
Resolves: rhbz#1549845
Using the host_prep_tasks interface to handle undercloud teardown
before we run the undercloud install.
The reason for not using upgrade_tasks is that the existing tasks
were created for the overcloud upgrade first, and there is too much
logic in them right now for us to easily re-use the bits for the
undercloud. In the future, we'll probably use upgrade_tasks for both
the undercloud and the overcloud, but right now this is not possible
and a simple way to move forward was to implement these tasks, which
work fine for the undercloud containerization case.
The workflow will be (a rough sketch of the tasks follows the list):
- Services will be stopped and disabled (except mariadb)
- The Neutron DB will be renamed, then mariadb stopped & disabled
- Cron jobs will be removed
- All packages will be upgraded with yum update
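A minimal sketch of how these host_prep_tasks could look (service and
path names are illustrative, and the Neutron DB rename is omitted):

  host_prep_tasks:
    - name: Stop and disable openstack services (mariadb handled separately)
      service: name={{ item }} state=stopped enabled=no
      with_items:
        - openstack-nova-api
        - openstack-glance-api
    - name: Remove openstack cron jobs
      file: path=/var/spool/cron/{{ item }} state=absent
      with_items:
        - keystone
        - cinder
    - name: Update all packages
      shell: yum -y update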
Change-Id: I36be7f398dcd91e332687c6222b3ccbb9cd74ad2
We should try to disable pacemaker resources and shut down services
in step 1, and make sure we check for running services only once.
Change-Id: I5676132be477695838c59a0d59c62e09e335a8f0
This patch enables health check execution for the cinder-api docker
container.
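A rough sketch of the docker_config addition (the image parameter
name is illustrative; the healthcheck path follows the common TripleO
convention):

  cinder_api:
    image: {get_param: DockerCinderApiImage}
    healthcheck:
      test: /openstack/healthcheck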
Change-Id: I4c51f8b7260eb04cd8a8c502ed1b80df4f025880
Depends-On: Ib82cb849540694106a869ec81694f1159967ee79
Adds fast_forward_upgrade_tasks for Cinder covering Ocata and Pike
(a rough sketch of the tasks follows the list):
- Service status check
- Stop services when updating from Ocata to Pike
- Update cinder packages
- Db sync
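A rough sketch of the task structure (step numbers, variable names
and conditions are illustrative):

  fast_forward_upgrade_tasks:
    - name: Check if cinder_api is deployed
      command: systemctl is-enabled --quiet openstack-cinder-api
      ignore_errors: True
      register: cinder_api_enabled
      when:
        - step|int == 0
        - release == 'ocata'
    - name: Stop openstack-cinder-api
      service: name=openstack-cinder-api state=stopped
      when:
        - step|int == 1
        - release == 'ocata'
        - cinder_api_enabled.rc == 0
    - name: Update cinder packages
      yum: name=openstack-cinder state=latest
      when:
        - step|int == 6
        - is_bootstrap_node|bool
    - name: Run cinder db sync
      command: cinder-manage db sync
      when:
        - step|int == 8
        - is_bootstrap_node|bool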
Resolves: rhbz#1536010
Closes-Bug: #1744056
bp fast-forward-upgrades
Change-Id: I172c3a1868a8b7a94b282cbe5c2f6b323f7ca101
If we use variables defined in a later step in a conditional before
checking which step we are on, we will fail.
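A rough sketch of the resulting pattern (names are illustrative); the
step check comes first so the registered variable is only evaluated
on the step where it is defined:

  - name: Stop openstack-cinder-api
    service: name=openstack-cinder-api state=stopped
    when:
      - step|int == 1
      - cinder_api_enabled.rc == 0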
Resolves: rhbz#1535457
Closes-Bug: #1743764
Change-Id: Ic21f6eb5c4101f230fa894cd0829a11e2f0ef39b
cinder_api_db_sync does not use the database settings which are
generated by the puppet_config step. Consequently, it loses some
important client settings for accessing the DB, e.g. it breaks
when TLS everywhere is enabled.
Bind mount the tripleo.cnf file to expose the proper DB settings.
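A rough sketch of the added bind mount (paths follow the usual
config-data layout and are illustrative here):

  cinder_api_db_sync:
    volumes:
      - /var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro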
Change-Id: I17f3304d546eeb78803b4a3cc859255bfb3f71eb
Closes-Bug: #1746491
This converts "tags: stepN" to "when: step|int == N" for the direct
execution as an ansible playbook, with a loop variable 'step'.
The tasks all include the explicit cast |int.
This also adds a set_fact task for handling of the package removal
with the UpgradeRemovePackages parameter (no change to the interface)
The yaml-validate also now checks for duplicate 'when:' statements
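A before/after sketch of the conversion (the task itself is
illustrative):

  # before
  - name: Stop cinder_api service
    service: name=httpd state=stopped
    tags: step1

  # after
  - name: Stop cinder_api service
    service: name=httpd state=stopped
    when: step|int == 1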
Q upgrade spec @ Ibde21e6efae3a7d311bee526d63c5692c4e27b28
Related Blueprint: major-upgrade-workflow
[0]: 394a92f761/tripleo_common/utils/config.py (L141)
Change-Id: I6adc5619a28099f4e241351b63377f1e96933810
Step config is only required within the puppet_config section
of docker/services/*. This patch drops the top-level 'step_config'
and updates the unit tests accordingly.
Change-Id: I7dc7cfae3ef1965ec95b1d9ef23e7f162418c034
This should help operators find the new log files. We do have them
documented, but not everybody reads every word in the docs :)
The readme creation has ignore_errors: true so that if the directory
isn't present at all (e.g. on deployed server environments, which
don't have openstack packages installed), we don't fail the deployment
when we're not able to create the readme.
Change-Id: I6b36db7b7ce8b3e4da566eb7828d0c3b8646a14f
Partial-Bug: #1730957
Docker services are missing the pre-upgrade validation task
in the upgrade_tasks section, which verifies that the service
is running before stopping it.
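A rough sketch of the kind of validation task being added (the
service name and step are illustrative):

  - name: Check that the openstack-cinder-api service is running
    shell: /usr/bin/systemctl show openstack-cinder-api --property ActiveState | grep '\bactive\b'
    tags: validation
    when: step|int == 0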
Change-Id: I3d0b68eaf11b78f3422e026710d062f7e9455508
Partial-Bug: #1704389
Adds an UpgradeRemoveUnusedPackages param to use
in the ansible 'when' conditional for the removal.
Adds package removal to step2, right after a service
is stopped and disabled in step2. Package updates
happen in step3, so ideally we remove before that.
The package removal task has ignore_errors set to true,
so dependency or other issues removing packages will
not fail the upgrade workflow.
Also adds this to the upgrade environment files
for visibility, defaulting to false.
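A rough sketch of the step2 removal (the package name is
illustrative):

  - name: Set fact for removal of openstack-cinder package
    set_fact:
      remove_cinder_package: {get_param: UpgradeRemoveUnusedPackages}
    when: step|int == 2
  - name: Remove openstack-cinder package if operator requests it
    yum: name=openstack-cinder state=removed
    ignore_errors: True
    when:
      - step|int == 2
      - remove_cinder_package|bool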
Change-Id: Ie4e4a2d41f7752c5a13507a7c15c6f68e203cfca
Related-Bug: 1701501
We were setting them in the Dockerfiles previously. However, this
caused the healthcheck commands to always run regardless of which
process we were running in the container. This caused 'unhealthy'
containers at times when they were never intended to be checked. This
change makes it so they are explicitly set.
Change-Id: I7bc12d236b3cc7a52d3e6aa706fd04675dad3a9a
The services that the docker templates depend on have logging_source
and logging_groups, but those are not set on the docker outputs, so
they are not used when the containerized services are deployed.
This also adds logging_source & logging_groups as optional docker
parameters in tools/yaml-validate.py.
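A rough sketch of the outputs being passed through (the base service
name is illustrative):

  outputs:
    role_data:
      value:
        service_name: {get_attr: [CinderApiBase, role_data, service_name]}
        logging_source: {get_attr: [CinderApiBase, role_data, logging_source]}
        logging_groups: {get_attr: [CinderApiBase, role_data, logging_groups]}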
Closes-Bug: #1718110
Change-Id: I8795eaf4bd06051e9b94aa50450dee0d8761e526
In 59e29b17f4a9f5f65b6f8a7b8e82ef6426d8a51 we forgot to
add tags to the Ansible tasks to remove the baremetal
cron jobs at step 2.
Change-Id: I23fb134b88336ebc4eb1a97a69a2d73d4ef0edb2
Related-bug: #1708466
The docker _cron services show up as (unhealthy) due to
them sharing the containers for the OpenStack services.
As such, we need to manually override the health checks
for these services. By setting them to /bin/true
the services should show up as healthy.
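A rough sketch of the override (the container name is illustrative):

  cinder_api_cron:
    healthcheck:
      test: /bin/true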
Change-Id: I46e12bcec226fbe2768c7fe8f0e7719df46401a9
Closes-bug: #1713183
The cron containers need to run as root in order to create PID files
correctly.
Additionally, the keystone_cron container was misconfigured to
use /usr/bin/cron instead of the correct /usr/bin/crond.
Additionally, we have an issue where the Kolla keystone container has
hard-coded ARGS for the docker container, which causes -DFOREGROUND
(an Apache-specific argument) to get appended onto the kolla_start
command, thus causing crond to fail to start up correctly. This
works around the issue by overriding the command and calling
kolla_set_configs manually. Once we fix this in Kolla we can
revisit this.
Change-Id: Ib8fb2bef9a3bb89131265051e9ea304525b58374
Related-bug: 1707785
Services that access the database have to read an extra MySQL
configuration file, /etc/my.cnf.d/tripleo.cnf, which holds client-only
settings, like the client bind address and SSL configuration. The
configuration file is thus used by containerized services, but also by
non-containerized services that still run on the host.
In order to generate that client configuration file appropriately both on the
host and for containers, 1) the MySQLClient service must be included by the
role; 2) every containerized service which uses the database must include the
mysql::client profile in the docker-puppet config generation step.
By including the mysql::client profile in each containerized service, we ensure
that any change in configuration file will be reflected in the service's
/var/lib/config-data/{service}, and that paunch will restart the service's
container automatically.
We now only rely on MySQLClient from puppet/services, to make it possible to
generate /etc/my.cnf.d/tripleo.cnf on the host, and to set the hiera keys that
drive the generation of that config file in containers via docker-puppet.
We include a new YAML validation step to ensure that any service which depends
on MySQL will initialize the mysql::client profile during the docker-puppet
step.
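A rough sketch of how a containerized service pulls the profile into
its docker-puppet config generation (the service names are
illustrative):

  puppet_config:
    config_volume: cinder
    puppet_tags: cinder_config
    step_config:
      list_join:
        - "\n"
        - - {get_attr: [CinderApiBase, role_data, step_config]}
          - "include ::tripleo::profile::base::database::mysql::client"
    config_image: {get_param: DockerCinderConfigImage}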
Change-Id: I0dab1dc9caef1e749f1c42cfefeba179caebc8d7
The cinder db purge cron job is created by puppet in
/var/spool/cron/cinder. This creates a cron container to run
that in an environment where it has access to cinder.conf
and the cinder-manage binaries.
Change-Id: I02ae32a6dcd8569e2e2390063d4d935d05545a78
Partial-bug: #1701254
This removes the default container names from all the templates
and uses a single environment file to specify the full container
name and registry from which to pull. Also does away with most
of DockerNamespace.
Change-Id: Ieaedac33f0a25a352ab432cdb00b5c888be4ba27
Depends-On: Ibc108871ebc2beb1baae437105b2da1d0123ba60
Co-Authored-By: Dan Prince <dprince@redhat.com>
Co-Authored-By: Steve Baker <sbaker@redhat.com>
Makes it possible to resolve network subnets within a service
template; the data is transported into a new ServiceData property
wired into every service, which hopefully is generic enough to
be extended in the future to transport more data.
The data can be consumed in service templates to set config values
which need to know the subnet where a daemon operates (for
example the Ceph Public vs Cluster network).
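A rough sketch of how a service template could consume it (the
parameter shape and the keys used in the lookup are illustrative):

  parameters:
    ServiceData:
      default: {}
      description: Dictionary packing service data
      type: json

  outputs:
    role_data:
      value:
        config_settings:
          ceph::cluster_network: {get_param: [ServiceData, net_cidr_map, storage_mgmt]}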
Change-Id: I28e21c46f1ef609517175f7e7ee19e28d1c0cba2
This solves a problem with bind-mounts when the containers are holding
file descriptors open.
At the same time this makes the template more robust to puppet changes
since new config files will be available in the containers without
needing to update the templates.
Partial-Bug: #1698323
Change-Id: Ia4ad6d77387e3dc354cd131c2f9756939fb8f736
This commit consistently defines a heat template parameter in the form
of DockerXXXConfigImage where XXX represents the name of the
config_volume that is used by docker-puppet.
The goal is to mitigate hard-to-debug errors where the templates would
set different defaults for the image docker-puppet.py uses to run, for
the same config_volume name.
This fixes a couple of inconsistencies along the way.
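A rough sketch of the resulting pattern (cinder is used as an
illustrative config_volume name):

  parameters:
    DockerCinderConfigImage:
      description: The container image to use for the cinder config_volume
      type: string

  outputs:
    role_data:
      value:
        puppet_config:
          config_volume: cinder
          config_image: {get_param: DockerCinderConfigImage}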
Change-Id: I212020a76622a03521385a6cae4ce73e51ce5b6b
Closes-Bug: #1699791
On many occasions we had log directory initialization containers
without `detach: false`, which didn't guarantee that they would finish
before the containers depending on them started using the log
directory.
This is now fixed by moving the initialization container one global
step earlier, so that we can keep the concurrency when creating the
log dirs. (Using `detach: false` makes paunch handle just one
container at a time, and as such it can have a negative performance
impact.)
For services which have their container(s) starting in step_1,
initialization cannot be moved to an earlier step, so the solution
here was to just add `detach: false`.
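A rough sketch of a step_1 initialization container with the added
flag (names are illustrative):

  docker_config:
    step_1:
      cinder_api_init_logs:
        image: {get_param: DockerCinderApiImage}
        user: root
        detach: false
        volumes:
          - /var/log/containers/cinder:/var/log/cinder
        command: ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder']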
As a minor related change, cinder DB sync container now mounts the log
directory from host to put cinder-manage.log into the expected
location.
Change-Id: I1340de4f68dd32c2412d9385cf3a8ca202b48556
Adds docker services for Cinder API and Scheduler.
Co-Authored-By: Gorka Eguileor <geguileo@redhat.com>
Co-Authored-By: Jon Bernard <jobernar@redhat.com>
Co-Authored-By: Martin André <m.andre@redhat.com>
Co-Authored-By: Alan Bishop <abishop@redhat.com>
Depends-On: Ic1585bae27c318bd6bafc287e905f2ed250cce0f
Change-Id: I5cff9587626a3b2a147e03146d5268242d1c9658
Partial-bug: #1668920