During FFWD Upgrade or Minor Update we should still make sure that we
clear the Upgrade params too.
Resolves: rhbz#1558787
Closes-Bug: 1770191
Change-Id: Id1eb4c3d93ae8f36adb8ab4fa2df570a6a76951f
Instead of using host_prep_tasks (which are part of the deployment
tasks), we'll use the upgrade_tasks that are now well known and tested
in previous releases, when we containerized the overcloud.
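For context, upgrade_tasks are per-service lists of Ansible tasks that
are gated on an upgrade step. A rough sketch of their shape (the
service name and the step used here are illustrative placeholders, not
taken from this change):

  # Sketch only: "example" and the step number are placeholders.
  upgrade_tasks:
    - name: Stop and disable the example service before upgrading
      when: step|int == 1
      service:
        name: example
        state: stopped
        enabled: no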
Depends-On: Id25e6280b4b4f060d5e3f78a50ff83aaca9e6b1a
Change-Id: Ic199c7d431e155e2d37996acd0d7b924d14af2b7
As we discovered with bug #1768586, we'll need to make sure that every
parameter tweak in the plan is followed by a stack update.
So far the Ceph upgrade command did: set param -> stack update -> unset
param (only in the plan). However, this means the last
CephAnsiblePlaybook setting (back to the normal deploy playbook) was
discarded.
Let's reuse normal converge commands to converge CephAnsiblePlaybook
too, and we can remove the (now unused) ceph-upgrade-converge.yaml. We
won't do more stack updates than necessary, and at the same time the
user workflow stays somewhat consistent between envs that do and don't
have Ceph.
An alternative would be to run the converge as part of the Ceph
command, but that would either mean running one more stack update than
necessary, or skipping the last converge in envs with Ceph, and
essentially diverging further from the non-Ceph workflow.
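In environment terms, converging the playbook boils down to a
parameter_defaults entry in the converge environment; a sketch, where
the value shown is only a placeholder for the default defined by the
ceph-ansible templates:

  # Sketch only: reset CephAnsiblePlaybook so that later stack updates
  # run the regular deploy playbook instead of the rolling-update one.
  parameter_defaults:
    CephAnsiblePlaybook: default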
Change-Id: If596531cbb1e750ed67e66391743f4c1833e4337
Depends-On: I025eac40f8bda5f23c789e7fef1a9e9b49947f66
Partial-Bug: #1768586
The purpose is to ensure that any mapping previously used to enable
config-download is reset, so that a regular Heat stack update is
performed on ceph-upgrade. We may need to do "update/upgrade/ffwd ->
ceph -> converge" instead of the previously assumed
"update/upgrade/ffwd -> converge -> ceph".
This also removes the no-op of DeploymentSteps -- we need them enabled
during the Ceph upgrade because we need the firewall rules applied.
Co-Authored-By: Jiri Stransky <jistr@redhat.com>
Closes-Bug: #1767318
Related-Bug: #1767317
Change-Id: I52312ffcd438c354872ab3c74138b47ae71aab4b
These will disable OS::TripleO::DeploymentSteps on prepare, to allow
for a Heat stack update without triggering a puppet apply, and then
restore the Heat resource and the Ceph ansible playbook on converge.
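The mechanism is a resource_registry override; the prepare side looks
roughly like this (the converge side maps the resource back to its
regular implementation):

  # Prepare: no-op the deployment steps so that the stack update does
  # not trigger a puppet apply on the nodes.
  resource_registry:
    OS::TripleO::DeploymentSteps: OS::Heat::None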
Change-Id: Ie765b429c4cb36d9dd616584cc1d4f45184fa1b8
So far we haven't been disabling workflows for update/upgrade. We
should disable them by default as they have the potential to disrupt
the update/upgrade/ffwd procedure.
The main example of a thing we deploy via the workflow resources is
Ceph. We decided that no-opping ceph-ansible for the main
update/upgrade/ffwd-upgrade steps is the safest path forward, and we'll
update/upgrade Ceph after the main procedure is finished.
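Assuming the workflow resources referred to here are the
OS::TripleO::WorkflowSteps mapping (the message does not name them
explicitly), no-opping them would look roughly like:

  # Assumption for illustration: disable the workflow steps (which
  # drive ceph-ansible) during the main upgrade; they get restored for
  # the separate Ceph upgrade afterwards.
  resource_registry:
    OS::TripleO::WorkflowSteps: OS::Heat::None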
Change-Id: I34c7213ab7b70963ad2e50f7633b665fad70bde5
The FluentdClient service has been renamed to Fluentd [0] for Queens.
This patch handles the disabling of the old FluentdClient service.
[0] Idb9886f04d56ffc75a78c4059ff319b58b4acf9f
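Disabling a dropped service in TripleO is normally done by mapping it
to OS::Heat::None; presumably something along these lines here (the
exact environment file is not named in this message):

  # Usual pattern for disabling a removed service.
  resource_registry:
    OS::TripleO::Services::FluentdClient: OS::Heat::None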
Change-Id: I085973f3d23fd78c16cba94a91692421956b301b
Closes-Bug: #1746493
This consolidates the upgrade and ffwd-upgrade related env files,
removing files that are no longer relevant (like converge vs
converge-docker).
In line with recent/ongoing work in tripleoclient [1][2] we now have
the cli: overcloud [upgrade|update|ffwd-upgrade] [prepare|run|converge]
With this patch we can also change the set/unset of the resource 'noop'
and move it from tripleo-common to python-tripleoclient, as pointed at
in the related client review below. If others agree then I will do the
same with the upgrade-prepare and also the ffwd cli in [3], i.e. add
explicit inclusion of the upgrade-prepare.yaml and then similarly
include the upgrade-converge.yaml for the upgrade/ffwd-upgrade converge
cli.
Related:
I1288fe68ae8af02a5d77390d237ec467d88e43d2 python-tripleoclient
[1] 96ffa3a325
[2] https://review.openstack.org/#/c/558536/5/tripleoclient/v1/overcloud_update.py
[3] https://review.openstack.org/#/c/557937/4/tripleoclient/v1/overcloud_ffwd_upgrade.py@72
Change-Id: Icfe494e3219d6d6cd3251f75bb4329fc4d793c3c
Using the host_prep_tasks interface to handle undercloud teardown
before we run the undercloud install.
The reason for not using upgrade_tasks is that the existing tasks were
created for the overcloud upgrade first, and there is too much logic in
them right now for us to easily re-use the bits for the undercloud. In
the future we'll probably use upgrade_tasks for both the undercloud and
the overcloud, but right now this is not possible, and a simple way to
move forward was to implement these tasks, which work fine for the
undercloud containerization case.
Workflow will be:
- Services will be stopped and disabled (except mariadb).
- The Neutron DB will be renamed, then mariadb will be stopped &
  disabled.
- Cron jobs will be removed.
- All packages will be upgraded with yum update.
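A rough sketch of what such host_prep_tasks can look like (the task
bodies below are illustrative, not copied from this change):

  # Sketch only: illustrative teardown tasks in host_prep_tasks form.
  host_prep_tasks:
    - name: Stop and disable the example service before the install
      service:
        name: example
        state: stopped
        enabled: no
    - name: Remove old cron jobs for the example user
      file:
        path: /var/spool/cron/example
        state: absent
    - name: Update all packages
      yum:
        name: '*'
        state: latest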
Change-Id: I36be7f398dcd91e332687c6222b3ccbb9cd74ad2