These tasks would run before any individual server deployments. A
specific use case is rebooting DPDK/NFV nodes before applying
NetworkDeployment, etc.
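A minimal sketch of what such a pre-deployment reboot task could look
like (the condition variable and timings here are hypothetical, not the
actual template contents):

  - name: Reboot DPDK/NFV nodes before network deployment
    become: true
    shell: sleep 2 && /sbin/shutdown -r now
    async: 1
    poll: 0
    when: reboot_required | default(false) | bool  # hypothetical flag

  - name: Wait for the node to come back
    wait_for_connection:
      delay: 30
      timeout: 600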
Change-Id: I9e410def25184e635568db67149264ac89c999ed
set_fact logs the fact value. In the case of reading the role_data_*
files, this is very verbose as the files can be large. Use no_log: True
to make these tasks less verbose. The content is saved in the
config-download output already, so no useful info is lost.
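For example, a sketch of the pattern (the file path and variable name
are illustrative):

  - name: Load the role data
    set_fact:
      role_data: "{{ lookup('file', tripleo_role_name + '/role_data.yaml') | from_yaml }}"
    no_log: True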
Change-Id: Ie6f75113194961628a0c9bdfbfbf5d88a18059eb
Closes-Bug: #1760996
Add blank lines between the Ansible tasks and plays in the stack
outputs. This is an improvement in readability for the user.
Change-Id: I52ebd9081cacf213ac29f1d24e73db6ea5cfe33f
In I75f087dc456c50327c3b4ad98a1f89a7e012dc68 we removed much of
the legacy upgrade workflow. This now also removes the
disable_upgrade_deployment flag and the tripleo_upgrade_node.sh
script, both of which are no longer used and have no effect on
the upgrade.
Related reviews:
I7b19c5299d6d60a96a73cafaf0d7103c3bd7939d tripleo-common
I4227f82168271089ae32cbb1f318d4a84e278cc7 python-tripleoclient
Change-Id: Ib340376ee80ea42a732a51d0c195b048ca0440ac
Add support for the SshKnownHostsDeployment resources to
config-download. Since the deployment resources relied on Heat outputs,
they were not supported with the default handling from tripleo-common
that relies on the group_vars mechanism.
Instead, this patch refactors the templates to add the known hosts
entries as global_vars to deploy_steps_playbook.yaml, and then includes
the new tripleo-ssh-known-hosts role from tripleo-common to apply the
same configuration that the Heat deployment did.
Since these deployments no longer need to be triggered when including
config-download-environment.yaml, a mapping is added that can be
overridden to OS::Heat::None to disable the deployment resources when
using config-download.
The default behavior when not using config-download remains unchanged.
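A sketch of the new mapping in the config-download environment (the
resource names are given from memory, treat them as illustrative):

  resource_registry:
    OS::TripleO::Ssh::HostPubKey: OS::Heat::None
    OS::TripleO::Ssh::KnownHostsConfig: OS::Heat::None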
Closes-Bug: #1746336
Change-Id: Ia334fe6adc9a8ab228f75cb1d0c441c1344e2bd9
The resulting pre_upgrade_rolling_steps_playbook will be executed in a
node-by-node rolling fashion at the beginning of the major upgrade
workflow (before upgrade_steps_playbook).
The current intended use case is special handling of L3 agent upgrade
when moving Neutron services into containers. Special care needs to be
taken in this case to preserve L3 connectivity of instances (with
regard to dnsmasq and keepalived sub-processes of L3 agent).
The playbook can be run before the main upgrade like this:
openstack overcloud upgrade run --roles overcloud --playbook pre_upgrade_rolling_steps_playbook.yaml
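The node-by-node behaviour comes from running the play serially; a
rough sketch of the generated play (illustrative, the included task
file name is hypothetical):

  - hosts: overcloud
    name: Pre upgrade rolling steps
    serial: 1
    gather_facts: no
    tasks:
      - include: pre_upgrade_rolling_tasks.yaml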
Partial-Bug: #1738768
Change-Id: Icb830f8500bb80fd15036e88fcd314bf2c54445d
Implements: blueprint major-upgrade-workflow
In the last step of FFU we need to switch repos before running the
upgrade. We do so by introducing post-FFU steps and running the switch
in them. We also update the heat agents and os-collect-config on the
nodes.
Change-Id: I649afc6fa384ae21edc5bc917f8bb586350e5d47
This wires in a heat parameter that can be used to disable the
baremetal (Puppet) deployment tasks. Useful for testing
some lightweight/containers only deployments.
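Assuming the parameter is the EnablePuppet boolean (name given from
memory, treat as illustrative), it can be set from an environment file:

  parameter_defaults:
    EnablePuppet: false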
Change-Id: I376418c618616b7755fafefa80fea8150cf16b99
Without the extra bool filter this when condition gets evaluated as a
string. Given that the string is always non-empty, this means
enable_debug has been enabled regardless of the end user setting.
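Sketched, the difference is:

  - name: Write debug configuration   # illustrative task
    debug:
      msg: "debug is enabled"
    # Without the cast the condition is a non-empty string and is always true:
    #   when: enable_debug
    # With the cast the end user setting is honoured:
    when: enable_debug | bool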
Change-Id: I9f53f3bca4a6862966e558ea20fe001eabda7bcf
Closes-bug: #1754481
Updating OpenStack (within a release) means updating ODL from v1 to v1.1.
This is done by "openstack overcloud update", which collects
update_tasks. ODL needs 2 different kinds of steps to achieve this
minor update. These are called Level 1 and Level 2. L1 is
simple: stop ODL, update, start. This is taken care of by paunch
and no separate implementation is needed. L2 has extra steps
which are implemented in update_tasks and post_update_tasks.
Updating ODL within the same major release (1->1.1) consists of either
L1 or L2 steps. These steps are decided from ODLUpdateLevel parameter
specified in environments/services-docker/update-odl.yaml.
Upgrading ODL to the next major release (1.1->2) requires
only the L2 steps. These are implemented as upgrade_tasks and
post_upgrade_tasks in https://review.openstack.org/489201.
The steps involved in a Level 2 update are:
1. Block OVS instances to connect to ODL
2. Set ODL upgrade flag to True
3. Start ODL
4. Start Neutron re-sync and wait for it to finish
5. Delete OVS groups and ports
6. Stop OVS
7. Unblock OVS ports
8. Start OVS
9. Unset ODL upgrade flag
These steps are exactly the same as the upgrade_tasks.
The logic implemented is: follow the upgrade_tasks when update_level == 2.
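The level is selected through the environment file mentioned above, e.g.:

  # environments/services-docker/update-odl.yaml
  parameter_defaults:
    ODLUpdateLevel: 2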
Change-Id: Ie532800663dd24313a7350b5583a5080ddb796e7
In https://review.openstack.org/#/c/525260/, we moved the creation of
various RoleData driven config files to deploy-steps-tasks.yaml, and to
consume the values from various role_data_* variables that were written in
the inventory (see https://review.openstack.org/#/c/528354/).
However, we were already downloading and saving the RoleData to separate
files via config download. We should consume from those files instead of
the inventory. That has the advantage that one can quickly modify and
iterate on the local files, and have those changes applied. That is
harder to do when these values are in the inventory, and not possible to
do when using dynamic inventory.
Since the tasks will fail trying to read from the files when not using
config-download, conditional local_action tasks that use the stat module
first verify the existence of the files before attempting to read their
contents. If they don't exist, the values fall back to whatever has been
defined by the ansible variable.
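A sketch of that pattern (paths and variable names are illustrative):

  - name: Check for the downloaded role data file
    local_action:
      module: stat
      path: "{{ tripleo_role_name }}/role_data.yaml"
    register: role_data_stat

  - name: Read role data from the config-download file when it exists
    set_fact:
      role_data: "{{ lookup('file', tripleo_role_name + '/role_data.yaml') | from_yaml }}"
    no_log: True
    when: role_data_stat.stat.exists
    # otherwise role_data keeps the value defined as an ansible variable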
Change-Id: Idfdce6f0a778b0a7f2fed17ff56d1a3e451868ab
Closes-Bug: #1749784
We make sure is_bootstrap_node is always set and we reset hiera
hierarchy on first run.
Resolves: rhbz#1535406
Closes-Bug: #1743740
Change-Id: Ib5cc32b798118c85bf09beab097be2f6eaeb405f
This makes it clearer that the previous task failed, which isn't
immediately evident from the ansible task output due to the failed_when
on those tasks.
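A sketch of the pattern being improved (task names and commands are
illustrative): the first task never fails directly because of its
failed_when, so a follow-up task surfaces the failure explicitly.

  - name: Run host configuration for step {{ step }}
    shell: /usr/bin/true     # placeholder for the real command
    register: outputs
    failed_when: false

  - name: Report failure of the previous task
    fail:
      msg: "Host configuration for step {{ step }} failed"
    when: outputs.rc is defined and outputs.rc != 0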
Change-Id: I765208d5865f6e5a292e5b52c572e2e79540c663
Closes-Bug: #1748443
This removes most of the Heat-driven upgrade workflow, including
the script delivery and stepwise upgrade tasks with invocation
of ansible via heat.
For Q upgrades the operator should use:
openstack overcloud upgrade --init-upgrade --container-registry-file file
openstack overcloud upgrade --nodes Controller
etc.
Depends-On: I54f8fc57b758e34c620d607be15d2291d545ff6f
Change-Id: I75f087dc456c50327c3b4ad98a1f89a7e012dc68
As outlined in the spec, fast-forward upgrades aim to take an
environment from an initial release N to a release N+X (where X >= 2),
beyond the traditionally supported N+1 upgrade path provided today by
many OpenStack projects.
For TripleO the first phase of this upgrade will be to move the
environment to the release prior to the target release. This will be
achieved by disabling all OpenStack control plane services and then
performing the minimum number of steps required to upgrade each service
through each release until finally reaching the target release.
This change introduces the framework for this phase of the fast-forward
upgrades by adding playbooks and task files as outputs to RoleConfig.
- fast_forward_upgrade_playbook.yaml
This is the top level play and acts as the outer loop of the process,
iterating through the required releases as set by the
FastForwardUpgradeReleases parameter for the fast-forward section of the
upgrade. This currently defaults to Ocata and Pike for Queens.
Note that this play is run against the overcloud host group and it is
currently assumed that the inventory used to run this play is provided
by the tripleo-ansible-inventory command.
- fast_forward_upgrade_release_tasks.yaml
This output simply imports the top level prep and bootstrap task files.
- fast_forward_upgrade_prep_tasks.yaml
- fast_forward_upgrade_bootstrap_tasks.yaml
These outputs act as the inner loop for the fast-forward upgrade phase,
iterating over step values while importing their associated role tasks.
As prep tasks are carried out first for each release, we loop over step
values starting at 0 and ending at the defined
fast_forward_upgrade_prep_steps_max, currently 3.
Following this we then complete the bootstrap tasks for each release,
looping over step values starting at
fast_forward_upgrade_prep_steps_max + 1 (currently 4) and ending at
fast_forward_upgrade_steps_max (currently 9). A condensed sketch of
this looping structure is shown after the diagram below.
- fast_forward_upgrade_prep_role_tasks.yaml
- fast_forward_upgrade_bootstrap_role_tasks.yaml
These outputs then finally import the fast_forward_upgrade_tasks files
generated by the FastForwardUpgradeTasks YAQL query for each role. For
prep tasks these are always included when on an Ansible host of a given
role. This differs from bootstrap tasks that are only included for the
first host associated with a given role.
This will result in the following order of task imports with their
associated value of release and step:
fast_forward_upgrade_playbook
\_fast_forward_upgrade_release_tasks
\_fast_forward_upgrade_prep_tasks - release=ocata
\_fast_forward_upgrade_prep_role_tasks - release=ocata
\_$roleA/fast_forward_upgrade_tasks - release=ocata, step=0
\_$roleB/fast_forward_upgrade_tasks - release=ocata, step=0
\_$roleA/fast_forward_upgrade_tasks - release=ocata, step=1
\_$roleB/fast_forward_upgrade_tasks - release=ocata, step=1
\_$roleA/fast_forward_upgrade_tasks - release=ocata, step=2
\_$roleB/fast_forward_upgrade_tasks - release=ocata, step=2
\_$roleA/fast_forward_upgrade_tasks - release=ocata, step=3
\_$roleB/fast_forward_upgrade_tasks - release=ocata, step=3
\_fast_forward_upgrade_bootstrap_tasks - release=ocata
\_fast_forward_upgrade_bootstrap_role_tasks - release=ocata
\_$roleA/fast_forward_upgrade_tasks - release=ocata, step=4
\_$roleB/fast_forward_upgrade_tasks - release=ocata, step=4
\_$roleA/fast_forward_upgrade_tasks - release=ocata, step=5
\_$roleB/fast_forward_upgrade_tasks - release=ocata, step=5
\_$roleA/fast_forward_upgrade_tasks - release=ocata, step=N
\_$roleB/fast_forward_upgrade_tasks - release=ocata, step=N
\_fast_forward_upgrade_release_tasks
\_fast_forward_upgrade_prep_tasks - release=pike
\_fast_forward_upgrade_prep_role_tasks - release=pike
\_$roleA/fast_forward_upgrade_tasks - release=pike, step=0
\_$roleB/fast_forward_upgrade_tasks - release=pike, step=0
\_$roleA/fast_forward_upgrade_tasks - release=pike, step=1
\_$roleB/fast_forward_upgrade_tasks - release=pike, step=1
\_$roleA/fast_forward_upgrade_tasks - release=pike, step=2
\_$roleB/fast_forward_upgrade_tasks - release=pike, step=2
\_$roleA/fast_forward_upgrade_tasks - release=pike, step=3
\_$roleB/fast_forward_upgrade_tasks - release=pike, step=3
\_fast_forward_upgrade_bootstrap_tasks - release=pike
\_fast_forward_upgrade_bootstrap_role_tasks - release=pike
\_$roleA/fast_forward_upgrade_tasks - release=pike, step=4
\_$roleB/fast_forward_upgrade_tasks - release=pike, step=4
\_$roleA/fast_forward_upgrade_tasks - release=pike, step=5
\_$roleB/fast_forward_upgrade_tasks - release=pike, step=5
\_$roleA/fast_forward_upgrade_tasks - release=pike, step=N
\_$roleB/fast_forward_upgrade_tasks - release=pike, step=N
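A condensed sketch of the looping structure above (illustrative; the
real playbooks are rendered from the j2 templates and the RoleConfig
outputs, and the variable names here are assumptions):

  # fast_forward_upgrade_playbook.yaml - outer loop over releases
  - hosts: overcloud
    tasks:
      - include: fast_forward_upgrade_release_tasks.yaml
        with_items: "{{ fast_forward_upgrade_releases }}"   # e.g. [ocata, pike]
        loop_control:
          loop_var: release

  # fast_forward_upgrade_prep_tasks.yaml - inner loop over prep steps 0..3
  - include: fast_forward_upgrade_prep_role_tasks.yaml
    with_sequence: start=0 end={{ fast_forward_upgrade_prep_steps_max }}
    loop_control:
      loop_var: step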
bp fast-forward-upgrades
Change-Id: Ie2683fd7b81167abe724a7b9245bf85a0a87ad1d
Replace the hardcoded value with the right variable. Note you need the
-1 because the loop is done with the jinja 'range' function, which
iterates from start to finish-1 [1].
[1] http://jinja.pocoo.org/docs/2.10/templates/#range
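As a reminder of the range semantics (a generic example, not the exact
template line):

  {% for step in range(0, 3) %}   {# yields 0, 1, 2 #}
  {% endfor %}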
Change-Id: I01c8d2521c0ee249b94e9cdc1b895da1523c50a3
Not having deploy_steps_max defined failed the update. This also adds
facts gathering that is not repeated on every play.
Change-Id: I1848dc47266a35a0ba383e55787c4aea986bd7a9
Closes-Bug: #1746306
Major upgrade (Q -> R) is complex in ODL. There are multiple components
involved.
This patch enables major upgrade of ODL. Steps involved are:
1. Block OVS instances to connect to ODL
2. Set ODL upgrade flag to True
3. Start ODL
4. Start Neutron re-sync and wait for it to finish
5. Delete OVS groups and ports
6. Stop OVS
7. Unblock OVS ports
8. Start OVS
9. Unset ODL upgrade flag
Change-Id: Icf98a1215900762a0677aabee1cccbf1d130e5bd
Add the {{step}} var to a couple of task names from
deploy-steps-tasks.yaml where it was missing. Makes the output a bit
more consistent and user friendly.
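For example (an illustrative task, not the exact one from
deploy-steps-tasks.yaml):

  - name: "Start containers for step {{ step }}"   # was previously just "Start containers"
    debug:
      msg: "running step {{ step }}"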
Change-Id: I0a1b3f7f62543107b2f82ee57d75e65ecc7e02d4
This wires up the post_upgrade_tasks to be written as ansible
playbooks, like the upgrade/update_tasks. This will write out a
post_upgrade_steps_playbook ansible playbook.
Used in https://review.openstack.org/#/c/489201/ by ODL
and https://review.openstack.org/#/c/505603/ cinder-volume
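A sketch of what a service template can now carry (the service and task
content here are illustrative):

  outputs:
    role_data:
      value:
        service_name: cinder_volume
        post_upgrade_tasks:
          - name: Restart cinder volume after the upgrade
            when: step|int == 1
            service:
              name: openstack-cinder-volume
              state: restarted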
Q upgrade spec @ Ibde21e6efae3a7d311bee526d63c5692c4e27b28
Related Blueprint: major-upgrade-workflow
Change-Id: Ib6188c91076eabf20d6e358ca247fed802369a7f
During upgrade, PostDeploySteps is mapped to major_upgrade_steps, which
was missing the blacklisted_{ip_addresses,hostnames} that were
previously added only to deploy-steps.
Change-Id: Ifdcdad63e430972f7254f0c40e021b00333fdf56
Closes-Bug: #1745379
Due to a misplaced endfor in the j2 template, the deployments.yaml
playbook was only included for the first role. The bogus task playbook
was rendered as:
  - hosts: overcloud
    name: Server Post Deployments
    gather_facts: no
    any_errors_fatal: yes
    tasks:
      - include: Controller/deployments.yaml
        vars:
          force: false
        when: role_name == 'Controller'
        with_items: "{{ Controller_post_deployments|default([]) }}"
        tags:
          - overcloud
          - post_deploy_steps
      - include: Compute/deployments.yaml
        vars:
          force: false
        when: role_name == 'Compute'
        with_items: "{{ Compute_post_deployments|default([]) }}"
        tags:
          - overcloud
          - post_deploy_steps
Change-Id: I625fcaa7c4dcb4f99f30b9d6def293154f4eb7ec
This moves the writing of various files that are consumed by the
tasks in deploy-steps-tasks.yaml. Hopefully this is clearer, and
it also means we can drive the creation of these files via ansible
directly, using https://review.openstack.org/528354
Change-Id: I173d22ebcbc986cefdef47f81298abe10ce8591b
It seems to have been changed in 0524c8635357d5617cc00d945d796d8f7d05c853,
but the update playbook still includes the old one.
Change-Id: Ie75c485f6739b9520d1a64ae28a6dd260c4d601c
Closes-Bug: #1743760
In the event a step has no services defined, we must still write the
config, as this is needed if services are disabled on update such that
a step becomes empty: we must run paunch on every step or the cleanup
of the "old" services does not happen.
Closes-Bug: #1742915
Change-Id: Iee01002f56b5311560557f2bf6f053601b9d43d7
I561b5ef6dee0ee7cac67ba798eda284fb7f7a8d0 added this for the main
deploy steps, but there are some host_prep_tasks which require it as
well; specifically, the nova-libvirt tasks fail for me locally without
-b (become).
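i.e. the generated host prep play needs privilege escalation, roughly
(a sketch; the play name and include are illustrative):

  - hosts: overcloud
    name: Host prep steps
    become: true
    gather_facts: no
    tasks:
      - include: host_prep_tasks.yaml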
Change-Id: I29cb8b0962c0dfcf7950d65305a3adef1f1268c3
Workflows may need access to the list of blacklisted hostnames so they
can filter on that value. This change adds that input to the workflow
execution environment.
Change-Id: I41de32b324a406633699d17933ae05417b28c57b
Partial-Bug: #1743046
Workflows triggered from deploy-steps.j2 were not honoring the
blacklist, particularly ceph-ansible. This patch starts to address that
issue by passing in a list of blacklisted ip addresses to the workflow
execution environment that the workflow can make use of to filter
against ctlplane_service_ips.
Change-Id: Ic158171c629e82892e480f1e6903a67457f86064
Partial-Bug: #1743046