198 Commits

Author SHA1 Message Date
Lee Yarwood
acb2475e4c ffu: Add fast-forward upgrade outputs to RoleConfig
As outlined in the spec, fast-forward upgrades aim to take an
environment from an initial release N to a release N+X where X>=2,
beyond the traditionally supported N+1 upgrade path provided today by
many OpenStack projects.

For TripleO the first phase of this upgrade will be to move the
environment to the release prior to the target release. This will be
achieved by disabling all OpenStack control plane services and then
performing the minimum number of steps required to upgrade each service
through each release until finally reaching the target release.

This change introduces the framework for this phase of the fast-forward
upgrades by adding playbooks and task files as outputs to RoleConfig.

- fast_forward_upgrade_playbook.yaml

This is the top level play and acts as the outer loop of the process,
iterating through the required releases as set by the
FastForwardUpgradeReleases parameter for the fast-forward section of the
upgrade. This currently defaults to Ocata and Pike for Queens.

Note that this play is run against the overcloud host group and it is
currently assumed that the inventory used to run this play is provided
by the tripleo-ansible-inventory command.

- fast_forward_upgrade_release_tasks.yaml

This output simply imports the top level prep and bootstrap task files.

- fast_forward_upgrade_prep_tasks.yaml
- fast_forward_upgrade_bootstrap_tasks.yaml

These outputs act as the inner loop for the fast-forward upgrade phase,
iterating over step values while importing their associated role tasks.

As prep tasks are carried out first for each release, we loop over step
values starting at 0 and ending at the defined
fast_forward_upgrade_prep_steps_max, currently 3.

Following this we then complete the bootstrap tasks for each release,
looping over step values starting at fast_forward_upgrade_prep_steps_max
+ 1 (currently 4) and ending at fast_forward_upgrade_steps_max
(currently 9).

- fast_forward_upgrade_prep_role_tasks.yaml
- fast_forward_upgrade_bootstrap_role_tasks.yaml

These outputs then finally import the fast_forward_upgrade_tasks files
generated by the FastForwardUpgradeTasks YAQL query for each role. For
prep tasks these are always included on every Ansible host of a given
role. This differs from bootstrap tasks, which are only included for the
first host associated with a given role.

This will result in the following order of task imports with their
associated value of release and step:

fast_forward_upgrade_playbook
\_fast_forward_upgrade_release_tasks
  \_fast_forward_upgrade_prep_tasks              - release=ocata
     \_fast_forward_upgrade_prep_role_tasks      - release=ocata
       \_$roleA/fast_forward_upgrade_tasks       - release=ocata, step=0
       \_$roleB/fast_forward_upgrade_tasks       - release=ocata, step=0
       \_$roleA/fast_forward_upgrade_tasks       - release=ocata, step=1
       \_$roleB/fast_forward_upgrade_tasks       - release=ocata, step=1
       \_$roleA/fast_forward_upgrade_tasks       - release=ocata, step=2
       \_$roleB/fast_forward_upgrade_tasks       - release=ocata, step=2
       \_$roleA/fast_forward_upgrade_tasks       - release=ocata, step=3
       \_$roleB/fast_forward_upgrade_tasks       - release=ocata, step=3
  \_fast_forward_upgrade_bootstrap_tasks         - release=ocata
     \_fast_forward_upgrade_bootstrap_role_tasks - release=ocata
       \_$roleA/fast_forward_upgrade_tasks       - release=ocata, step=4
       \_$roleB/fast_forward_upgrade_tasks       - release=ocata, step=4
       \_$roleA/fast_forward_upgrade_tasks       - release=ocata, step=5
       \_$roleB/fast_forward_upgrade_tasks       - release=ocata, step=5
       \_$roleA/fast_forward_upgrade_tasks       - release=ocata, step=N
       \_$roleB/fast_forward_upgrade_tasks       - release=ocata, step=N
\_fast_forward_upgrade_release_tasks
  \_fast_forward_upgrade_prep_tasks              - release=pike
     \_fast_forward_upgrade_prep_role_tasks      - release=pike
       \_$roleA/fast_forward_upgrade_tasks       - release=pike, step=0
       \_$roleB/fast_forward_upgrade_tasks       - release=pike, step=0
       \_$roleA/fast_forward_upgrade_tasks       - release=pike, step=1
       \_$roleB/fast_forward_upgrade_tasks       - release=pike, step=1
       \_$roleA/fast_forward_upgrade_tasks       - release=pike, step=2
       \_$roleB/fast_forward_upgrade_tasks       - release=pike, step=2
       \_$roleA/fast_forward_upgrade_tasks       - release=pike, step=3
       \_$roleB/fast_forward_upgrade_tasks       - release=pike, step=3
  \_fast_forward_upgrade_bootstrap_tasks         - release=pike
     \_fast_forward_upgrade_bootstrap_role_tasks - release=pike
       \_$roleA/fast_forward_upgrade_tasks       - release=pike, step=4
       \_$roleB/fast_forward_upgrade_tasks       - release=pike, step=4
       \_$roleA/fast_forward_upgrade_tasks       - release=pike, step=5
       \_$roleB/fast_forward_upgrade_tasks       - release=pike, step=5
       \_$roleA/fast_forward_upgrade_tasks       - release=pike, step=N
       \_$roleB/fast_forward_upgrade_tasks       - release=pike, step=N
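
An illustrative sketch of the looping pattern this produces (the exact
rendered output differs; the file names follow the outputs described
above and fast_forward_upgrade_releases is an assumed Ansible variable
fed from FastForwardUpgradeReleases):

    # outer loop over releases (fast_forward_upgrade_playbook.yaml)
    - hosts: overcloud
      tasks:
        - include_tasks: fast_forward_upgrade_release_tasks.yaml
          with_items: "{{ fast_forward_upgrade_releases }}"
          loop_control:
            loop_var: release

    # inner loop over prep steps (fast_forward_upgrade_prep_tasks.yaml)
    - include_tasks: fast_forward_upgrade_prep_role_tasks.yaml
      with_sequence: start=0 end={{fast_forward_upgrade_prep_steps_max}}
      loop_control:
        loop_var: step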

bp fast-forward-upgrades
Change-Id: Ie2683fd7b81167abe724a7b9245bf85a0a87ad1d
2018-02-09 17:13:31 +01:00
Zuul
71accf3415 Merge "Add {{step}} var to Task name" 2018-02-02 20:12:46 +00:00
Zuul
541adb47d1 Merge "Make sure deploy_steps_max is defined for update playbook" 2018-01-31 15:19:57 +00:00
Sofer Athlan-Guyot
0f2f51d0e7 Fix hardcoded dependency for ExtraConfigPost.
Replace the hardcoded value with the right variable.  Note you need the
-1 because the loop is done with the jinja 'range' function, which
iterates from start to finish-1 [1].

[1] http://jinja.pocoo.org/docs/2.10/templates/#range
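
As a quick illustration of the range semantics (deploy_steps_max here is
just an example variable):

    {# with deploy_steps_max = 6, this renders steps 1, 2, 3, 4, 5 #}
    {% for step in range(1, deploy_steps_max) %}
    step {{ step }}
    {% endfor %}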

Change-Id: I01c8d2521c0ee249b94e9cdc1b895da1523c50a3
2018-01-31 11:02:03 +00:00
Zuul
cc4ec7caff Merge "Upgrade ODL" 2018-01-31 03:19:14 +00:00
Jiri Stransky
9ec8c8f8af Make sure deploy_steps_max is defined for update playbook
The update failed when deploy_steps_max was not defined. This also adds
non-repetitive facts gathering.

Change-Id: I1848dc47266a35a0ba383e55787c4aea986bd7a9
Closes-Bug: #1746306
2018-01-30 18:52:03 +01:00
Janki Chhatbar
886b815509 Upgrade ODL
Major upgrade (Q -> R) is complex in ODL. There are multiple components
involved.

This patch enables major upgrade of ODL. Steps involved are:
1. Block OVS instances to connect to ODL
2. Set ODL upgrade flag to True
3. Start ODL
4. Start Neutron re-sync and wait for it to finish
5. Delete OVS groups and ports
6. Stop OVS
7. Unblock OVS ports
8. Start OVS
9. Unset ODL upgrade flag
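
A heavily simplified sketch of how a couple of these steps could be
expressed as upgrade_tasks; the step numbers, the ODL address variable
and the restconf endpoint are illustrative assumptions, not the actual
implementation:

    upgrade_tasks:
      - name: Set ODL upgrade flag to True
        when: step|int == 2
        uri:
          # hypothetical restconf URL, shown only to illustrate the flow
          url: "http://{{ odl_api_ip }}:8081/restconf/config/upgrade:upgrade-config"
          method: PUT
          body_format: json
          body: {"upgrade-config": {"upgrade-in-progress": true}}
      - name: Stop OVS
        when: step|int == 6
        service:
          name: openvswitch
          state: stopped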

Change-Id: Icf98a1215900762a0677aabee1cccbf1d130e5bd
2018-01-30 10:20:55 +00:00
Zuul
9521292630 Merge "Pass blacklisted_{ip_addresses,hostnames} to major_upgrade_steps" 2018-01-27 02:50:59 +00:00
Zuul
95489c170d Merge "Add post_upgrade_tasks with post_upgrade_steps_playbook output" 2018-01-27 02:50:55 +00:00
Zuul
0662d10c7b Merge "Add tag "always" to the inclusion of global_vars.yaml" 2018-01-26 02:03:04 +00:00
Zuul
502cb77479 Merge "Move step 1 preparation to deploy-steps-tasks.yaml" 2018-01-26 01:53:18 +00:00
James Slagle
ba0719c1b7 Add {{step}} var to Task name
Add the {{step}} var to a couple of task names from
deploy-steps-tasks.yaml where it was missing. Makes the output a bit
more consistent and user friendly.
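
For example, a task name now renders with the current step included
(the task body here is illustrative):

    - name: "Write config data at step {{step}}"
      debug:
        msg: "running step {{step}}"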

Change-Id: I0a1b3f7f62543107b2f82ee57d75e65ecc7e02d4
2018-01-25 16:42:07 -05:00
marios
86e3cf22ef Add post_upgrade_tasks with post_upgrade_steps_playbook output
This wires up the post_upgrade_tasks tasks to be written as
ansible playbooks like the upgrade/update_tasks. This will
write out a post_upgrade_steps_playbook ansible playbook.

Used in https://review.openstack.org/#/c/489201/ by ODL
and https://review.openstack.org/#/c/505603/ cinder-volume
Q upgrade spec @ Ibde21e6efae3a7d311bee526d63c5692c4e27b28
Related Blueprint: major-upgrade-workflow
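
As with upgrade_tasks, a service template could then carry something
like the following (an illustrative sketch with a made-up path, not
taken from the patch):

    post_upgrade_tasks:
      - name: Clean up data left behind by the previous release
        when: step|int == 1
        file:
          path: /var/lib/example-service/upgrade-marker
          state: absent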

Change-Id: Ib6188c91076eabf20d6e358ca247fed802369a7f
2018-01-25 16:44:48 +00:00
Giulio Fidente
a592cc05bf Pass blacklisted_{ip_addresses,hostnames} to major_upgrade_steps
During upgrade, PostDeploySteps is mapped to major_upgrade_steps,
which was missing the blacklisted_{ip_addresses,hostnames} parameters
previously added only to deploy-steps.

Change-Id: Ifdcdad63e430972f7254f0c40e021b00333fdf56
Closes-Bug: 1745379
2018-01-25 14:59:25 +01:00
Juan Antonio Osorio Robles
d3e053cf4f Add tag "always" to the inclusion of global_vars.yaml
This file contains the max deploy steps.
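
A sketch of the inclusion with the tag applied (the surrounding play
structure is simplified):

    - name: Include global variables
      include_vars: global_vars.yaml
      tags:
        - always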

Change-Id: I9217bfb696aeaf9c027915fe93f60f33b7b95930
2018-01-25 08:08:59 +02:00
Martin André
997f4f6c2b Fix server post deploy step with config-download
Due to a misplaced endfor in the j2 template, the deployments.yaml
playbook was only included for the first role. The bogus task playbook
was rendered as:

  - hosts: overcloud
    name: Server Post Deployments
    gather_facts: no
    any_errors_fatal: yes
    tasks:
      - include: Controller/deployments.yaml
        vars:
          force: false
        when: role_name == 'Controller'
        with_items: "{{ Controller_post_deployments|default([]) }}"
    tags:
      - overcloud
      - post_deploy_steps
      - include: Compute/deployments.yaml
        vars:
          force: false
        when: role_name == 'Compute'
        with_items: "{{ Compute_post_deployments|default([]) }}"
    tags:
      - overcloud
      - post_deploy_steps
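
With the endfor moved, the intended rendering keeps every role's
include inside the single tasks list and the tags at play level,
roughly:

  - hosts: overcloud
    name: Server Post Deployments
    gather_facts: no
    any_errors_fatal: yes
    tasks:
      - include: Controller/deployments.yaml
        vars:
          force: false
        when: role_name == 'Controller'
        with_items: "{{ Controller_post_deployments|default([]) }}"
      - include: Compute/deployments.yaml
        vars:
          force: false
        when: role_name == 'Compute'
        with_items: "{{ Compute_post_deployments|default([]) }}"
    tags:
      - overcloud
      - post_deploy_steps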

Change-Id: I625fcaa7c4dcb4f99f30b9d6def293154f4eb7ec
2018-01-19 17:32:13 +01:00
Steven Hardy
a2a0ba9300 Move step 1 preparation to deploy-steps-tasks.yaml
This moves the writing of various files that are consumed by the
tasks into deploy-steps-tasks.yaml. Hopefully this is clearer, and it
also means we can drive the creation of these files directly via
ansible using https://review.openstack.org/528354

Change-Id: I173d22ebcbc986cefdef47f81298abe10ce8591b
2018-01-18 11:18:01 +00:00
Zuul
d5ddb1d860 Merge "Default empty map for docker_config steps" 2018-01-17 19:04:23 +00:00
Zuul
01345362d1 Merge "Also pass blacklisted hostnames" 2018-01-17 19:04:20 +00:00
rabi
2e7b195c03 Include common_deploy_steps_tasks.yaml
The common tasks file was changed in
0524c8635357d5617cc00d945d796d8f7d05c853, but the update playbook still
includes the old one.

Change-Id: Ie75c485f6739b9520d1a64ae28a6dd260c4d601c
Closes-Bug: #1743760
2018-01-17 17:47:37 +05:30
Zuul
08f9693bb5 Merge "Add become: true for host_prep_tasks" 2018-01-16 15:34:54 +00:00
Steven Hardy
41988eab39 Default empty map for docker_config steps
In the event a step has no services defined, we must still write the
config, as this is needed if services are disabled on update such that
a step becomes empty: we must run paunch on every step or the cleanup
of the "old" services does not happen.

Closes-Bug: 1742915
Change-Id: Iee01002f56b5311560557f2bf6f053601b9d43d7
2018-01-16 09:22:39 +00:00
Zuul
33a254cacb Merge "Reinstate common overcloud manifest for all roles" 2018-01-16 03:42:55 +00:00
Steven Hardy
9664b3b2e8 Add become: true for host_prep_tasks
I561b5ef6dee0ee7cac67ba798eda284fb7f7a8d0 added this for the main
deploy steps, but there are some host_prep_tasks which require this
too; specifically, the nova-libvirt tasks fail locally without -b.
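
A minimal sketch of the host prep play with the flag added (the play
layout and file name are assumed, not copied from the template):

    - hosts: overcloud
      name: Server Host prep steps
      become: true
      gather_facts: no
      any_errors_fatal: yes
      tasks:
        - include: host_prep_tasks.yaml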

Change-Id: I29cb8b0962c0dfcf7950d65305a3adef1f1268c3
2018-01-15 18:31:46 +00:00
James Slagle
d4a5876e57 Also pass blacklisted hostnames
Workflows may need access to the list of blacklisted hostnames so they
can filter on that value. This change adds that input to the workflow
execution environment.

Change-Id: I41de32b324a406633699d17933ae05417b28c57b
Partial-Bug: #1743046
2018-01-15 15:26:11 +01:00
James Slagle
79570ed2b9 Workflow execution blacklist support
Workflows triggered from deploy-steps.j2 were not honoring the
blacklist, particularly ceph-ansible. This patch starts to address that
issue by passing in a list of blacklisted ip addresses to the workflow
execution environment that the workflow can make use of to filter
against ctlplane_service_ips.

Change-Id: Ic158171c629e82892e480f1e6903a67457f86064
Partial-Bug: #1743046
2018-01-15 15:25:49 +01:00
Steven Hardy
bb9fd2c61a Reinstate common overcloud manifest for all roles
This was lost in the translation to ansible, but it's needed to
enable existing interfaces such as hiera includes via *ExtraConfig.

For reference, this problem was introduced in
I674a4d9d2c77d1f6fbdb0996f6c9321848e32662. This fix only considers
restoring the behaviour prior to that patch (for the baremetal puppet
apply); further discussion is required on if/how this could be applied
to the new container architecture.

Change-Id: I0384edb23eed336b95ffe6293fe7d4248447e849
Partial-Bug: #1742663
2018-01-11 18:42:45 +00:00
marios
0c76a2acbf Start step at 0 for update_ + upgrade_steps_playbook
In I93dc8b4cefbd729ba7afa3a4d81b4ac95344cac2 for bug 1717292 the
step variable passed into the update_steps_playbook and the
upgrade_steps_playbook outputs is set to start at sequence 1. This
means that any upgrade or update_tasks that are expected to run in
step 0 will never be executed.
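
The effect is that the generated step loop starts at 0, e.g. (file and
variable names assumed):

    - include: upgrade_tasks.yaml
      with_sequence: start=0 end={{upgrade_steps_max-1}}
      loop_control:
        loop_var: step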

Change-Id: Ic2dab617269d47c4ea028cb35cdba2068a467ff9
Closes-Bug: 1741926
2018-01-08 16:31:40 +00:00
Emilien Macchi
eb324768d0 puppet apply: add --summarize
... so we can see how long resource configuration takes in Puppet
catalogs, and more easily debug why we have timeouts.

Change-Id: If3fae8837140caae91120e46b4880146ffe22afc
2018-01-04 09:37:46 -08:00
Emilien Macchi
c45a8a462a deploy-steps.j2: use ansible to bootstrap environment
We introduced a new Ansible role, tripleo-bootstrap:
I560273be2ebe3a49ff37e3682222706939e5d879

This role takes care of preparing an environment that will deploy
TripleO later.
This patch executes the new role on all hosts.
We don't gather facts (they were already gathered previously), we want
to fail on any error, and the role will be executed with the
pre_deploy_steps tag.
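
A sketch of the resulting play (the role name comes from the referenced
change; the rest of the layout is assumed):

    - hosts: overcloud
      name: TripleO Bootstrap
      gather_facts: no
      any_errors_fatal: yes
      roles:
        - tripleo-bootstrap
      tags:
        - pre_deploy_steps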

Change-Id: If9306473701a340d577bbe0a4a7dfee90be99c2f
Depends-On: I560273be2ebe3a49ff37e3682222706939e5d879
2017-12-07 00:08:33 +00:00
Zuul
71a2faab24 Merge "Add deploy_steps_tasks interface" 2017-12-05 21:49:44 +00:00
Zuul
410027d64f Merge "Add name property where missing" 2017-12-05 18:07:49 +00:00
Steven Hardy
0524c86353 Add deploy_steps_tasks interface
This allows per-step ansible tasks to be run on the nodes when using
the config download mechanism, e.g. adding the following to a service
template will create /tmp/testperstep populated for every step.

     deploy_steps_tasks:
        - name: Test something happens each step
          lineinfile:
            path: /tmp/testperstep
            create: true
            line: "{{step}} step happened"

Change-Id: Ic34f5c48736b6340a1cfcea614b05e33e2fce040
2017-12-05 08:47:48 +02:00
James Slagle
7a3fc67559 Add name property where missing
All SoftwareDeployment resources should use the name property when using
config-download.

This also adds a validation to check that the name property is set in
yaml-validate.py
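
For example, a deployment resource now carries an explicit name
(resource and parameter names here are illustrative):

    ExampleDeployment:
      type: OS::Heat::SoftwareDeployment
      properties:
        name: ExampleDeployment
        config: {get_resource: ExampleConfig}
        server: {get_param: server}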

Change-Id: I621e282a2e2c041a0701da0296881c615f0bfda4
Closes-Bug: #1733586
2017-12-04 18:01:52 -05:00
Zuul
fa79c09ee5 Merge "Swap the order of stdout and stderr in debug output" 2017-11-28 22:53:30 +00:00
Zuul
3d3169e473 Merge "Select first node as bootstrap node not using name" 2017-11-28 16:50:33 +00:00
Ben Nemec
5595e7fc14 Swap the order of stdout and stderr in debug output
Generally this data is looked at because something failed, and in
that case the relevant error is likely to be at the end of stderr.
By concatenating stderr first and then stdout, as we were, it is
possible for the stderr to get lost entirely in the failures list.
Even if that doesn't happen, it's best to output the relevant error
right at the end of the output where people will see it.  Previously
it would be buried in the middle of the debug output.

Change-Id: I952fd1af5778ade1eb6b0599d983f98cadeb7f6f
2017-11-28 09:09:32 -06:00
Zuul
897a03f0ad Merge "Allow empty list of enabled_roles" 2017-11-28 11:29:21 +00:00
Steven Hardy
a460a093c7 Select first node as bootstrap node not using name
This fixes a regression which reintroduced bug #1640449, because we
hard-code the node index/name instead of sorting the map of servers.

Change-Id: Iaffc66a41edf176dde3b5adf603a9cff6db7aa24
Closes-Bug: #1724888
2017-11-28 09:01:19 +00:00
Steven Hardy
30d602c999 Allow empty list of enabled_roles
This is the case when creating a compute-only stack with the default roles data

E.g:

openstack overcloud roles generate --roles-path tripleo-heat-templates/roles -o compute_only_roles_data.yaml Compute
openstack overcloud deploy --templates tripleo-heat-templates --stack compute1 -r compute_only_roles_data.yaml

Change-Id: I44e6450c1afe8fb972731e46b40fc2e63b320a5b
2017-11-23 11:56:07 +00:00
Carlos Camacho
927495fe3d Change template names to queens
The new master branch should now point to queens instead of pike.

So, HOT templates should specify that they might contain features
for the queens release [1].

[1]: https://docs.openstack.org/heat/latest/template_guide/hot_spec.html#queens
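
i.e. the version line in the templates becomes:

    heat_template_version: queens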

Change-Id: I7654d1c59db0c4508a9d7045f452612d22493004
2017-11-23 10:15:32 +01:00
Steven Hardy
9ce86956ff Add external_post_deploy_tasks interface
This adds another interface like external_deploy_tasks, but instead
of running on each deploy step, the tasks are run after the deploy
is completed, so it's useful for per-service bootstrapping such
as is under development for octavia in:

https://review.openstack.org/#/c/508195
https://review.openstack.org/#/c/515402/

These reviews could potentially be reworked to use this interface,
which would avoid the issue where the configuration needs to happen
after all the openstack services are deployed and configured.

As an example, here is how you could create a temp file post deploy:

    external_post_deploy_tasks:
        - name: Test something happens post-deploy
          copy:
            dest: /tmp/debugpostdeploy
            content: "done"

Change-Id: Iff3190a7d5a238c8647a4ac474821aeda5f2b1f8
2017-11-22 18:39:05 +00:00
Zuul
344515dee0 Merge "Add tags to plays" 2017-11-21 14:51:01 +00:00
Zuul
aa1e3955fa Merge "Rename Undercloud->External deployment" 2017-11-21 14:50:59 +00:00
Michele Baldessari
ed2b957a4f Fix all outputs|failed and outputs is defined
The ansible "failed_when" filter that uses a registered output
of a previous task piped to the '|failed' filter does not work
as expected. Given the following playbook:

  - name: return code
    shell: |
      echo "fail 2"
      exit 2
    failed_when: false
    log_when: false
    register: outputs
  - debug:
      msg: "rc: {{ outputs.rc }}"
  - debug: msg="Broken (does not fail as expected)"
    when: outputs is defined
    failed_when: outputs|failed
  - debug: msg="Working (fails as expected)"
    when: outputs is defined
    failed_when: outputs.rc != 0

We obtain the following output:

TASK [return code] ****
changed: [localhost]

TASK [debug] **********
ok: [localhost] => {
    "msg": "rc: 2"
}

TASK [debug] **********
ok: [localhost] => {
    "failed_when_result": false,
    "msg": "Broken (does not fail as expected)"
}

TASK [debug] **********
fatal: [localhost]: FAILED! => {
    "failed_when_result": true,
    "msg": "Working (fails as expected)"
}

This means that the 'outputs|failed' just does not work at all.
Let's move to a more explicit check on the rc code of the registered
variable.

We also need to fix all the "outputs is defined" checks, because
when a task is skipped the registered outputs variable *is* actually
defined as the following dictionary:
{'skip_reason': u'Conditional result was False', 'skipped': True, 'changed': False}

So we use "outputs.rc is defined" in order to make sure that the
previous task did indeed run.

Closes-Bug: #1733402

Change-Id: I6ef53dc3f9aede42f10c7f110d24722355481261
2017-11-21 08:06:41 +01:00
Giulio Fidente
f890e4e512 Revert "Revert "Tag workflows created by the templates""
Also touches ceph-ansible/ceph-base.yaml to make sure this is
tested by scenario001

Change-Id: I7a7beea36669a79662f384315a3fbd19c958de8a
Related-Bug: #1715389
2017-11-15 17:24:51 +00:00
Oliver Walsh
61fcfca045 Refactor cellv2 host discovery logic to avoid races
The compute service list is polled until all expected hosts are reported or a
timeout occurs (600s).

Adds a cellv2_discovery flag to puppet services. Used to generate a list of
hosts that should have cellv2 host mappings.

Adds a canonical fqdn that should match the fqdn reported by a host.

Adds the ability to upload a config script for docker config instead of
using complex bash one-liners.
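
A rough sketch of the polling idea as an Ansible task (the command,
variable names and retry counts are illustrative, not the actual
script):

    - name: Wait until all expected compute hosts are reported
      shell: openstack compute service list --service nova-compute -f value -c Host
      register: discovered_hosts
      until: expected_hosts | difference(discovered_hosts.stdout_lines) | length == 0
      retries: 60
      delay: 10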

Closes-bug: 1720821
Change-Id: I33e2f296526c957cb5f96dff19682a4e60c6a0f0
2017-11-08 23:20:46 +00:00
James Slagle
659d23a506 Add tags to plays
Adds various tags to the deploy_steps_playbook output.

- facts: Run fact gathering
- overcloud: Run all plays for overcloud deployment
- pre_deploy_steps: Run deployments that happen pre deploy_steps
- host_prep_steps: Run host_prep_tasks
- deploy_steps: Run deploy_steps
- post_deploy_steps: Run deployments that happen post deploy_steps
- external: Run all external deployments
- external_deploy_steps: Run all external deployments

external and external_deploy_steps are the same for now. However, I
kept them both because "external" parallels "overcloud", and
"external_deploy_steps" parallels "deploy_steps".

Also fixes the name of the host_prep_tasks play that was incorrectly
named "Deployment steps".

Change-Id: Ic6579d187035d00edc1f9190f8ebd12d4400d919
2017-11-08 10:54:31 -05:00
James Slagle
266c6f124a Rename Undercloud->External deployment
These plays are better named External deployment instead of Undercloud
deployment, as they aren't actually deploying the Undercloud. It's
confusing to see "Undercloud deployment" and "Overcloud deployment"
tasks in the same output when the undercloud is already deployed.

As the tasks are driving external deployments, and that's what the Heat
output is called (external_deploy_steps_tasks), this renames them to
External deployment.

Change-Id: I685d16d8b3dc5e0d59955ae4c8fac7410168d083
2017-11-08 10:35:39 -05:00
Zuul
fb3a378b61 Merge "Set become:false for undercloud plays" 2017-11-08 15:27:25 +00:00