Currently this assumes all tasks will run on the primary controller,
but because of composable roles that may not be the case.
For example, if you deploy keystone on any role other than the
role tagged primary (e.g. Controller by default), we don't create
any of the users/endpoints, because the tasks aren't written to
the role unless keystone actually runs there.
Closes-Bug: #1792613
Change-Id: Ib6efd03584c95ed4ab997f614aa3178b01877b8c
If compute nodes are deployed without deploying/updating the controllers, then
the computes will not have cellv2 mappings, as this discovery is run in the
controller deploy steps (nova-api).
This can happen if the controller nodes are blacklisted during a compute
scale-out. It's also likely to be an issue going forward if the deployment is
staged (e.g. split control plane).
This change moves the cell_v2 discovery logic to the nova-compute/nova-ironic
deploy step.
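A minimal sketch of what the compute-side task could look like, assuming the
standard "nova-manage cell_v2 discover_hosts" command is invoked from a deploy
step (the step number and exact invocation are illustrative, not the actual
implementation):

  deploy_steps_tasks:
    - name: Discover compute hosts in cell_v2
      when: step|int == 5
      command: nova-manage cell_v2 discover_hosts
      become: true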
Closes-bug: 1786961
Change-Id: I12a02f636f31985bc1b71bff5b744d346286a95f
This change adds a stack output to services/common.yaml that acts
as an interface for Ansible group variables. Ansible vars provided
via this interface will be consumed by config-download and written
under $config-download-dir/group_vars/ where they can be accessed
by ansible commands.
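For illustration, a variable exported for (say) the Compute role ends up in a
file such as $config-download-dir/group_vars/Compute (the variable name below
is hypothetical):

  # group_vars/Compute, written by config-download
  my_service_debug: true

Ansible loads these automatically for hosts in the matching group when the
playbooks are run from the config-download directory.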
Part of blueprint ansible-tasks-to-role.
Change-Id: Ib70e7dda13b4a3ed30af88906ba42c25cdc93038
This has been unused for a while, and even deprecation was scheduled
(although the patch never merged [1]). So, in order to stop folks
getting confused by it, it's being removed.
[1] https://review.openstack.org/#/c/543871/
Change-Id: Iada64874432146ef311682f26af5990469790ed2
This has been unused for a while, and even deprecation was scheduled
(although the patch never merged [1]). So, in order to stop folks
getting confused by it, it's being removed.
[1] https://review.openstack.org/#/c/543871/
Change-Id: Icc6b51044ccc826f5b629eb1abd3342813ed84c0
Composable service templates can now define external_update_tasks and
external_upgrade_tasks. They are meant for update/upgrade logic of
services deployed via external_deploy_tasks. The external update
playbook first executes external_update_tasks and then
external_deploy_tasks; the procedure for upgrades works
analogously. This all happens within a single playbook, so variables
or fact overrides exported from the update/upgrade tasks will be
available to the deploy tasks during the update/upgrade procedure.
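A minimal sketch of a service adding the new section (the service and variable
names are illustrative); because everything runs in one playbook, a fact set
here is visible to that service's external_deploy_tasks executed afterwards:

  external_update_tasks:
    - name: Switch my_service into update mode before redeploy
      when: step|int == 0
      set_fact:
        my_service_update_mode: true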
Partial-Bug: #1783949
Change-Id: Ib2474e8f69711cd6610a78884d5032ffd19ad249
This should merge multiple, differing values for the same key
found in different services.
For example, assuming two services defining a key as follows:
  config_settings:
    mykey:
      - val1

  config_settings:
    mykey:
      - val2
      - val3

the content of the key, as seen by ansible or puppet on the nodes,
will be:

  mykey: ['val1','val2','val3']
Change-Id: I190374e36ad1a2b57611a3a9d0a52ceb1a049aff
The new master branch should now point to rocky, so HOT templates
should specify that they might contain features for the Rocky
release [1].
This submission also updates the yaml validation to use only the latest
heat_version alias. There are cases in which we will need to set
the version for specific templates, i.e. mixed versions, so a
variable is added to assign specific templates to specific heat_version
aliases, avoiding the introduction of errors when bulk-replacing
the old version in new releases.
[1]: https://docs.openstack.org/heat/latest/template_guide/hot_spec.html#rocky
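For reference, this means templates carry the release alias at the top, e.g.
(illustrative header):

  heat_template_version: rocky

  description: >
    Example template using the Rocky heat_template_version alias.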
Change-Id: Ib17526d9cc453516d99d4659ee5fa51a5aa7fb4b
The resulting pre_upgrade_rolling_steps_playbook will be executed in a
node-by-node rolling fashion at the beginning of major upgrade
workflow (before upgrade_steps_playbook).
The current intended use case is special handling of L3 agent upgrade
when moving Neutron services into containers. Special care needs to be
taken in this case to preserve L3 connectivity of instances (with
regard to dnsmasq and keepalived sub-processes of L3 agent).
The playbook can be run before the main upgrade like this:
openstack overcloud upgrade run --roles overcloud --playbook pre_upgrade_rolling_steps_playbook.yaml
Partial-Bug: #1738768
Change-Id: Icb830f8500bb80fd15036e88fcd314bf2c54445d
Implements: blueprint major-upgrade-workflow
In the last step of FFU we need to switch repos before running the upgrade.
We do so by introducing post FFU steps and running the switch in
them. We also update the heat agents and os-collect-config on the nodes.
Change-Id: I649afc6fa384ae21edc5bc917f8bb586350e5d47
Updating OpenStack (within release) means updating ODL from v1 to v1.1.
This is done by "openstack overcloud update" which collects
update_tasks. ODL needs 2 different steps to achieve this
minor update. These are called Level1 and Level2. L1 is
simple - stop ODL, update, start. This is taken care of by paunch
and no separate implementation is needed. L2 has extra steps
which are implemented in update_tasks and post_update_tasks.
Updating ODL within the same major release (1->1.1) consists of either
L1 or L2 steps. These steps are decided from ODLUpdateLevel parameter
specified in environments/services-docker/update-odl.yaml.
Upgrading ODL to the next major release (1.1->2) requires
only the L2 steps. These are implemented as upgrade_tasks and
post_upgrade_tasks in https://review.openstack.org/489201.
Steps involved in level 2 update are
1. Block OVS instances to connect to ODL
2. Set ODL upgrade flag to True
3. Start ODL
4. Start Neutron re-sync and wait for it to finish
5. Delete OVS groups and ports
6. Stop OVS
7. Unblock OVS ports
8. Start OVS
9. Unset ODL upgrade flag
These steps are exactly the same as the upgrade_tasks.
The logic implemented is:
follow the upgrade_tasks when update_level == 2
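A minimal sketch of the environment file mentioned above, selecting the L2
path (the parameter name comes from this change; the value is illustrative):

  # environments/services-docker/update-odl.yaml
  parameter_defaults:
    ODLUpdateLevel: 2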
Change-Id: Ie532800663dd24313a7350b5583a5080ddb796e7
As outlined in the spec, fast-forward upgrades aim to take an
environment from an initial release of N to a release of N>=2, beyond
that of the traditionally supported N+1 upgrade path provided today by
many OpenStack projects.
For TripleO the first phase of this upgrade will be to move the
environment to the release prior to the target release. This will be
achieved by disabling all OpenStack control plane services and then
performing the minimum number of steps required to upgrade each service
through each release until finally reaching the target release.
This change introduces the framework for this phase of the fast-forward
upgrades by adding playbooks and task files as outputs to RoleConfig.
- fast_forward_upgrade_playbook.yaml
This is the top level play and acts as the outer loop of the process,
iterating through the required releases as set by the
FastForwardUpgradeReleases parameter for the fast-forward section of the
upgrade. This currently defaults to Ocata and Pike for Queens.
Note that this play is run against the overcloud host group and it is
currently assumed that the inventory used to run this play is provided
by the tripleo-ansible-inventory command.
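For reference, a minimal sketch of overriding the release list in an
environment file (the parameter name comes from this change; the values and
their casing are illustrative of the stated defaults):

  parameter_defaults:
    FastForwardUpgradeReleases: ['ocata', 'pike']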
- fast_forward_upgrade_release_tasks.yaml
This output simply imports the top level prep and bootstrap task files.
- fast_forward_upgrade_prep_tasks.yaml
- fast_forward_upgrade_bootstrap_tasks.yaml
These outputs act as the inner loop for the fast-forward upgrade phase,
iterating over step values while importing their associated role tasks.
As prep tasks are carried out first for each release, we loop over step
values starting at 0 and ending at the defined
fast_forward_upgrade_prep_steps_max (currently 3).
Following this we then complete the bootstrap tasks for each release,
looping over step values starting at
fast_forward_upgrade_prep_steps_max + 1 (currently 4) and ending at
fast_forward_upgrade_steps_max (currently 9).
- fast_forward_upgrade_prep_role_tasks.yaml
- fast_forward_upgrade_bootstrap_role_tasks.yaml
These outputs then finally import the fast_forward_upgrade_tasks files
generated by the FastForwardUpgradeTasks YAQL query for each role. For
prep tasks these are always included on every Ansible host of a given
role. This differs from bootstrap tasks, which are only included for the
first host associated with a given role.
This will result in the following order of task imports with their
associated value of release and step:
fast_forward_upgrade_playbook
\_fast_forward_upgrade_release_tasks
  \_fast_forward_upgrade_prep_tasks - release=ocata
    \_fast_forward_upgrade_prep_role_tasks - release=ocata
      \_$roleA/fast_forward_upgrade_tasks - release=ocata, step=0
      \_$roleB/fast_forward_upgrade_tasks - release=ocata, step=0
      \_$roleA/fast_forward_upgrade_tasks - release=ocata, step=1
      \_$roleB/fast_forward_upgrade_tasks - release=ocata, step=1
      \_$roleA/fast_forward_upgrade_tasks - release=ocata, step=2
      \_$roleB/fast_forward_upgrade_tasks - release=ocata, step=2
      \_$roleA/fast_forward_upgrade_tasks - release=ocata, step=3
      \_$roleB/fast_forward_upgrade_tasks - release=ocata, step=3
  \_fast_forward_upgrade_bootstrap_tasks - release=ocata
    \_fast_forward_upgrade_bootstrap_role_tasks - release=ocata
      \_$roleA/fast_forward_upgrade_tasks - release=ocata, step=4
      \_$roleB/fast_forward_upgrade_tasks - release=ocata, step=4
      \_$roleA/fast_forward_upgrade_tasks - release=ocata, step=5
      \_$roleB/fast_forward_upgrade_tasks - release=ocata, step=5
      \_$roleA/fast_forward_upgrade_tasks - release=ocata, step=N
      \_$roleB/fast_forward_upgrade_tasks - release=ocata, step=N
\_fast_forward_upgrade_release_tasks
  \_fast_forward_upgrade_prep_tasks - release=pike
    \_fast_forward_upgrade_prep_role_tasks - release=pike
      \_$roleA/fast_forward_upgrade_tasks - release=pike, step=0
      \_$roleB/fast_forward_upgrade_tasks - release=pike, step=0
      \_$roleA/fast_forward_upgrade_tasks - release=pike, step=1
      \_$roleB/fast_forward_upgrade_tasks - release=pike, step=1
      \_$roleA/fast_forward_upgrade_tasks - release=pike, step=2
      \_$roleB/fast_forward_upgrade_tasks - release=pike, step=2
      \_$roleA/fast_forward_upgrade_tasks - release=pike, step=3
      \_$roleB/fast_forward_upgrade_tasks - release=pike, step=3
  \_fast_forward_upgrade_bootstrap_tasks - release=pike
    \_fast_forward_upgrade_bootstrap_role_tasks - release=pike
      \_$roleA/fast_forward_upgrade_tasks - release=pike, step=4
      \_$roleB/fast_forward_upgrade_tasks - release=pike, step=4
      \_$roleA/fast_forward_upgrade_tasks - release=pike, step=5
      \_$roleB/fast_forward_upgrade_tasks - release=pike, step=5
      \_$roleA/fast_forward_upgrade_tasks - release=pike, step=N
      \_$roleB/fast_forward_upgrade_tasks - release=pike, step=N
bp fast-forward-upgrades
Change-Id: Ie2683fd7b81167abe724a7b9245bf85a0a87ad1d
This was lost in the translation to ansible, but it's needed to
enable existing interfaces such as hiera includes via *ExtraConfig.
For reference this problem was introduced in:
I674a4d9d2c77d1f6fbdb0996f6c9321848e32662, and this fix only considers
returning the behaviour prior to that patch (for the baremetal puppet
apply); further discussion is required on if/how this could be applied
to the new container architecture.
Change-Id: I0384edb23eed336b95ffe6293fe7d4248447e849
Partial-Bug: #1742663
This allows per-step ansible tasks to be run on the nodes when using
the config download mechanism, e.g. adding the following to a service
template will create /tmp/testperstep populated for every step.
  deploy_steps_tasks:
    - name: Test something happens each step
      lineinfile:
        path: /tmp/testperstep
        create: true
        line: "{{step}} step happened"
Change-Id: Ic34f5c48736b6340a1cfcea614b05e33e2fce040
This adds another interface like external_deploy_tasks, but instead
of running on each deploy step, the tasks are run after the deploy
is completed, so it's useful for per-service bootstrapping such
as is under development for octavia in:
https://review.openstack.org/#/c/508195
https://review.openstack.org/#/c/515402/
These reviews could potentially be reworked to use this interface,
which would avoid the issue where the configuration needs to happen
after all the openstack services are deployed and configured.
As an example, here is how you could create a temp file post deploy:
  external_post_deploy_tasks:
    - name: Test something happens post-deploy
      copy:
        dest: /tmp/debugpostdeploy
        content: "done"
Change-Id: Iff3190a7d5a238c8647a4ac474821aeda5f2b1f8
The compute service list is polled until all expected hosts are reported or a
timeout occurs (600s).
Adds a cellv2_discovery flag to puppet services. Used to generate a list of
hosts that should have cellv2 host mappings.
Adds a canonical FQDN that should match the FQDN reported by a host.
Adds the ability to upload a config script for docker config instead of using
complex bash one-liners.
Closes-bug: 1720821
Change-Id: I33e2f296526c957cb5f96dff19682a4e60c6a0f0
Services can define external_deploy_tasks, which are meant to be
executed on the undercloud node. They are step-based like the other
Ansible tasks we have, and they get executed during each deployment
step before the puppet and docker tasks.
These tasks can be used to perform complex actions from the
undercloud, such as executing nested installers like kubespray or
ceph-ansible. This should allow deploying the overcloud with a single
Ansible playbook, and without creating an Ansible->Mistral->Ansible loop.
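A hedged sketch of such tasks in a service template (the task body and command
are illustrative, not the real ceph-ansible/kubespray integration):

  external_deploy_tasks:
    - name: Run a nested installer from the undercloud
      when: step|int == 2
      shell: |
        echo "invoke e.g. ceph-ansible or kubespray here"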
Implements: blueprint ansible-config-download
Change-Id: I3dcafb96f5cea5fdcebe2b2012b61a38b0568834
Depends-On: I8491540edf78711f3229eabeda22a17cd55e99c8
Using the service_ prefix seems inconsistent with its use in
service_config_settings (vs config_settings).
Change-Id: Ia39f181415bee0071409dabddfa0c5c312915e1f
This adds a new config/deployment per role that will come after any
post deploy steps. It drives the same ansible config as the
upgrade_tasks but instead collects the post_upgrade_tasks for any
service in the given role.
The workflow is upgrade_tasks, then post deploy steps (either
puppet/ or docker/ depending on the env) and then the
post_upgrade_tasks added here.
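A minimal sketch of a service defining the new section (the task body is
illustrative and ignores the step/tag scoping that real tasks would share
with upgrade_tasks):

  post_upgrade_tasks:
    - name: Example post-upgrade action for this service
      debug:
        msg: "runs after upgrade_tasks and the post deploy steps"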
This is added to the pacemaker/cinder-volume.yaml service for now;
see the bug below for more info.
Change-Id: Iced34fecf02ebddc91df9302de54d2f4c2cab680
Closes-Bug: 1706951
These work the same way as upgrade_tasks *but* they use a step variable
instead of tags, so we can iterate over a count/sequence, which isn't
possible via a wrapper playbook with tags (we may want to align upgrade
tasks with the same approach if this works out well).
Note the tasks can be run via ansible-playbook on the undercloud, like:
openstack overcloud config download --config-dir tmpconfig
cd tmpconfig/tripleo-HCrDA6-config
ansible-playbook -b -i /usr/bin/tripleo-ansible-inventory update_steps_playbook.yaml --limit controller
The above will do a rolling update for the Controller role (note the inconsistent
capitalization, we probably need to fix the group naming in tripleo-ansible-inventory)
because we specify serial: 1 in the playbook.
You can also trigger an update explicitly on one node like this, which is useful for debugging:
ansible-playbook -vvv -b -i /usr/bin/tripleo-ansible-inventory update_steps_playbook.yaml --limit overcloud-controller-0
Change-Id: I20bb3e26ab9d9cadf1a31fd304de8a014a901aa9
The key_name default is ignored because the parameter is used in
some mutually exclusive environments where the default doesn't
need to be the same.
Change-Id: I77c1a1159fae38d03b0e59b80ae6bee491d734d7
Partial-Bug: 1700664
This makes the RolesData output more accurate, and we can rework
things so docker-puppet only gets run when there is a non-empty
file calculated (e.g. there are tasks to run).
Change-Id: I8cdab3c857977c80fe2e359ab9e05740a838d66b
This stores the result of the yaql queries etc for easier debugging, and
also so there's no risk we constantly re-evaluate the expensive query
which can happen with some heat versions and configurations.
This also gives a nicer error when things go wrong, as when a query fails
you know which resource had the error, and the validation on resources
is currently stricter due to bug #1599114. We also get some additional
type validation from each OS::Heat::Value resource, e.g. it checks whether
the calculated value is a valid map or list.
The final advantage (and the original motivation for doing this) is that
we can easily filter null values for any outputs where this isn't already
done, which makes the config data written via openstack overcloud config
download cleaner.
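A minimal sketch of the pattern, caching a filtered yaql result in an
OS::Heat::Value resource (resource names and the query are illustrative):

  resources:
    ServiceChainRoleData:
      type: OS::Heat::Value
      properties:
        # an optional "type" property can be set here for extra validation
        value:
          yaql:
            # filter out null entries so the cached result is clean
            expression: $.data.role_data.where($ != null)
            data:
              role_data: {get_attr: [ServiceChain, role_data]}

  outputs:
    role_data:
      value: {get_attr: [ServiceChainRoleData, value]}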
Change-Id: Ia6697cf2e47f3f7b727d620536e0873a985c98c4
Moving these means we get a more accurate overcloud RoleData
output, which more closely reflects what is actually
deployed.
Change-Id: I154f36c1597cf4abe29ca0bfe15a54f507433fb1
Makes it possible to resolve network subnets within a service
template; the data is transported into a new property, ServiceData,
wired into every service, which hopefully is generic enough to
be extended in the future and transport more data.
Data can be consumed in service templates to set config values
which need to know the subnet in which a daemon operates (for
example the Ceph public vs cluster network).
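A hedged sketch of how a service might consume the new property (the
net_cidr_map key, network name and hiera key are assumptions for
illustration):

  parameters:
    ServiceData:
      default: {}
      description: Dictionary packing service data
      type: json

  # ... later, in the service's role_data output:
      config_settings:
        ceph::profile::params::cluster_network:
          get_param: [ServiceData, net_cidr_map, storage_mgmt]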
Change-Id: I28e21c46f1ef609517175f7e7ee19e28d1c0cba2
This new directory has now been added to the RDO packaging so we
can move things common to both the puppet and container architectures
here, starting with the recently combined services.yaml.
Change-Id: If2ce27188c4c15002b3ad830e8d6eb9504d2f3d2