Moving to non-voting until the gate is green
RDO-Cloud is down and we cannot fix the dist-git.
Related-Bug: #1777168
Change-Id: Ia2c18ff554dc8b980528f1905bbad98dced2c336
RhsmVars, rather than 'vars', should be used as the value to be
replaced for the global values.
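For illustration only (the specific rhsm_* keys below are assumptions,
not part of this change), RhsmVars is consumed from an environment file
roughly like:

    parameter_defaults:
      RhsmVars:
        rhsm_method: portal
        rhsm_username: example-user
        rhsm_password: example-password
        rhsm_autosubscribe: true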
Closes-Bug: #1776597
Change-Id: I480b3c51787547b9dd4e1401363a5da7c40798a8
This can be used to activate net-config-noop.yaml and
disable os-net-config on all roles. Useful if you are using
deployed servers and want to pre-configure your networks.
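A minimal sketch of the kind of mapping this enables (role names and
the relative path are illustrative, not the exact contents of the
patch):

    resource_registry:
      OS::TripleO::Controller::Net::SoftwareConfig: ../net-config-noop.yaml
      OS::TripleO::Compute::Net::SoftwareConfig: ../net-config-noop.yaml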
Change-Id: I80e5fb586f6de0bccd6245237d23712310c78588
Add cleanup tasks for Ironic, Keystone, Mistral and Zaqar, so that when
upgrading an undercloud to be containerized, an operator can also
clean up these services' RPMs.
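A rough sketch of such a cleanup task (package names and step are
assumptions, not taken from the patch):

    upgrade_tasks:
      - name: Remove the openstack-ironic packages from the host
        when: step|int == 3
        package:
          name:
            - openstack-ironic-api
            - openstack-ironic-conductor
          state: absent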
Depends-On: I2af99d8bad58f12bd895b473ecb84e4f2091f738
Change-Id: I7e257cece9fa3bdd9f2d1be08ccdf5c681213149
We've hit an issue in multinode CI where Ansible wasn't getting updated
during `upgrade prepare`. UpgradeInitCommonCommand wasn't being
executed because it was left out of deployed-server.yaml.
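For context, a sketch of the parameter that deployed-server.yaml needs
to consume (the description wording is an assumption; the wiring into
the server config is not shown):

    parameters:
      UpgradeInitCommonCommand:
        type: string
        default: ''
        description: >
          Common commands required by the upgrade process, for example
          updating Ansible on the nodes during 'upgrade prepare'.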
Change-Id: I940c3e05944829a6a4155722181e5fa85a963660
Closes-Bug: #1776474
This is the follow-up patch for change
Ie4fe217bd119b638f42c682d21572547f02f17b2, which allows
configuring an NFS backend for Nova.
To improve security for migration, this change enables
TUNNELLED mode for migration when NFS shared storage is used.
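For illustration, the NFS backend from the referenced change is enabled
with parameters along these lines (the share path, and the exact
parameter names, are assumptions here):

    parameter_defaults:
      NovaNfsEnabled: true
      NovaNfsShare: '192.168.122.1:/export/nova'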
Change-Id: Id0cfc945814e6aa5a5c85643514cf206f42e50f4
Implements: bp tripleo-nova-nfs
Updates the format of the CephX keys caps to a new one which
does not need backward compatibility handling in ceph-ansible.
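A sketch of the newer caps layout for a key (client name, caps strings
and mode are illustrative only):

    keys:
      - name: client.openstack
        caps:
          mon: 'profile rbd'
          osd: 'profile rbd pool=vms, profile rbd pool=volumes'
        mode: '0600'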
Change-Id: Icd36ac32ec0ed708e66fe638bcbf54cee2d1ae69
By setting the value of rule_name explicitly, we prevent backward
incompatibility issues, because the default which ceph-ansible uses
might fit a particular version of Ceph but not all of them.
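For example, a pool definition ends up carrying an explicit rule_name
instead of relying on the ceph-ansible default (pool name, pg_num and
rule name below are illustrative):

    parameter_defaults:
      CephPools:
        - name: volumes
          pg_num: 32
          rule_name: replicated_rule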
Change-Id: I275c1ca53ea79eea607cbbb58aa21cae6d6be80b
Closes-Bug: 1776252
We should re-run host_prep_tasks as part of the minor update, to make
sure the host is ready for starting the updated containers. The right
place for them is between update tasks and deployment tasks.
This is important in case we deliver changes to host_prep_tasks during
minor update, or if update_tasks do something that would partially
undo the host preparation, e.g. clear/delete some directories on the
host to get rid of previous state.
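A rough sketch of the resulting ordering in the generated per-role
update playbook (file names and role name are assumptions, not the
generated output):

    - hosts: Controller
      tasks:
        - import_tasks: update_steps_tasks.yaml   # update_tasks, per step
        - import_tasks: host_prep_tasks.yaml      # re-run host preparation
        - import_tasks: deploy_steps_tasks.yaml   # deployment tasks, per step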
Change-Id: Ic0a905a8c4691cbe75113131bd84e8a39dea046d
Related-Bug: #1776206
When a role is defined but has a host count of 0, the Ansible
tasks that generate the OpenShift inventory for the service would fail
with an undefined variable error.
Setting the value for non-existent groups to an empty array should get
us past the error.
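A minimal sketch of the guard (group and fact names are made up for
illustration):

    - name: Collect hosts for a group that may have zero members
      set_fact:
        openshift_master_hosts: "{{ groups['openshift_master'] | default([]) }}"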
Change-Id: Ib42708c095d28827f5decdb885ceb4f2a67b3a8b
Also remove OS::Heat::None mappings for resources that are not part of
the deployed roles.
Depends-On: I85c4390519ace0149895285225f5a4ece453f1f8
Change-Id: I55e8b25a4fb0b4839be5d741acdceec5dad903ad
The grep regexp can match several lines if the haproxy pattern
is present.
By matching only the name preceded by a whitespace, it will match
only the haproxy-bundle container listed by docker ps:
[...] Up 17 hours neutron-haproxy-qrouter
[...] Up 20 hours haproxy-bundle-docker-
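Illustrative only (task name and surrounding context are assumptions),
the stricter match looks like:

    - name: Check whether the haproxy bundle container is running
      shell: docker ps | grep -q ' haproxy-bundle'
      register: haproxy_bundle_check
      failed_when: false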
Change-Id: Id63991e862ab10170c8afbde7a11677cc3d2e2f6
In the same spirit as change I1f07272499b419079466cf9f395fb04a082099bd
we want to rerun all pacemaker _init_bundles all the time, for a few
main reasons:
1) We will eventually support scaling-up roles that contain
pacemaker-managed services and we need to rerun _init_bundles so that
pacemaker properties are created for the newly added nodes.
2) When you replace a controller the pacemaker properties will be
recreated for the newly added node.
3) We need to create appropriate iptables rules whenever we add a
service to an existing deployment.
We do this by adding the DeployIdentifier to the environment so that
paunch will retrigger a run at every redeploy.
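The mechanism is roughly the following addition to the _init_bundles
container definition (a sketch, not the exact hunk from this patch):

    environment:
      # Forces paunch to see a changed container definition on every
      # (re)deploy, so the init bundle is run again.
      - list_join:
          - ''
          - - 'TRIPLEO_DEPLOY_IDENTIFIER='
            - {get_param: DeployIdentifier}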
Partial-Bug: #1775196
Change-Id: Ifd48d74507609fc7f4abc269b61b2868bfbc9272
OpenDaylight creates multiple files the first time it boots, which we do
not mount to the host. After the first boot, it creates a cache which we
do mount to the host. This means that on a config change or
update/upgrade of ODL the cache will not be removed, but the files will
be. This causes ODL to fail to start.
The solution is to stop the container in update/upgrade and then remove
the cache before the update happens. This will trigger the new ODL to
rebuild the cache with the new ODL version. For config change, we also
need to remove the cache in the host_prep_tasks so that we do not end up
in a similar state.
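A sketch of the cache removal (the host path is an assumption; per the
description above, the update/upgrade tasks also stop the container
before removing the cache):

    host_prep_tasks:
      - name: Remove the ODL cache so it is rebuilt on the next start
        file:
          path: /var/lib/opendaylight/snapshots
          state: absent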
Closes-Bug: 1775919
Change-Id: Ia457b90b765617822e9adbf07485c9ea1fe179e5
Signed-off-by: Tim Rozet <trozet@redhat.com>
During the containerization work we regressed on the restart of
pacemaker resources when a config change for the service was detected.
In baremetal we used to do the following:
1) If a puppet config change was detected we'd touch a file with the
service name under /var/lib/tripleo/pacemaker-restarts/<service>
2) A post deployment bash script (extraconfig/tasks/pacemaker_resource_restart.sh)
would test for the service file's existence and restart the pcs service via
'pcs resource restart --wait=600 service' on the bootstrap node.
With this patchset we make use of paunch's ability to detect if a config
hash change happened in order to respawn a temporary container (called
<service>_restart_bundle) which will simply always restart the pacemaker
service from the bootstrap node whenever invoked, but only if the pcmk
resource already exists. For this reason we add config_volume and bind
mount it inside the container, so that the TRIPLEO_CONFIG_HASH env
variable gets generated for these *_restart_bundle containers.
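As a rough sketch only (container keys, paths and command are
assumptions, not the exact definition from this patch), such a restart
bundle looks like:

    haproxy_restart_bundle:
      start_order: 2
      detach: false
      net: host
      # bind mount the service's config_volume so a TRIPLEO_CONFIG_HASH
      # environment variable is computed for this container too
      volumes:
        - /var/lib/config-data/puppet-generated/haproxy:/var/lib/kolla/config_files/src:ro
      # the real command first checks that the pcmk resource exists
      command: pcs resource restart haproxy-bundle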
We tested this change as follows:
A) Deployed an HA overcloud with this change and observed that pcmk resources
were not restarted needlessly during initial deploy
B) Reran the exact same overcloud deploy with no changes, observed that
no spurious restarts would take place
C) Added an env file to trigger a config change of haproxy [1], redeployed and observed that it restarted
haproxy only:
Jun 06 16:22:37 overcloud-controller-0 dockerd-current[15272]: haproxy-bundle restart invoked
D) Added a trigger [2] for mysql config change, redeployed and observed restart:
Jun 06 16:40:52 overcloud-controller-0 dockerd-current[15272]: galera-bundle restart invoked
E) Added a trigger [3] for a rabbitmq config change, redeployed and observed restart:
Jun 06 17:03:41 overcloud-controller-0 dockerd-current[15272]: rabbitmq-bundle restart invoked
F) Added a trigger [4] for a redis config change, redeployed and observed restart:
Jun 07 08:42:54 overcloud-controller-0 dockerd-current[15272]: redis-bundle restart invoked
G) Reran a deploy with no changes and observed that no spurious restarts
were triggered
[1] haproxy config change trigger:
    parameter_defaults:
      ExtraConfig:
        tripleo::haproxy::haproxy_globals_override:
          'maxconn': 1111
[2] mysql config change trigger:
    parameter_defaults:
      ExtraConfig:
        mysql_max_connections: 1111
[3] rabbitmq config change trigger (default partition handling is 'ignore'):
    parameter_defaults:
      ExtraConfig:
        rabbitmq_config_variables:
          cluster_partition_handling: 'pause_minority'
          queue_master_locator: '<<"min-masters">>'
          loopback_users: '[]'
[4] redis config change trigger:
    parameter_defaults:
      ExtraConfig:
        redis::tcp_backlog: 666
        redis::params::tcp_backlog: 666
Change-Id: I62870c055097569ceab2ff67cf0fe63122277c5b
Co-Authored-By: Damien Ciabrini <dciabrin@redhat.com>
Closes-Bug: #1775196
To avoid redefining the variable multiple times in each service, we
run the check only once and set a fact. To increase the readability of
the generated playbook, we add a block per step in the services.
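A minimal sketch of the pattern (variable, condition and task content
are assumptions):

    - name: Run the check once and cache the result
      set_fact:
        service_check_passed: "{{ some_check_result | default(true) }}"

    - name: Step 2 tasks for the service, grouped in one block
      when: step|int == 2
      block:
        - name: Example task gated on the cached fact
          debug:
            msg: running step 2 service tasks
          when: service_check_passed | bool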
Change-Id: I2399a72709d240f84e3463c5c3b56942462d1e5c
There was a typo in the update_tasks for Manila which was causing
updates and upgrades to fail. This patch fixes the typo.
Closes-Bug: 1775667
Change-Id: I88dd16fa94111a4eb56aeaa32b560cf7d12b9f82