This reverts commit 9cae59be51e8d2d798830042a5fd448a4aa5e7dc.
Reason for revert: This patch was found to introduce issues with fluentd customisation. The underlying issue is not currently fully understood, but could be a sign of other obscure issues.
Change-Id: Ia4859c23d85699621a3b734d6cedb70225576dfc
Closes-Bug: #1906288
CentOS 8 should work fine without the workaround.
This change adds the missing CentOS 8 IPv6 CI job as well.
Change-Id: I58af7a09b5ae09a10b9efc33c1f30c2efc6613f7
Main plays are now action-redirect stubs, which are ideal for import_tasks.
This avoids the 'include' penalty and makes logs/ARA output look nicer.
Also fixes haproxy and rabbitmq so that they no longer check the host group.
Change-Id: I46136fc40b815e341befff80b54a91ef431eabc0
Partially-Implements: blueprint performance-improvements
Config plays do not need to check containers. This avoids skipping
tasks during the genconfig action.
Ironic and Glance rolling upgrades are handled specially.
Swift and Bifrost do not use the handlers at all.
Partially-Implements: blueprint performance-improvements
Change-Id: I140bf71d62e8f0932c96270d1f08940a5ba4542a
Since change [1] merged, we have two mariadb images (mariadb and mariadb-server).
Let's use mariadb-server in kolla-ansible so that we can deprecate the mariadb image.
[1]: https://review.opendev.org/#/c/710217/
Change-Id: I4ae2ccaaba8fb516f469f4ce8628e8c61de03f0d
Including tasks has a performance penalty when compared with importing
tasks. If the include has a condition associated with it, then the
overhead of the include may be lower than the overhead of skipping all
imported tasks. For unconditionally included tasks, switching to
import_tasks provides a clear benefit.
Benchmarking of include vs. import is available at [1].
This change switches from include_tasks to import_tasks where there is
no condition applied to the include.
[1] https://github.com/stackhpc/ansible-scaling/blob/master/doc/include-and-import.md#task-include-and-import
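As a rough illustration of the change (the task file name here is hypothetical):

    # before: dynamic include, evaluated per host at run time
    - include_tasks: deploy.yml

    # after: static import, resolved once at playbook parse time
    - import_tasks: deploy.yml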
Partially-Implements: blueprint performance-improvements
Change-Id: Ia45af4a198e422773d9f009c7f7b2e32ce9e3b97
Previously we mounted /etc/timezone if the kolla_base_distro is debian
or ubuntu. This would fail prechecks if debian or ubuntu images were
deployed on CentOS. While this is not a supported combination, for
correctness we should fix the condition to reference the host OS rather
than the container OS, since that is where the /etc/timezone file is
located.
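A minimal sketch of the corrected condition, keyed off the host's OS family
fact rather than kolla_base_distro (the exact task and variable names in the
role may differ):

    - name: Checking /etc/timezone exists on the host
      stat:
        path: /etc/timezone
      register: etc_timezone
      # host OS (Ansible fact), not the container base distro
      when: ansible_os_family == 'Debian'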
Change-Id: Ifc252ae793e6974356fcdca810b373f362d24ba5
Closes-Bug: #1882553
Including tasks has a performance penalty when compared with importing
tasks. If the include has a condition associated with it, then the
overhead of the include may be lower than the overhead of skipping all
imported tasks. In the case of the check-containers.yml include, the
included file only has a single task, so the overhead of skipping this
task will not be greater than the overhead of the task import. It
therefore makes sense to switch to use import_tasks there.
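In other words, roughly (the condition shown is illustrative):

    # before: a dynamic include evaluated on every host and pass
    - include_tasks: check-containers.yml
      when: kolla_action != 'config'

    # after: a static import; the condition is applied to the single
    # imported task, so at most one task is skipped per host
    - import_tasks: check-containers.yml
      when: kolla_action != 'config'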
Partially-Implements: blueprint performance-improvements
Change-Id: I65d911670649960708b9f6a4c110d1a7df1ad8f7
The common role was previously added as a dependency to all other roles.
It would set a fact after running on a host to avoid running twice. This
had the nice effect that deploying any service would automatically pull
in the common services for that host. When using tags, any services with
matching tags would also run the common role. This could be both
surprising and sometimes useful.
When using Ansible at large scale, there is a penalty associated with
executing a task against a large number of hosts, even if it is skipped.
The common role introduces some overhead, just in determining that it
has already run.
This change extracts the common role into a separate play, and removes
the dependency on it from all other roles. New groups have been added
for cron, fluentd, and kolla-toolbox, similar to other services. This
changes the behaviour in the following ways:
* The common role is now run for all hosts at the beginning, rather than
prior to their first enabled service
* Hosts must be in the necessary group for each of the common services
in order to have that service deployed. This is mostly to avoid
deploying on localhost or the deployment host
* If tags are specified for another service e.g. nova, the common role
will *not* automatically run for matching hosts. The common tag must
be specified explicitly
The last of these is probably the largest behaviour change. While it
would be possible to determine which hosts should automatically run the
common role, it would be quite complex, and would introduce some
overhead that would probably negate the benefit of splitting out the
common role.
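The resulting layout in the top-level playbook looks roughly like this
(a sketch of the pattern, not an exact copy of site.yml):

    - name: Apply role common
      hosts:
        - cron
        - fluentd
        - kolla-toolbox
      tags:
        - common
      roles:
        - role: common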
Partially-Implements: blueprint performance-improvements
Change-Id: I6a4676bf6efeebc61383ec7a406db07c7a868b2a
Some services look for /etc/timezone on Debian/Ubuntu, so we should
introduce it to the containers.
In addition, added prechecks for /etc/localtime and /etc/timezone.
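The introduction amounts to bind-mounting the host's files read-only into the
containers, roughly (the exact default volume lists differ per service):

    volumes:
      - "/etc/localtime:/etc/localtime:ro"
      - "/etc/timezone:/etc/timezone:ro"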
Closes-Bug: #1821592
Change-Id: I9fef14643d1bcc7eee9547eb87fa1fb436d8a6b3
The mariadb container name is hard-coded in some places, even though the
defaults directory defines it via the container_name variable. If the
mariadb container_name variable is changed at deployment time, those
places keep using the fixed 'mariadb' name instead of the variable.
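Illustration of the intended pattern (the variable layout follows the role's
usual conventions and may differ in detail):

    # defaults/main.yml declares the name as a variable
    mariadb_services:
      mariadb:
        container_name: mariadb

    # tasks should reference that variable rather than the literal 'mariadb'
    - name: Gather MariaDB container facts
      kolla_container_facts:
        name:
          - "{{ mariadb_services.mariadb.container_name }}"
      register: container_facts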
Change-Id: Ie8efa509953d5efa5c3073c9b550be051a7f4f9b
Both include_role and import_role expect role's name to be given
via "name" param instead of "role".
This worked but caused errors with ansible-lint.
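I.e., roughly (the role name is illustrative):

    # flagged by ansible-lint
    - include_role:
        role: module-load

    # expected form
    - include_role:
        name: module-load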
See: https://review.opendev.org/694779
Change-Id: I388d4ae27111e430d38df1abcb6c6127d90a06e0
We assume that all groups are present in the inventory, and quite obscure
errors can result if any are not.
This change adds a precheck that checks for the presence of all expected
groups in the inventory for each service. It also introduces a common
service-precheck role that we can use for other common prechecks.
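A sketch of the kind of check the precheck role can perform (group names and
message wording are illustrative):

    - name: Fail if an expected service group is missing from the inventory
      fail:
        msg: "Group '{{ item }}' not found in the inventory"
      when: item not in groups
      loop:
        - mariadb
        - rabbitmq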
Change-Id: Ia0af1e7df4fff7f07cd6530e5b017db8fba530b3
Partially-Implements: blueprint improve-prechecks
This fixes issues reported by Mark:
- possible failure with 4-node cluster (however unlikely)
- failure to stop all nodes from progressing when conditions are
not valid (due to: "any_errors_fatal: False")
Change-Id: Ib6995bf4c99202c9813859b3d9e2f420448f0445
These issues affected deploy (and reconfigure) as well as upgrade,
resulting in WSREP issues, failed deploys, or the need to recover
the cluster.
This patch makes sure k-a does not abruptly terminate nodes and
break the cluster.
This is achieved by cleaner separation between stages
(bootstrap, restart current, deploy new) and 3 phases
for restarts (to keep the quorum).
Upgrade actions, which operate on a healthy cluster,
have moved to their own section.
Service restart was refactored.
We no longer rely on the master/slave distinction as
all nodes are masters in Galera.
Closes-bug: #1857908
Closes-bug: #1859145
Change-Id: I83600c69141714fc412df0976f49019a857655f5
For the CentOS 7 to 8 transition, we will have a period where both
CentOS 7 and 8 images are available. We differentiate these images via a
tag - the CentOS 8 images will have a tag of train-centos8 (or
master-centos8 temporarily).
To achieve this, and maintain backwards compatibility for the
openstack_release variable, we introduce a new 'openstack_tag' variable.
This variable is based on openstack_release, but has a suffix of
'openstack_tag_suffix', which is empty except on CentOS 8 where it has a
value of '-centos8'.
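In terms of variables, this amounts to roughly (defaults shown are
illustrative):

    openstack_tag_suffix: ""    # set to '-centos8' on CentOS 8
    openstack_tag: "{{ openstack_release }}{{ openstack_tag_suffix }}"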
Change-Id: I12ce4661afb3c255136cdc1aabe7cbd25560d625
Partially-Implements: blueprint centos-rhel-8
As part of the effort to implement Ansible code linting in CI
(using ansible-lint) - we need to implement recommendations from
ansible-lint output [1].
One of them is to stop using local_action in favor of delegate_to -
to increase readability and match the style of typical Ansible
tasks.
[1]: https://review.opendev.org/694779/
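For example (the module and its arguments are illustrative):

    # before
    - name: Write a file on the deployment host
      local_action:
        module: copy
        content: "example"
        dest: /tmp/example

    # after
    - name: Write a file on the deployment host
      copy:
        content: "example"
        dest: /tmp/example
      delegate_to: localhost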
Partially implements: blueprint ansible-lint
Change-Id: I46c259ddad5a6aaf9c7301e6c44cd8a1d5c457d3
After performing a recovery of MariaDB, the mariadb containers are left
without a restart policy. This leaves them unable to recover from the
crash of a single galera node. There is another issue, in that the
'master' node is left in a bootstrap configuration, with the
--wsrep-new-cluster argument configured as BOOTSTRAP_ARGS.
This change fixes these issues by removing the restart policy of 'no'
from the 'slave' containers, and recreating the master container without
the restart policy or bootstrap arguments.
Change-Id: I36c875611931163ca2c29ae93b71d3af64cb197c
Closes-Bug: #1851594
We use the wsrep_notify.sh script to notify changes in Galera cluster
membership to haproxy. When xtrabackup was used for the state transfer,
nodes in the Donor state would be included in the backend pool. However,
since the switch to mariabackup in the Stein cycle, we now remove nodes
in the Donor state from the backend pool.
This change ensures that nodes in the Donor state are included in the
backend pool when the SST method is either xtrabackup or mariabackup.
https://galeracluster.com/library/documentation/mysql-wsrep-options.html#wsrep-notify-cmd
Change-Id: Ide4301779a0d221ae5d4dbdd4873fb8a40eb7297
Co-authored-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Closes-Bug: #1850945
Introduce kolla_address filter.
Introduce put_address_in_context filter.
Add AF config to vars.
Address contexts:
- raw (default): <ADDR>
- memcache: inet6:[<ADDR>]
- url: [<ADDR>]
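Typical usage looks roughly like this (the variable names are illustrative
and the exact filter signatures should be treated as a sketch):

    # resolve this host's address on the 'api' network, IPv4 or IPv6
    api_interface_address: "{{ 'api' | kolla_address }}"

    # wrap an address for use in a URL: IPv6 gets brackets, IPv4 is unchanged
    rabbitmq_host_url: "{{ api_interface_address | put_address_in_context('url') }}"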
Other changes:
globals.yml - mention just IP in comment
prechecks/port_checks (api_intf) - kolla_address handles validation
3x interface conditional (swift configs: replication/storage)
2x interface variable definition with hostname
(haproxy listens; api intf)
1x interface variable definition with hostname with bifrost exclusion
(baremetal pre-install /etc/hosts; api intf)
neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on tunnel network
basic multinode source CI job for IPv6
prechecks for rabbitmq and qdrouterd use proper NSS database now
MariaDB Galera Cluster WSREP SST mariabackup workaround
(socat and IPv6)
Ceph naming workaround in CI
TODO: probably needs documenting
RabbitMQ IPv6-only proto_dist
Ceph ms switch to IPv6 mode
Remove neutron-server ml2_type_vxlan/vxlan_group setting
as it is not used (let's avoid any confusion)
and could break setups without proper multicast routing
if it started working (also IPv4-only)
haproxy upgrade checks for slaves based on ipv6 addresses
TODO:
ovs-dpdk grabs ipv4 network address (w/ prefix len / submask)
not supported, invalid by default because neutron_external has no address
No idea whether ovs-dpdk works at all atm.
ml2 for xenapi
Xen is not supported too well.
This would require working with XenAPI facts.
rp_filter setting
This would require meddling with ip6tables (there is no sysctl param).
By default nothing is dropped.
Unlikely we really need it.
ironic dnsmasq is configured IPv4-only
dnsmasq needs DHCPv6 options and testing in vivo.
KNOWN ISSUES (beyond us):
One cannot use IPv6 address to reference the image for docker like we
currently do, see: https://github.com/moby/moby/issues/39033
(docker_registry; docker API 400 - invalid reference format)
workaround: use hostname/FQDN
RabbitMQ may fail to bind to IPv6 if hostname resolves also to IPv4.
This is due to old RabbitMQ versions available in images.
IPv4 is preferred by default and may fail in the IPv6-only scenario.
This should be no problem in real life as IPv6-only is indeed IPv6-only.
Also, when new RabbitMQ (3.7.16/3.8+) makes it into images, this will
no longer be relevant as we supply all the necessary config.
See: https://github.com/rabbitmq/rabbitmq-server/pull/1982
For reliable runs, at least Ansible 2.8 is required (2.8.5 confirmed
to work well). Older Ansible versions are known to miss IPv6 addresses
in interface facts. This may affect redeploys, reconfigures and
upgrades which run after VIP address is assigned.
See: https://github.com/ansible/ansible/issues/63227
Bifrost Train does not support IPv6 deployments.
See: https://storyboard.openstack.org/#!/story/2006689
Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
Implements: blueprint ipv6-control-plane
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Sometimes, as cloud admins, we want to only update the code that is running
in a cloud, without doing anything else. Add an action to
kolla-ansible that allows us to do that.
Change-Id: I904f595c69f7276e71692696471e32fd1f88e6e8
Implements: blueprint deploy-containers-action
This allows the install type for the project to be different from
kolla_install_type.
This can be used to avoid hitting bug 1786238, since kuryr only supports
the source type.
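A minimal sketch of the override in globals.yml, assuming a per-project
variable such as kuryr_install_type that defaults to kolla_install_type:

    kolla_install_type: "binary"
    kuryr_install_type: "source"    # kuryr only supports source images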
Change-Id: I2b6fc85bac092b1614bccfd22bee48442c55dda4
Closes-Bug: #1786238
The MariaDB role HAProxy config section exposes MariaDB on the
mariadb_port which may not always be the same as database_port. The
HAProxy role checks that the database_port is free, and not the
mariadb_port. This could mean that the check passes, but the actual
port which HAProxy will attempt to use is taken.
This change configures HAProxy to talk to the MariaDB instances on
the mariadb_port, and maps them to the database_port which is used by
most services as part of the DB connection string.
There is a small risk that it may break someone's override config.
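For reference, the relationship between the two variables, roughly (defaults
shown are illustrative): HAProxy now listens on database_port and forwards to
the MariaDB backends on mariadb_port, while the prechecks verify that
database_port is free.

    database_port: 3306                    # used in DB connection strings (HAProxy frontend)
    mariadb_port: "{{ database_port }}"    # what MariaDB itself listens on (HAProxy backend)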
Change-Id: I9507ee709cb21eb743112107770ed3170c61ef74
Explicitly wait for the database to be accessible via the load balancer.
Sometimes it can reject connections even when all database services are up,
possibly due to the health check polling in HAProxy.
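A hedged sketch of the kind of wait that was added (the client command, user
and retry values are illustrative):

    - name: Wait for MariaDB to be available via the load balancer
      command: >
        mysql -h {{ database_address }} -P {{ database_port }}
        -u haproxy -e 'SHOW DATABASES;'
      register: result
      until: result is succeeded
      retries: 10
      delay: 6
      changed_when: false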
Closes-Bug: #1840145
Change-Id: I7601bb710097a78f6b29bc4018c71f2c6283eef2
Docker has no restart policy named 'never'. It has 'no'.
This has bitten us already (see [1]) and might bite us again whenever
we want to change the restart policy to 'no'.
This patch makes our docker integration honor all valid restart policies
and only valid restart policies.
All relevant docker restart policy usages are patched as well.
I added some FIXMEs that are relevant to the kolla-ansible docker
integration. They are not fixed here so as not to alter behavior.
[1] https://review.opendev.org/667363
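One YAML detail worth keeping in mind when using the 'no' policy: a bare no
parses as boolean false, so it has to be quoted. Task fields other than
restart_policy below are illustrative:

    - name: Start container without automatic restarts
      become: true
      kolla_docker:
        action: start_container
        name: mariadb
        image: "{{ mariadb_image_full }}"
        restart_policy: "no"    # quoted - unquoted no is YAML for boolean false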
Change-Id: I1c9764fb9bbda08a71186091aced67433ad4e3d6
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
* Fix wsrep sequence number detection. Log message format is
'WSREP: Recovered position: <UUID>:<seqno>' but we were picking out
the UUID rather than the sequence number. This is as good as random.
A sketch of the corrected extraction follows this list.
* Add become: true to log file reading and removal since
I4a5ebcedaccb9261dbc958ec67e8077d7980e496 added become: true to the
'docker cp' command which creates it.
* Don't run handlers during recovery. If the config files change we
would end up restarting the cluster twice.
* Wait for wsrep recovery container completion (don't detach). This
avoids a potential race between wsrep recovery and the subsequent
'stop_container'.
* Finally, we now wait for the bootstrap host to report that it is in
an OPERATIONAL state. Without this we can see errors where the
MariaDB cluster is not ready when used by other services.
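For the first point, the fix boils down to extracting the value after the
final colon; a minimal sketch (the regex and fact names are illustrative, and
wsrep_log stands in for the registered log output):

    - name: Extract recovered wsrep sequence number
      set_fact:
        # 'WSREP: Recovered position: <UUID>:<seqno>' -> keep the trailing
        # (possibly negative) number rather than the UUID
        wsrep_seqno: "{{ wsrep_log.stdout | regex_search('Recovered position:\\s*\\S+:(-?\\d+)', '\\1') | first }}"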
Change-Id: Iaf7862be1affab390f811fc485fd0eb6879fd583
Closes-Bug: #1834467
Patch [1] did not add extra volumes support for all services.
In order to unify volume management, we need to add extra volumes
support for these services as well.
[1] 12ff28a693
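The pattern being applied, roughly (the 'foo' service name and its variables
are placeholders):

    # roles/foo/defaults/main.yml
    foo_default_volumes:
      - "{{ node_config_directory }}/foo/:{{ container_config_directory }}/:ro"
      - "/etc/localtime:/etc/localtime:ro"
    foo_extra_volumes: "{{ default_extra_volumes }}"

    # in the service definition
    volumes: "{{ foo_default_volumes + foo_extra_volumes }}"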
Change-Id: Ie148accdd8e6c60df6b521d55bda12b850c0d255
Partially-Implements: blueprint support-extra-volumes
Signed-off-by: ZijianGuo <guozijn@gmail.com>
Many tasks that use Docker have 'become' specified already, but
not all. This change ensures all tasks that use the following
modules have 'become':
* kolla_docker
* kolla_ceph_keyring
* kolla_toolbox
* kolla_container_facts
It also adds become for 'command' tasks that use docker CLI.
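For example (the container details are illustrative):

    - name: Start a service container
      become: true
      kolla_docker:
        action: start_container
        name: kolla_toolbox
        image: "{{ kolla_toolbox_image_full }}"

    - name: Copy a file out of a container using the docker CLI
      become: true
      command: docker cp kolla_toolbox:/var/log/example.log /tmp/example.log
      changed_when: false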
Change-Id: I4a5ebcedaccb9261dbc958ec67e8077d7980e496
Since Ansible 2.5, the use of jinja tests as filters has been
deprecated.
I've run the script provided by the ansible team to 'fix' the
jinja filters to conform to the newer syntax.
This fixes the deprecation warnings.
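For instance, conditions like the following were rewritten (the registered
variable name is illustrative):

    # before: jinja test used through the filter syntax (deprecated since Ansible 2.5)
    when: result|failed

    # after: test syntax
    when: result is failed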
Change-Id: I844ecb7bec94e561afb09580f58b1bf83a6d00bd
Closes-bug: #1827370
Since we are now in the Train cycle, we can be sure that any running
MariaDB containers can be safely stopped, and we do not need to perform
an explicit shutdown prior to restarting them.
Change-Id: I5450690f1cbe0c995e8e4b01a76e90dac2574d61
Related-Bug: #1820325