Both include_role and import_role expect the role's name to be given
via the "name" param instead of "role".
Using "role" worked, but caused errors with ansible-lint.
See: https://review.opendev.org/694779
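For illustration, a minimal sketch of the corrected form (the role
name "common" is just an example):

    - name: Include the common role
      include_role:
        name: common  # previously passed as "role: common"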
Change-Id: I388d4ae27111e430d38df1abcb6c6127d90a06e0
We assume that all groups are present in the inventory, and quite obtuse
errors can result if any are not.
This change adds a precheck that checks for the presence of all expected
groups in the inventory for each service. It also introduces a common
service-precheck role that we can use for other common prechecks.
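A minimal sketch of what such a group precheck looks like (the
variable and message below are illustrative, not the actual role
contents):

    - name: Fail if a service group is missing from the inventory
      fail:
        msg: "Group '{{ item }}' not found in inventory"
      when: item not in groups
      with_items: "{{ service_groups }}"  # hypothetical list of groups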
Change-Id: Ia0af1e7df4fff7f07cd6530e5b017db8fba530b3
Partially-Implements: blueprint improve-prechecks
Make it require uniqueness of resolution as well, to avoid later
issues with RabbitMQ misbehaving.
Change-Id: I000ba6c62ab44eac0abdf8d5d1f069adfbc6552f
Closes-bug: #1863363
Introduce kolla_address filter.
Introduce put_address_in_context filter.
Add AF config to vars.
Address contexts:
- raw (default): <ADDR>
- memcache: inet6:[<ADDR>]
- url: [<ADDR>]
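A sketch of the intended template usage, assuming an IPv6 address
such as fd00::1 (the variable name is illustrative):

    {{ api_addr | put_address_in_context('url') }}       {# -> [fd00::1] #}
    {{ api_addr | put_address_in_context('memcache') }}  {# -> inet6:[fd00::1] #}
    {{ api_addr }}                                       {# -> fd00::1 (raw) #}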
Other changes:
- globals.yml - mention just IP in comment
- prechecks/port_checks (api_intf) - kolla_address handles validation
- 3x interface conditional (swift configs: replication/storage)
- 2x interface variable definition with hostname
  (haproxy listens; api intf)
- 1x interface variable definition with hostname with bifrost exclusion
  (baremetal pre-install /etc/hosts; api intf)
- neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on tunnel network
- basic multinode source CI job for IPv6
- prechecks for rabbitmq and qdrouterd use proper NSS database now
- MariaDB Galera Cluster WSREP SST mariabackup workaround
  (socat and IPv6)
- Ceph naming workaround in CI
  TODO: probably needs documenting
- RabbitMQ IPv6-only proto_dist
- Ceph ms switch to IPv6 mode
- Remove neutron-server ml2_type_vxlan/vxlan_group setting
  as it is not used (let's avoid any confusion)
  and could break setups without proper multicast routing
  if it started working (also IPv4-only)
- haproxy upgrade checks for slaves based on IPv6 addresses
TODO:
- ovs-dpdk grabs the IPv4 network address (w/ prefix len / netmask);
  not supported, invalid by default because neutron_external has no
  address. No idea whether ovs-dpdk works at all atm.
- ml2 for xenapi: Xen is not supported too well.
  This would require working with XenAPI facts.
- rp_filter setting: this would require meddling with ip6tables
  (there is no sysctl param). By default nothing is dropped.
  Unlikely we really need it.
- ironic dnsmasq is configured IPv4-only; dnsmasq needs DHCPv6
  options and testing in vivo.
KNOWN ISSUES (beyond us):
- One cannot use an IPv6 address to reference the image for docker
  like we currently do, see: https://github.com/moby/moby/issues/39033
  (docker_registry; docker API 400 - invalid reference format)
  workaround: use hostname/FQDN
- RabbitMQ may fail to bind to IPv6 if the hostname resolves also to
  IPv4. This is due to old RabbitMQ versions available in images.
  IPv4 is preferred by default and may fail in the IPv6-only scenario.
  This should be no problem in real life as IPv6-only is indeed
  IPv6-only. Also, when new RabbitMQ (3.7.16/3.8+) makes it into
  images, this will no longer be relevant as we supply all the
  necessary config.
  See: https://github.com/rabbitmq/rabbitmq-server/pull/1982
For reliable runs, at least Ansible 2.8 is required (2.8.5 confirmed
to work well). Older Ansible versions are known to miss IPv6 addresses
in interface facts. This may affect redeploys, reconfigures and
upgrades which run after VIP address is assigned.
See: https://github.com/ansible/ansible/issues/63227
Bifrost Train does not support IPv6 deployments.
See: https://storyboard.openstack.org/#!/story/2006689
Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
Implements: blueprint ipv6-control-plane
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Sometimes, as cloud admins, we want to only update the code that is
running in a cloud, without doing anything else. Add an action to
kolla-ansible that allows us to do that.
Change-Id: I904f595c69f7276e71692696471e32fd1f88e6e8
Implements: blueprint deploy-containers-action
Docker has no restart policy named 'never'. It has 'no'.
This has bitten us already (see [1]) and might bite us again whenever
we want to change the restart policy to 'no'.
This patch makes our docker integration honor all valid restart policies
and only valid restart policies.
All relevant docker restart policy usages are patched as well.
I added some FIXMEs in places relevant to kolla-ansible's docker
integration. They are not fixed here so as not to alter behavior.
[1] https://review.opendev.org/667363
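As an illustration of the YAML pitfall involved (container and image
names below are hypothetical), the policy must be quoted because a
bare no parses as the YAML boolean false:

    - name: Start a one-off container with restarts disabled
      kolla_docker:
        action: "start_container"
        name: "bootstrap_example"      # hypothetical container
        image: "example/image:latest"  # hypothetical image
        restart_policy: "no"           # quoted: bare no is a YAML boolean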
Change-Id: I1c9764fb9bbda08a71186091aced67433ad4e3d6
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Many tasks that use Docker already have become specified, but
not all. This change ensures that all tasks using the following
modules have become:
* kolla_docker
* kolla_ceph_keyring
* kolla_toolbox
* kolla_container_facts
It also adds become for 'command' tasks that use docker CLI.
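A minimal sketch of the pattern (the container name and module
arguments are illustrative):

    - name: Gather facts about the rabbitmq container
      become: true
      kolla_container_facts:
        name:
          - rabbitmq
      register: container_facts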
Change-Id: I4a5ebcedaccb9261dbc958ec67e8077d7980e496
Since Ansible 2.5, the use of jinja tests as filters has been
deprecated.
I've run the script provided by the ansible team to 'fix' the
jinja filters to conform to the newer syntax.
This fixes the deprecation warnings.
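For illustration, this is the kind of rewrite the script performs
(the variable name is an example):

    # deprecated: jinja test used as a filter
    when: result | succeeded
    # new test syntax
    when: result is succeeded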
Change-Id: I844ecb7bec94e561afb09580f58b1bf83a6d00bd
Closes-bug: #1827370
Several config file permissions are incorrect on the host. In general,
files should be 0660, and directories and executables 0770.
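A minimal sketch of the intended modes in Ansible tasks (the paths
are illustrative):

    - name: Ensure config file permissions
      file:
        path: /etc/kolla/some-service/config.json  # illustrative path
        mode: "0660"

    - name: Ensure config directory permissions
      file:
        path: /etc/kolla/some-service  # illustrative path
        state: directory
        mode: "0770"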
Change-Id: Id276ac1864f280554e98b937f2845bb424d521de
Closes-Bug: #1821579
Since Id724b44a3edd951fa8b06c9f2c347e9ed8c5ffd9, there is a reference to a
non-existent variable, rabbitmq_confs, that causes deployment to fail if
rabbitmq configuration other than config.json is changed.
I'm taking this opportunity to simplify the role, since we can use the Ansible
handler notification system to determine when handlers need to run, without
registering and checking variables. This simpler approach was used in the
haproxy refactor.
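A minimal sketch of the handler notification pattern (file names and
paths are illustrative):

    - name: Copying over rabbitmq-env.conf
      template:
        src: rabbitmq-env.conf.j2
        dest: /etc/kolla/rabbitmq/rabbitmq-env.conf  # illustrative path
      notify:
        - Restart rabbitmq container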
Change-Id: Ibe0e7fda93afff741243ff9c350db1c8c6e1e6d3
Closes-Bug: #1816053
With this change, an operator is able to stop a service's
containers without stopping all services on a host.
This change is the starting point for supporting
fast-forward upgrades.
In later changes, new flags will be introduced to disable
stopping dataplane services during upgrades.
Change-Id: Ifde7a39d7d8596ef0d7405ecf1ac1d49a459d9ef
Implements: blueprint support-stop-containers
Having all services in one giant haproxy file makes altering
configuration for a service both painful and dangerous. Each service
should be configured with a simple set of variables and rendered with a
single unified template.
Two new templates are available:
* haproxy_single_service_listen.cfg.j2: close to the original style, but
only one service per file
* haproxy_single_service_split.cfg.j2: using the newer haproxy syntax
for separated frontend and backend
For now the default will be the single listen block, for ease of
transition.
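A hedged sketch of how a service might describe its endpoint with
variables (the structure and names here are hypothetical, not the
exact template contract):

    rabbitmq_services:
      rabbitmq:
        haproxy:
          rabbitmq_management:
            enabled: "{{ enable_rabbitmq }}"
            mode: "http"
            port: "{{ rabbitmq_management_port }}"
            host_group: "rabbitmq"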
Change-Id: I6e237438fbc0aa3c89a3c8bd706a53b74e71904b
In order to migrate to the latest release of rabbitmq (3.7), we need to
first remove this deprecated plugin which is no longer supported (the
problems it solved are now addressed in rabbitmq itself).
This avoids a circular dependency in CI where the new images depend on
the new clustering and the new clustering depends on the new images.
Change-Id: I921459f3e40b9e0d4af9497384e49aabf0abe79b
This commit is the final commit to apply resource-constraints
to all OpenStack services.
Depends-on: I39004f54281f97d53dfa4b1dbcf248650ad6f186
Change-Id: I072d69be9698be54775cb0ae286ea2b6ed78776c
Implements: blueprint resource-constraints
Add become to all tasks that use the module "kolla_docker"
Change-Id: I4309c4011687b88ec31d739fd8f834fe2326ff10
Partial-Implements: blueprint ansible-specific-task-become
Use the rabbitmq service definition from defaults when booting
rabbitmq_bootstrap.
Not a bug, just an enhancement.
Change-Id: I79f0f7efe3308ed4eb898b85a6370be1bd637d9a
- rename action and serial to kolla_ansible and kolla_serial
- use become instead of "sudo <command>" in shell
- Remove quotes from failed_when and changed_when in rabbitmq tasks
Change-Id: I78cb60168aaa40bb6439198283546b7faf33917c
Implements: blueprint migrate-to-ansible-2-2-0
This patchset applies a yamllint test to all *.yml
files.
It also fixes syntax errors to make the jobs pass.
Change-Id: I3186adf9835b4d0cada272d156b17d1bc9c2b799
Add config_owner_user and config_owner_group to group_vars/all,
which is user and group of Kolla configuration files in /etc/kolla.
Add become to post-deploy playbook.
Add become to only the necessary tasks in roles:
- certificate
- common
- destroy
- haproxy
- mariadb
- memcached
- rabbitmq
Change-Id: I2aba745a6e3928c52642f64551470fd08cbfd058
Partial-Implements: blueprint ansible-specific-task-become
Copy the patterns from the rabbit checks and skip some pre-checks when
the container has already been started. Without this change the pre-checks
fail when you re-run the deploy, i.e. the port is not free because
rabbit is already running on that port.
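A hedged sketch of the skip pattern (container name, port and
variables are illustrative):

    - name: Get container facts
      kolla_container_facts:
        name:
          - rabbitmq
      register: container_facts

    - name: Checking free port for RabbitMQ
      wait_for:
        host: "{{ api_interface_address }}"  # illustrative
        port: 5672
        connect_timeout: 1
        state: stopped
      when: container_facts['rabbitmq'] is not defined  # skip if running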
This bug was triggered because murano is enabled, and this change has
been added to add the extra rabbitmq instance by default:
d8fe3ea780c188b6e937ab6f08a8475d2330a9fa
Closes-Bug: #1715135
Change-Id: I0eb8785e7cd4eadfa792ea14a27f54a891b2bf02
Upgrade fails as outward_rabbitmq does not exist
and cannot therefore be upgraded. Omit it from the
upgrade check and bootstrap it after rabbitmq upgrade.
Remove jinja2 from 'Find gospel node' task; removes warnings.
Change-Id: I3766271c62779c8dbd31e7cf2300473815bbbe68
Certain services such as Murano and Trove require access to a rabbitmq
instance from tenant networks. [0]
Exposing the internal rabbitmq to end users is a security hole, hence
there are two options: 1) use vhosts in the existing rabbitmq, or 2) a
separate rabbitmq instance. Given the importance of rabbitmq to the
OpenStack deployment, we have decided to go with a separate instance.
Refer to [1] for more detail on the various options.
This change makes the rabbitmq role generic so that it can be reused, in
this case to start 'outward_rabbitmq'. It needs to be exposed via
haproxy both for network isolation and also because this is what Murano
configuration requires.
Follow on patches will be added to add a vhost in this outward instance
for Murano and other services which require access.
Based on the original work by bdaca[2]
[0] http://murano.readthedocs.io/en/stable-liberty/intro/architecture.html
[1] http://lists.openstack.org/pipermail/openstack-dev/2016-December/109091.html
[2] https://review.openstack.org/#/c/374525
Change-Id: Ib2bcc7ed4bf4f883a7cd1dfad3db89201e3cfd8d
Partial-Bug: #1620374
Depends-On: I020eb6219f89a310451becde41f6f1c7f54baadd
Co-Authored-By: Bartłomiej Daca <bartek.daca@gmail.com>
In Ansible 2.3.0, when statements should not include jinja2 templating
delimiters such as {{ }} or {% %}, and the gate is broken with Ansible
2.3.1. This patchset rewrites the when statement in the rabbitmq
precheck task to not use string interpolation.
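For illustration, the general shape of the rewrite (the condition is
an example, not the exact task):

    # deprecated: templated when statement
    when: "{{ rabbitmq_user is defined }}"
    # fixed: bare expression
    when: rabbitmq_user is defined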
Change-Id: Ie2f1666cc8ced7cf20ceba40c7c7aaec750778f9
Closes-Bug: #1695111
The wait_for module waits 300 seconds for the port to be started or
stopped. This is meaningless and useless in a precheck. This patch
changes the timeout to 1 second.
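A minimal sketch of the resulting check (host and port are
illustrative):

    - name: Checking the port is free
      wait_for:
        host: "{{ api_interface_address }}"  # illustrative
        port: 5672
        state: stopped
        timeout: 1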
Change-Id: I9b251ec4ba17ce446655917e8ef5e152ef947298
Closes-Bug: #1688152
Add a new subcommand 'check' to kolla-ansible, used to run the
smoke/sanity checks.
Add stub files to all services that don't currently have checks.
Change-Id: I9f661c5fc51fd5b9b266f23f6c524884613dee48
Partially-implements: blueprint sanity-check-container
do_reconfigure.yml was introduced to use the serial directive, but we
were using it wrongly. Now serial has moved to the playbook file, so
it is time to remove the do_reconfigure.yml file.
Closes-Bug: #1628152
Change-Id: I8d42d27e6bc302a0e575b0353956eaef9b2ca9fd
Useful for upgrades etc., which are preferably done serially.
Example usage: tools/kolla-ansible deploy OR tools/kolla-ansible upgrade
Closes-Bug: #1576708
DocImpact
Change-Id: I34b2e16f8ce53e472a4682a4738c4ac0f5abf00c
rabbitmq's start task contains a precheck. This should be part of the
other prechecks for consistency.
TrivialFix
Change-Id: I7728ec3f5be3248424d74a4387925b72114b8943
Trying to use ConfigMaps in Kubernetes leads to an interesting
problem. We use the file name as the key and the contents of the
file as the text value. The ConfigMap is mounted on the container
as a volume and the key is then used as the name of the file. The
problem is that kubernetes has a limitation on the name of the key
(see
https://github.com/kubernetes/kubernetes/blob/release-1.2/docs/design/identifiers.md),
which means we cannot use '_' in the name of the file.
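For illustration, a hedged sketch of the constraint (names and
contents are examples):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: rabbitmq
    data:
      rabbitmq-env.conf: |  # ok: hyphenated key
        NODENAME=rabbit@localhost
      # a key like rabbitmq_env.conf would be rejected by this k8s release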
Closes-Bug: #1581162
Change-Id: I2d9ec80f989c30893b019954fe18b3623d27a076
This fix adds a check of Rabbitmq's image version during the upgrade.
The container gets restarted only when the image version is different.
Change-Id: Ie038845c0c8fff1ac51b7cbf21e1b593229c2c0e
Closes-Bug: #1558832
On AIO installation we cannot assume that the public IP address
will be the first entry in "getent ahostsv4" result, because
it may be also a localhost address. To make this check positive
in AIO, we should look for the public IP in the whole output.
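A hedged sketch of the adjusted check (the command and variable names
are illustrative):

    - name: Resolve the hostname
      command: getent ahostsv4 {{ ansible_hostname }}
      register: hostname_result
      changed_when: false

    - name: Ensure the hostname resolves to the public IP
      fail:
        msg: "Hostname has to resolve to the public IP address"
      when: public_ip_address not in hostname_result.stdout  # search whole output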
Change-Id: I1da7b95d7f00c7f87ff68ead46bf55fdea812599
Closes-Bug: 1564564