With the new default since Wallaby, starting Docker enables IP
forwarding without filtering it at all.
This may pose a security risk and should be mitigated.
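One possible mitigation, sketched here with the ansible.builtin.iptables
module (an illustration of the idea, not necessarily the exact change
made here), is to make sure the FORWARD chain does not default to
ACCEPT:

    - name: Set the default policy of the FORWARD chain to DROP
      become: true
      ansible.builtin.iptables:
        chain: FORWARD
        policy: DROP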
Closes-Bug: #1931615
Change-Id: I5129136c066489fdfaa4d93741c22e5010b7e89d
The host list order seen by Ansible handlers may differ from the usual
play host list order, due to race conditions in notifying handlers. This
means that restart_services.yml for RabbitMQ may be included in a
different order than the rabbitmq group, resulting in a node other than
the 'first' being restarted first. This can cause some nodes to fail to
join the cluster. The include_tasks loop was introduced in [1].
This change fixes the issue by splitting the handler into two tasks, and
restarting the first node before all others.
[1] https://review.opendev.org/c/openstack/kolla-ansible/+/763137
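A rough sketch of the resulting handler layout (names and conditions
are approximations):

    - name: Restart rabbitmq container (first node)
      include_tasks: restart_services.yml
      when: inventory_hostname == groups['rabbitmq'] | first

    - name: Restart rabbitmq container (rest of the nodes)
      include_tasks: restart_services.yml
      when: inventory_hostname != groups['rabbitmq'] | first

Because handlers run in the order they are defined, the first node is
always restarted before the others.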
Change-Id: I1823301d5889589bfd48326ed7de03c6061ea5ba
Closes-Bug: #1930293
Since I0474324b60a5f792ef5210ab336639edf7a8cd9e, the swift role uses the
new service-cert-copy role introduced in
I6351147ddaff8b2ae629179a9bc3bae2ebac9519, but the swift role itself
doesn't contain the handler used by service-cert-copy. Right now,
restarting the Swift containers isn't necessary, but the handler should
exist. This change also fixes the name of the service used.
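For illustration, the missing handler can be a stub matching the name
notified by service-cert-copy (container and handler names here are
assumptions):

    - name: Restart swift-proxy-server container
      become: true
      kolla_docker:
        action: "restart_container"
        common_options: "{{ docker_common_options }}"
        name: "swift_proxy_server"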
Closes-Bug: #1931097
Change-Id: I2d0615ce6914e1f875a2647c8a95b86dd17eeb22
Signed-off-by: Maksim Malchuk <maksim.malchuk@gmail.com>
On machines with many cores, we were seeing excessive CPU load on systems
that were not very busy. With the following Erlang VM argument we saw
RabbitMQ CPU usage drop from about 150% to around 20%, on a system with
40 hyperthreads.
+S 2:2
By default RabbitMQ starts N schedulers where N is the number of CPU
cores, including hyper-threaded cores. This is fine when you assume all
your CPUs are dedicated to RabbitMQ. It's not a good idea in a typical
Kolla Ansible setup. Here we go for two scheduler threads.
More details can be found here:
https://www.rabbitmq.com/runtime.html#scheduling
and here:
https://erlang.org/doc/man/erl.html#emulator-flags
+sbwt none
This stops busy waiting of the scheduler, for more details see:
https://www.rabbitmq.com/runtime.html#busy-waiting
Newer versions of RabbitMQ may need additional flags:
"+sbwt none +sbwtdcpu none +sbwtdio none"
but this patch should be backportable to the older versions of RabbitMQ
used in Train and Stein.
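In kolla-ansible the flags can be combined into a single variable that
is passed to the Erlang VM, along these lines (the variable name is an
assumption):

    rabbitmq_server_additional_erl_args: "+S 2:2 +sbwt none"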
Note that information on this tuning was found by looking at data from:
rabbitmq-diagnostics runtime_thread_stats
More details on that can be found here:
https://www.rabbitmq.com/runtime.html#thread-stats
Related-Bug: #1846467
Change-Id: Iced014acee7e590c10848e73feca166f48b622dc
Interface names with dashes can cause problems in Ansible since dashes
are replaced with underscores when referencing facts. In the baremetal
role we reference the fact for api_interface without replacing dashes
with underscores. This may result in host entries being omitted from
/etc/hosts.
This change fixes the issue.
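The pattern of the fix, roughly (template expression simplified for
illustration):

    {{ hostvars[host]['ansible_' ~ (hostvars[host]['api_interface'] | replace('-', '_'))]['ipv4']['address'] }}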
Change-Id: I667adc7d8a7dbd20dbfa293f389e02355f8275bb
Related-Bug: #1927357
The chrony container is deprecated in Wallaby, and disabled by default.
This change allows removing the container when chrony is disabled.
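A minimal sketch of the cleanup task (the kolla_docker action name is
an assumption):

    - name: Remove chrony container
      become: true
      kolla_docker:
        action: "stop_and_remove_container"
        name: "chrony"
      when: not enable_chrony | bool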
Change-Id: I1c4436072c2d47a95625e64b731edb473384b395
This is required to support Debian Bullseye (11): nova-libvirt needs to
be set to use the 'host' CgroupnsMode.
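Conceptually this maps to the Docker HostConfig CgroupnsMode setting.
In the service definition it could look roughly like this (the
cgroupns_mode key name is an assumption):

    nova-libvirt:
      container_name: nova_libvirt
      cgroupns_mode: "host"  # run libvirt in the host cgroup namespace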
Change-Id: I40213d4092fa325bcf37bb1fb4437ab125fe328b
The mariadb image was removed in Wallaby, leading to database backup
failures.
Change-Id: I90986e7521779997df2782767bb95efcbd8ef232
Closes-Bug: #1928129
docker-ce on Debian/Ubuntu is started immediately after installation,
before the baremetal role configures daemon.json. This results in
iptables rules being created, and they are not removed when the Docker
engine restarts.
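One way to close that window, sketched below (the variable holding the
daemon configuration is hypothetical, and this is not necessarily the
approach taken by this change), is to write /etc/docker/daemon.json
before the docker-ce package is installed:

    - name: Ensure /etc/docker exists
      become: true
      file:
        path: /etc/docker
        state: directory
        mode: "0755"

    - name: Write Docker daemon.json before installing docker-ce
      become: true
      copy:
        content: "{{ docker_daemon_config | to_nice_json }}"  # hypothetical variable
        dest: /etc/docker/daemon.json
        mode: "0644"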
Closes-Bug: #1923203
Change-Id: Ib1faa092e0b8f0668d1752490a34d0c2165d58d2
This task writes the private key from the passwords file to
/etc/kolla/octavia-worker/{{ octavia_amp_ssh_key_name }} even if the
user has disabled Octavia auto-configuration.
This patch adds a conditional to the task, skipping it when
octavia_auto_configure is set to "no".
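The shape of the fix, roughly (key and variable names approximate those
used by the role):

    - name: Copy Octavia amphora SSH private key
      copy:
        content: "{{ octavia_amp_ssh_key.private_key }}"
        dest: "/etc/kolla/octavia-worker/{{ octavia_amp_ssh_key_name }}"
        mode: "0600"
      when: octavia_auto_configure | bool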
Closes-Bug: #1927727
Change-Id: Ib993b387d681921d804f654bea780a1481b2b0d0
In order for DVR to work on VLAN tenant networks we need to configure
external_ids:ovn-chassis-mac-mappings with a per-node generated MAC on
compute nodes [1].
[1]: 1fed74cfc1
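On each compute node this amounts to something like the following (the
physnet name and the variable holding the generated MAC are
placeholders):

    - name: Configure OVN chassis MAC mappings
      become: true
      command: >-
        ovs-vsctl set Open_vSwitch .
        external_ids:ovn-chassis-mac-mappings="physnet1:{{ generated_mac }}"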
Co-Authored-By: Bartosz Bezak <bartosz@stackhpc.com>
Depends-On: https://review.opendev.org/c/openstack/neutron/+/782250
Change-Id: I3a3ccde5b9ef2afb4c3e9206f13827687880cb57
In the Xena cycle it was decided to remove the Monasca
Grafana fork due to lack of maintenance. This commit removes
the service and provides a limited workaround using the
Monasca Grafana datasource with vanilla Grafana.
Depends-On: I9db7ec2df050fa20317d84f6cea40d1f5fd42e60
Change-Id: I4917ece1951084f6665722ba9a91d47764d3709a
Add a file to the reno documentation build to show release notes for
stable/wallaby.
Use the pbr Sem-Ver instruction to increment the minor version number
automatically so that versions on master are higher than the versions
on stable/wallaby.
Sem-Ver: feature
Change-Id: I34e6b2e1b9411e360994684f62414703f3bb2299
If docker_configure_for_zun is set to true, then Zun-specific
configuration for Docker is applied to all nodes. It should only be
applied based on the relevant inventory groups. In some cases this can
cause Docker to fail to start. See
https://storyboard.openstack.org/#!/story/2008544 for details.
This change applies the configuration based on the zun-compute and
zun-cni-daemon groups. It also modifies the expression to not assume
that these groups exist in the inventory.
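The condition then becomes, in rough form (expression simplified for
illustration):

    when:
      - docker_configure_for_zun | bool
      - inventory_hostname in groups.get('zun-compute', []) or inventory_hostname in groups.get('zun-cni-daemon', [])

Using groups.get() with a default avoids failures when the zun groups
are not present in the inventory.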
Change-Id: I0141abf0dd83e3a567ea6dcca945f86db129becf
Closes-Bug: #1914378
Story: 2008544
Task: 41645
Co-Authored-By: Buddhika Sanjeewa <bsanjeewa@kln.ac.lk>