If kolla-ansible is installed via 'pip install --user', the kolla-ansible
script is currently unable to locate the installed playbooks.
This leads to a failure when running commands.
This change fixes the issue by checking for the user's .local directory
as a possible installation path.
This fixes some of the scenario tests which were failing after switching
to a user installation in Ifaf1948ed5d42eebaa62d7bad375bbfc12b134d5.
Most tests did not fail, since they used the kolla-ansible script from
the source checkout.
Closes-Bug: #1915527
Change-Id: I5b47a146627d06bb3fe4a747c5f20290c726b0f9
There are a few issues fixed here:
- The Barbican API service doesn't set a log file, so all the Barbican API
service logs go to loadwsgi.py.log by default.
- The logs in loadwsgi.py.log are not ingested properly by Fluentd.
- uWSGI logs go to barbican-api.log, which would normally be used as the
log file for the Barbican API service.
This patch makes the following changes to address the above issues:
- All uWSGI logs (from the Emperor and Vassals) go to barbican_api_uwsgi_access.log.
Although these logs are not strictly access logs, this follows the existing
pattern for WSGI logs.
- The Barbican API service logs are written to barbican-api.log instead of
loadwsgi.py.log. This follows the pattern used by other OpenStack services.
- Fluentd is configured to parse the Barbican API service logs as it would with
other OpenStack Python services.
Change-Id: I6d03fa8c81c52b6f061514a836bbd15bb6639aaf
Closes-Bug: #1891343
It is now possible to deploy either the 1.x or 2.x version of Prometheus.
The new 2.x version introduces breaking changes in terms of storage
format and command line options.
Change-Id: I80cc6f1947f3740ef04b29839bfa655b14fae146
Co-Authored-By: Radosław Piliszek <radoslaw.piliszek@gmail.com>
With this patch, Monasca no longer relies on automatic topic creation
in Kafka, and instead pre-creates all topics before bringing up the
containers. If a topic already exists it will not be
changed, so existing users are not affected.
This patch allows per topic customisations, such as increasing the
number of partitions on particular topics and also works around
a race condition in automatic topic creation where multiple instances
of the same service could race to create a topic, causing some of the
services to restart and throw an error before resuming normal
operation.
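As an illustration, per-topic customisation could be expressed as a list of
topic definitions consumed by the pre-creation tasks; the variable name and
values below are hypothetical, not the exact ones used by the role:

  # Hypothetical variable shape; names and values are illustrative only.
  monasca_kafka_topics:
    - name: metrics
      partitions: 30
      replication_factor: 2
    - name: alarm-state-transitions
      partitions: 12
      replication_factor: 2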
Change-Id: Ib15c95bb72cf79e9e55945d757b248e06f5f4065
The bootstrap process tries to remove existing apparmor profiles but
does not consider the case where they are disabled. This change fixes
the scenario where the libvirt profile exists but is disabled.
Closes-Bug: 1909874
Change-Id: Ied0f2acc420bd5cf1e092c8aee358cba35bd8d5d
This change enables the use of Docker healthchecks for cloudkitty
services.
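As an illustration, kolla-ansible healthchecks are typically driven by role
defaults along these lines; the variable names and test command below follow
the common pattern and may not match the cloudkitty role exactly:

  # Illustrative defaults only; exact names and values may differ in the role.
  cloudkitty_api_enable_healthchecks: "{{ enable_container_healthchecks }}"
  cloudkitty_api_healthcheck:
    interval: "{{ default_container_healthcheck_interval }}"
    retries: "{{ default_container_healthcheck_retries }}"
    start_period: "{{ default_container_healthcheck_start_period }}"
    test: ["CMD-SHELL", "healthcheck_curl http://{{ api_interface_address }}:{{ cloudkitty_api_port }}"]
    timeout: "{{ default_container_healthcheck_timeout }}"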
Implements: blueprint container-health-check
Change-Id: I19892035382ffff5200e88da53408a19e72c9d68
This change fixes a failure when deploying ovs-dpdk with Ansible, where
the neutron_openvswitch_agent container could not start.
dpdk_tunnel is a role variable, but kolla_address reads values from
hostvars, so this variable and its related ones need to move to
group_vars/all.yml.
neutron_openvswitch_agent connects to the OVS database at 127.0.0.1,
but the OVS database listens on the management interface.
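A minimal sketch of the intended layout, assuming the existing
dpdk_tunnel_interface variable name; the defaults shown are illustrative:

  # ansible/group_vars/all.yml (illustrative defaults)
  dpdk_tunnel_interface: "{{ tunnel_interface }}"
  dpdk_tunnel_interface_address: "{{ 'dpdk_tunnel' | kolla_address }}"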
Closes-Bug: 1908850
Change-Id: I86a13d2476644bfa2545a6737752cda1ade34d23
This can improve performance of image format conversion and encryption, if
sufficient memory is available on the cinder-volume host.
Closes-Bug: #1897276
Change-Id: I4ca1c4db7b66fdfc6bb873aad2570234f3882d81
MariaDB recovery fails if a cluster has previously been deployed, but one or
more of the mariadb containers does not exist.
Steps to reproduce
==================
* Deploy a mariadb galera cluster
* Remove the mariadb container from at least one host (docker rm -f mariadb)
* Run kolla-ansible mariadb_recovery
Expected results
================
The cluster is recovered, and a new container deployed where necessary.
Actual results
==============
The task 'Stop MariaDB containers' fails on any host where the container does
not exist.
Solution
========
This change fixes the issue by using the 'ignore_missing' flag for kolla_docker
with the stop_container action. This means the task does not fail when the
container does not exist. It is also necessary to swap some 'docker cp'
commands for 'cp' on the host, using the path to the volume.
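With the flag set, the stop task looks roughly like this (a sketch, not the
exact task from the role):

  - name: Stop MariaDB containers
    become: true
    kolla_docker:
      action: "stop_container"
      name: "mariadb"
      ignore_missing: true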
Closes-Bug: #1907658
Change-Id: Ibd4a6adeb8443e12c45cbab65f501392ffb16fc7
The 'prechecks : Checking Docker version' task previously failed with
Docker 20.10.0. The regex used to parse the version was returning
0.10.0, which is not above the minimum. The previous version of 19.x
would have been parsed as 9.x, which is above the minimum.
This change fixes the issue by matching the beginning and end of the
version using \b.
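As an illustration of the effect (not the exact precheck task), using the
Ansible regex_search filter:

  # With \b anchors the full '20.10.0' string is captured, rather than the
  # partial '0.10.0' match described above.
  - name: Show parsed Docker version (illustration only)
    debug:
      msg: '{{ "20.10.0" | regex_search("\\b\\d+\\.\\d+\\.\\d+\\b") }}'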
Depends-On: https://review.opendev.org/766183
Change-Id: I2a23eea7effb5b9a5e73361bcd48bd2e16d1569c
Closes-Bug: 1907436
Those log levels can build up over time and create unnecessarily high metrics cardinality.
Change-Id: Ib1a03772d0bd58758430b37b4f2f67126cf86fa3
Closes-bug: #1906796
Add a scrape_timeout option to the
prometheus_openstack_exporter job in order
to avoid timeouts in large OpenStack environments.
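A sketch of the resulting scrape configuration; the job name, timeout value
and target below are examples rather than the defaults chosen by the role:

  scrape_configs:
    - job_name: openstack_exporter
      scrape_interval: 60s
      scrape_timeout: 45s  # example value; must not exceed the scrape interval
      static_configs:
        - targets:
            - '192.168.1.10:9198'  # example exporter address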
Change-Id: If96034e602bee3b3eea34a2656047355e1d17eec
Closes-Bug: #1903547
Currently we set enable-chassis-as-gw on compute nodes when distributed FIP
is enabled, but that is not required for FIP functionality.
Change-Id: Ic880a9479fa0cdbb1d1cae3dbe9523ef2e1132ce
Closes-Bug: #1901960
Add file to the reno documentation build to show release notes for
stable/victoria.
Use pbr instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/victoria.
Change-Id: Iad61fa88f8afa7d5f39154b9466338b417bbf40a
Sem-Ver: feature
Follows the existing backend patterns to add support for the GlusterFS
NFS driver.
The NFS server type used by the GlusterFS backend can be Gluster or
Ganesha; currently only Gluster is supported.
The GlusterFS NFS driver requires the glusterfs-fuse package to be installed
in advance in the Kolla manila-share image, which has been merged
in https://review.opendev.org/747510.
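A minimal globals.yml sketch for enabling the backend, assuming the variable
follows the existing enable_manila_backend_* naming pattern:

  # globals.yml (sketch; backend variable name assumed from the naming pattern)
  enable_manila: "yes"
  enable_manila_backend_glusterfs_nfs: "yes"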
Change-Id: I7fdb121b5bf9850d62246a24f9b17d226028c2ca
During a deploy, if keystone Fernet key rotation happens before the
keystone container starts, the rotation may fail with 'permission
denied'. This happens because only config.json for the keystone service sets
the permissions for /etc/keystone/fernet-keys.
This change fixes the issue by also setting the permissions for
/etc/keystone/fernet-keys in config.json for keystone-fernet and
keystone-ssh.
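The permissions entry added for keystone-fernet and keystone-ssh mirrors the
one keystone already has; it is shown here as YAML for readability (the real
file is JSON, and the owner value is an assumption):

  permissions:
    - path: /etc/keystone/fernet-keys
      owner: keystone:keystone  # assumed owner; the actual template defines it
      recurse: true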
Change-Id: I561e4171d14dcaad8a2a9a36ccab84a670daa904
Closes-Bug: #1888512
Currently we check the age of the primary Fernet key on Keystone
startup, and fail if it is older than the rotation interval. While this
may seem sensible, there are various reasons why the key may be older
than this:
* if the rotation interval is not a factor of the number of seconds in a
week, the rotation schedule will be lumpy, with the final rotation
interval being up to twice the nominal one
* if a keystone host is unavailable at its scheduled rotation time,
rotation will not happen. This may happen multiple times
We could do several things to avoid this issue:
1. remove the check on the age of the key
2. multiply the rotation interval by some factor to determine the
allowed key age
This change goes for the simpler option 1. It also cleans up some
terminology in the keystone-startup.sh script.
Closes-Bug: #1895723
Change-Id: I2c35f59ae9449cb1646e402e0a9f28ad61f918a8