In the Xena cycle it was decided to remove the Monasca
Grafana fork due to lack of maintenance. This commit removes
the service and provides a limited workaround using the
Monasca Grafana datasource with vanilla Grafana.
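As a rough illustration only (not the exact configuration shipped by
this change), the datasource can be provisioned in vanilla Grafana
along these lines; the plugin type id and URL below are placeholders:

apiVersion: 1
datasources:
  - name: Monasca
    type: monasca-datasource  # placeholder: use the installed plugin's id
    url: http://monasca-api:8070  # placeholder URL
    access: proxy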
Depends-On: I9db7ec2df050fa20317d84f6cea40d1f5fd42e60
Change-Id: I4917ece1951084f6665722ba9a91d47764d3709a
Follow-up on I91e5c1840ace8f567daf462c4eb3ec1f0c503823
when + run_once do not play nicely together. [1]
The general workaround is to use include_tasks. [2]
However, it is very unlikely that a user wishes to run this role
without having any Pacemaker nodes, so the simplification that we
use throughout the Kolla Ansible code should be enough.
[1] https://github.com/ansible/ansible/issues/11496
[2] https://github.com/ansible/ansible/issues/11496#issuecomment-412936547
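For reference, a rough sketch of the include_tasks pattern from [2]:
the condition goes on the include while run_once stays on the included
task (file and group names here are illustrative):

- include_tasks: run-once.yml
  when: groups['hacluster'] | length > 0

# run-once.yml
- name: Task that should run on a single host only
  debug:
    msg: "bootstrap once"
  run_once: true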
Change-Id: Ifaf64e3d9d89b2ec36a883fb7458556745b64802
Add file to the reno documentation build to show release notes for
stable/wallaby.
Use the pbr instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/wallaby.
Sem-Ver: feature
Change-Id: I34e6b2e1b9411e360994684f62414703f3bb2299
If docker_configure_for_zun is set to true, then Zun-specific
configuration for Docker is applied to all nodes. It should only be
applied based on the relevant inventory groups. In some cases this can
cause Docker to fail to start. See
https://storyboard.openstack.org/#!/story/2008544 for details.
This change applies the configuration based on the zun-compute and
zun-cni-daemon groups. It also modifies the expression to not assume
that these groups exist in the inventory.
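A rough sketch of the kind of group-based condition this implies
(simplified; not the exact expression used by the role):

when: >-
  docker_configure_for_zun | bool and
  (inventory_hostname in groups.get('zun-compute', []) or
   inventory_hostname in groups.get('zun-cni-daemon', []))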
Change-Id: I0141abf0dd83e3a567ea6dcca945f86db129becf
Closes-Bug: #1914378
Story: 2008544
Task: 41645
Co-Authored-By: Buddhika Sanjeewa <bsanjeewa@kln.ac.lk>
The current behaviour is to support supplying a single
folder of Grafana dashboards which can then be populated
into a single folder in Grafana. Some users may wish
to have sub-folders of dashboards, and load these into
separate dashboard folders in Grafana via a custom
provisioning file. For example, a user may have a
sub-folder of Ceph dashboards that they wish to keep
separate from OpenStack dashboards. This patch supports
sub-folders whilst not affecting the original mechanism.
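For illustration, a custom provisioning file along these lines could
map sub-folders to separate Grafana folders (names and paths below are
examples only, not defaults):

apiVersion: 1
providers:
  - name: openstack
    folder: OpenStack
    type: file
    options:
      path: /var/lib/grafana/dashboards/openstack
  - name: ceph
    folder: Ceph
    type: file
    options:
      path: /var/lib/grafana/dashboards/ceph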
Trivial-Fix
Change-Id: I9cd289a1ea79f00cee4d2ef30cbb508ac73f9767
- Replace hardcoded haproxy monitor user with variable.
- Rename mariadb_backup variable to mariadb_backup_possible.
- Drop creation of monitor user in handlers as this is
now handled in register.yml for good reason.
Change-Id: I255a79d36ae18ca42d0befd00b235ca509197db3
This change enables the use of Docker healthchecks for rabbitmq services.
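As a rough illustration of the shape such a healthcheck definition
takes (the values and the test command below are assumptions, not the
role defaults):

rabbitmq_healthcheck:
  interval: 30
  retries: 3
  start_period: 5
  test: ["CMD-SHELL", "healthcheck_rabbitmq"]  # assumed helper script name
  timeout: 30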
Implements: blueprint container-health-check
Depends-On: https://review.opendev.org/c/openstack/kolla/+/784562
Change-Id: I23a2c2efab858b9ed39c6ce0ec4a82df10e7f93d
An editable installation allows changes to be made to the source code
directly, and have those changes applied immediately without having to
reinstall.
pip install -e /path/to/kolla-ansible
The above currently works only in a virtualenv, but there is no reason
not to allow it in all cases. This is useful, for example, when a user
is building their own Docker container with an editable kolla-ansible
installed from Git without a virtualenv.
Change-Id: I185f7c09c3f026fd6926a26001393f066ff1860d
It will allow us to fail fast when pulling the image is a problem,
instead of failing in the middle of a deployment.
Change-Id: I017cddcfbbc5449e63d807385216b94e74503c9b
Adds the HAcluster Ansible role. This role contains a High Availability
clustering solution composed of Corosync, Pacemaker and Pacemaker Remote.
HAcluster is added as a helper role for Masakari, which requires it for
its host monitoring, allowing it to provide HA to instances on a failed
compute host.
Kolla hacluster images merged in [1].
[1] https://review.opendev.org/#/c/668765/
Change-Id: I91e5c1840ace8f567daf462c4eb3ec1f0c503823
Implements: blueprint ansible-pacemaker-support
Co-Authored-By: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Co-Authored-By: Mark Goddard <mark@stackhpc.com>
Kolla Ansible currently installs the MariaDB cluster on hosts defined
in group['mariadb'] and renders the HAProxy configuration for these
hosts. This is not enough if a user wants to have several service
databases in several MariaDB clusters (shards). Spreading service
databases across multiple clusters (shards) is useful, especially for
databases with high load (neutron, nova).
How does it work?
It works exactly the same as now, but the group reference 'mariadb'
is now used as the group where all MariaDB clusters (shards) are
located, and the MariaDB clusters are installed into dynamic groups
created by group_by and the host variable 'mariadb_shard_id'.
It also adds a special user 'shard_X' which will be used for creating
users and databases, but only if HAProxy is not used as the
load-balancing solution.
This patch will not affect users who have all databases on the same
database cluster on hosts in the group 'mariadb'; the host variable
'mariadb_shard_id' is set to 0 if not defined.
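A minimal sketch of the dynamic grouping this relies on (the group key
naming here is illustrative, not necessarily what the role uses):

- name: Divide hosts into shard groups
  group_by:
    key: "mariadb_shard_{{ mariadb_shard_id }}"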
The MariaDB task in loadbalancer.yml (haproxy) configures the default
MariaDB shard hosts as HAProxy backends. If the mariadb role is used
to install several clusters (shards), only the default one is
load-balanced via HAProxy.
The MariaDB backup only works for the default shard (cluster) when
HAProxy is used as the MariaDB load balancer; if ProxySQL is used, all
shards are backed up.
After this patch is merged, it will open the way for ProxySQL patches
which will implement L7 SQL balancing based on users and schemas.
Example of inventory:
[mariadb]
server1
server2
server3 mariadb_shard_id=1
server4 mariadb_shard_id=1
server5 mariadb_shard_id=2
server6 mariadb_shard_id=3
Extra:
wait_for_loadbalancer is removed rather than modified, as its role is
already served by the existing check. The relevant refactor is applied
as well.
Change-Id: I933067f22ecabc03247ea42baf04f19100dffd08
Co-Authored-By: Radosław Piliszek <radoslaw.piliszek@gmail.com>
* Don't generate masakari.conf for instance monitor
* Don't generate masakari-monitors.conf for API or engine
* Use a consistent name for dimensions -
masakari_instancemonitor_dimensions
* Fix source code paths in dev mode
Change-Id: I551f93c9bf1ad6712b53c316074ae1df84e4352b
Often cephadm jobs fail with:
Mar 30 13:01:21 primary bash[75459]: debug 2021-03-30T13:01:21.844+0000 7fa30431f700 -1 error: monitor data filesystem reached concerning levels of available storage space (available: 4% 1.8 GiB)
Let's check whether a 5G OSD helps and also print df -h output for
reference.
Change-Id: I6960fd0f378aea5a14a73d9228edf86fb86cac6c
We can't check this with timedatectl as it is not aware
of any "non-native" NTP daemon.
This could be a warning-level message but we don't have
such messages from the prechecks.
Closes-Bug: #1922721
Change-Id: I6db37576118cf5cff4ba7a63e179f0ab37467d22