- Replace hardcoded haproxy monitor user with variable.
- Rename mariadb_backup variable to mariadb_backup_possible.
- Drop creation of monitor user in handlers as this is
now handled in register.yml for good reason.
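With the hardcoded user gone, the monitor user can be overridden
centrally. A minimal sketch, assuming the new variable is named
mariadb_monitor_user and is set via /etc/kolla/globals.yml:

# /etc/kolla/globals.yml
# Override the MariaDB monitor user used by the haproxy health check;
# previously this name was hardcoded (variable name is an assumption).
mariadb_monitor_user: "haproxy"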
Change-Id: I255a79d36ae18ca42d0befd00b235ca509197db3
This change enables the use of Docker healthchecks for rabbitmq services.
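A minimal sketch of toggling the new healthchecks, assuming the
variable names follow the container-health-check blueprint pattern (a
global enable_container_healthchecks switch plus a per-service
override):

# /etc/kolla/globals.yml
# Disable healthchecks for all supported services ...
enable_container_healthchecks: "no"
# ... or only for rabbitmq (assumed per-service variable name):
rabbitmq_enable_healthchecks: "no"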
Implements: blueprint container-health-check
Depends-On: https://review.opendev.org/c/openstack/kolla/+/784562
Change-Id: I23a2c2efab858b9ed39c6ce0ec4a82df10e7f93d
An editable installation allows changes to be made to the source code
directly, and have those changes applied immediately without having to
reinstall.
pip install -e /path/to/kolla-ansible
The above currently works only in a virtualenv, but there is no reason
not to allow it in all cases. This is useful, for example, when a user
is building their own Docker container with an editable kolla-ansible
installed from git without a virtualenv.
Change-Id: I185f7c09c3f026fd6926a26001393f066ff1860d
It will allow us to fail fast when pulling the image
is a problem - instead of failing in the middle of
deployment.
Change-Id: I017cddcfbbc5449e63d807385216b94e74503c9b
Adds the HAcluster Ansible role. This role contains a High Availability
clustering solution composed of Corosync, Pacemaker and Pacemaker Remote.
HAcluster is added as a helper role for Masakari, which requires it for
its host monitoring, allowing it to provide HA for instances on a failed
compute host.
Kolla hacluster images merged in [1].
[1] https://review.opendev.org/#/c/668765/
Change-Id: I91e5c1840ace8f567daf462c4eb3ec1f0c503823
Implements: blueprint ansible-pacemaker-support
Co-Authored-By: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Co-Authored-By: Mark Goddard <mark@stackhpc.com>
Kolla-ansible currently installs a mariadb cluster on the hosts defined
in group['mariadb'] and renders the haproxy configuration for those
hosts. This is not enough if a user wants to have several service
databases in several mariadb clusters (shards). Spreading service
databases across multiple clusters (shards) is useful especially for
databases with high load (neutron, nova).
How does it work?
It works exactly as before, but the group reference 'mariadb' is now
used as the group where all mariadb clusters (shards) are located, and
the individual clusters are installed into dynamic groups created by
group_by from the host variable 'mariadb_shard_id' (see the sketch
after the inventory example below).
It also adds a special user 'shard_X' which will be used for creating
users and databases, but only if haproxy is not used as the
load-balancing solution.
This patch does not affect users who keep all databases on the same db
cluster on the hosts in group 'mariadb'; the host variable
'mariadb_shard_id' defaults to 0 if not defined.
The mariadb task in loadbalancer.yml (haproxy) configures the default
shard's hosts as haproxy backends. If the mariadb role is used to
install several clusters (shards), only the default one is load
balanced via haproxy.
Mariadb backup works only for the default shard (cluster) when haproxy
is used as the mariadb load balancer; if proxysql is used, all shards
are backed up.
Once this patch is merged, it opens the way for proxysql patches
implementing L7 SQL balancing based on users and schemas.
Example of inventory:
[mariadb]
server1
server2
server3 mariadb_shard_id=1
server4 mariadb_shard_id=1
server5 mariadb_shard_id=2
server6 mariadb_shard_id=3
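A minimal sketch of the dynamic grouping described above; the task
name and group key format are illustrative rather than the exact
implementation:

# Split the hosts of the 'mariadb' group into one dynamic group per
# shard, keyed on the 'mariadb_shard_id' host variable (defaulting
# to shard 0 as described above).
- name: Group MariaDB hosts by shard id
  group_by:
    key: "mariadb_{{ mariadb_shard_id | default(0) }}"

With the inventory above this yields the dynamic groups mariadb_0
(server1, server2), mariadb_1 (server3, server4), mariadb_2 (server5)
and mariadb_3 (server6).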
Extra:
wait_for_loadbalancer is removed rather than modified, as its role is
already served by the existing check. The relevant refactor is applied
as well.
Change-Id: I933067f22ecabc03247ea42baf04f19100dffd08
Co-Authored-By: Radosław Piliszek <radoslaw.piliszek@gmail.com>
* Don't generate masakari.conf for instance monitor
* Don't generate masakari-monitors.conf for API or engine
* Use a consistent name for dimensions -
  masakari_instancemonitor_dimensions (see the sketch after this list)
* Fix source code paths in dev mode
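A minimal sketch of setting the renamed variable in
/etc/kolla/globals.yml; the mem_limit value is purely illustrative:

# /etc/kolla/globals.yml
# Container dimensions for the masakari instance monitor, using the
# consistent variable name introduced by this change.
masakari_instancemonitor_dimensions:
  mem_limit: 512m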
Change-Id: I551f93c9bf1ad6712b53c316074ae1df84e4352b
Often cephadm jobs fail with:
Mar 30 13:01:21 primary bash[75459]: debug 2021-03-30T13:01:21.844+0000 7fa30431f700 -1 error: monitor data filesystem reached concerning levels of available storage space (available: 4% 1.8 GiB)
Let's check whether a 5G OSD helps, and also print df -h output for
reference.
Change-Id: I6960fd0f378aea5a14a73d9228edf86fb86cac6c
We can't check this with timedatectl as it is not aware
of any "non-native" NTP daemon.
This could be a warning-level message but we don't have
such messages from the prechecks.
Closes-Bug: #1922721
Change-Id: I6db37576118cf5cff4ba7a63e179f0ab37467d22
Kolla Ansible supports configuration of the project used by Octavia to
communicate with other services, via octavia_service_auth_project. Until
Ussuri, this was set to admin. In Ussuri it changed to service. It may
also be set to a different value.
Kolla Ansible currently gives the octavia user the admin role in the
project, but it does not ensure that the project exists. For admin and
service projects, this is not a problem. If the project has been
customised, however, it will not necessarily exist, which will cause
Octavia deployment to fail.
This change fixes the issue by ensuring that the service auth project
exists, in addition to the service project.
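A minimal sketch of the idea, using the os_project module purely as an
illustration (the real change lives in the octavia role's Keystone
registration tasks):

# Ensure the (possibly customised) service auth project exists before
# granting the octavia user the admin role in it.
- name: Ensure Octavia service auth project exists
  os_project:
    name: "{{ octavia_service_auth_project }}"
    domain_id: default
    state: present
  run_once: true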
Closes-Bug: #1922100
Change-Id: I968efbf3ad1de676548b4e3aeefc20bf80ca94a0
host -> host_ip [0]
Remove deprecated configuration notification_topics.
WARNING oslo_config.cfg [-] Deprecated: Option "notification_topics"
from group "DEFAULT" is deprecated. Use option "topics" from
group "oslo_messaging_notifications".
[0] https://docs.openstack.org/cyborg/latest/configuration/sample-config.html
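For reference, a sketch of the equivalent rename in cyborg.conf (the
topic value is illustrative):

[DEFAULT]
# notification_topics = notifications  <- deprecated, removed

[oslo_messaging_notifications]
topics = notifications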
Change-Id: Ia5d53fb60d34c1509c6cdb905cbd0a93dd1c8b3d