This change enables the use of Docker healthchecks for zookeeper services.
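As a rough sketch only (the variable names follow the convention used by
other services' healthchecks, and the check command shown is illustrative
rather than necessarily the exact one used by the role), the healthcheck
is defined as a dict in the role defaults and passed to the container:

# Sketch only: names and check command are illustrative.
zookeeper_enable_healthchecks: "{{ enable_container_healthchecks }}"
zookeeper_healthcheck:
  interval: "{{ default_container_healthcheck_interval }}"
  retries: "{{ default_container_healthcheck_retries }}"
  start_period: "{{ default_container_healthcheck_start_period }}"
  test: ["CMD-SHELL", "healthcheck_listen java {{ zookeeper_client_port }}"]
  timeout: "{{ default_container_healthcheck_timeout }}"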
Implements: blueprint container-health-check
Change-Id: I344115580840f3aa3f872c6136492f7bd7b73db8
Adds the HAcluster Ansible role. This role provides a High Availability
clustering solution composed of Corosync, Pacemaker and Pacemaker Remote.
HAcluster is added as a helper role for Masakari, which requires it for
host monitoring, allowing it to provide HA to instances on a failed
compute host.
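For reference, a minimal sketch of how an operator might opt in via
globals.yml; the enable_hacluster flag name is an assumption here, while
enable_masakari is the existing Masakari switch:

# Sketch only: enable_hacluster is assumed for illustration.
enable_hacluster: "yes"
enable_masakari: "yes"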
Kolla hacluster images merged in [1].
[1] https://review.opendev.org/#/c/668765/
Change-Id: I91e5c1840ace8f567daf462c4eb3ec1f0c503823
Implements: blueprint ansible-pacemaker-support
Co-Authored-By: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Co-Authored-By: Mark Goddard <mark@stackhpc.com>
Kolla Ansible currently installs a MariaDB cluster on the hosts defined
in the 'mariadb' group and renders the HAProxy configuration for those
hosts. This is not enough if a user wants to spread service databases
across several MariaDB clusters (shards). Spreading service databases
over multiple clusters (shards) is useful especially for databases with
high load (neutron, nova).
How it works:
It works exactly the same as before, but the 'mariadb' group is now used
as the group where all MariaDB clusters (shards) are located, and the
clusters are installed into dynamic groups created by group_by from the
host variable 'mariadb_shard_id', as sketched below.
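A minimal sketch of that grouping step (the group key prefix is
illustrative, not necessarily the exact one used by the role):

# Sketch only: place each host into a per-shard dynamic group,
# defaulting to shard 0 when mariadb_shard_id is not set.
- name: Group MariaDB hosts by shard
  group_by:
    key: "mariadb_shard_{{ mariadb_shard_id | default(0) }}"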
It also adds a special user 'shard_X' which will be used for creating
users and databases, but only if HAProxy is not used as the
load-balancing solution.
This patch does not affect users who keep all databases on the same
cluster on the hosts in the 'mariadb' group; the host variable
'mariadb_shard_id' is set to 0 if not defined.
The MariaDB tasks in loadbalancer.yml (haproxy) configure the default
shard hosts as HAProxy backends. If the mariadb role is used to install
several clusters (shards), only the default one is load-balanced via
HAProxy.
MariaDB backup works only for the default shard (cluster) when HAProxy
is used as the MariaDB load balancer; if ProxySQL is used, all shards
are backed up.
Once this patch is merged, it opens the way for ProxySQL patches which
will implement L7 SQL balancing based on users and schemas.
Example of inventory:
[mariadb]
server1
server2
server3 mariadb_shard_id=1
server4 mariadb_shard_id=1
server5 mariadb_shard_id=2
server6 mariadb_shard_id=3
Extra:
wait_for_loadbalancer is removed rather than modified, as its role is
already served by the existing check. The relevant refactor is applied
as well.
Change-Id: I933067f22ecabc03247ea42baf04f19100dffd08
Co-Authored-By: Radosław Piliszek <radoslaw.piliszek@gmail.com>
* Don't generate masakari.conf for instance monitor
* Don't generate masakari-monitors.conf for API or engine
* Use a consistent name for dimensions -
masakari_instancemonitor_dimensions
* Fix source code paths in dev mode
Change-Id: I551f93c9bf1ad6712b53c316074ae1df84e4352b
We can't check this with timedatectl as it is not aware
of any "non-native" NTP daemon.
This could be a warning-level message, but the prechecks do not
currently emit warnings.
Closes-Bug: #1922721
Change-Id: I6db37576118cf5cff4ba7a63e179f0ab37467d22
Kolla Ansible supports configuration of the project used by Octavia to
communicate with other services, via octavia_service_auth_project. Until
Ussuri, this was set to admin. In Ussuri it changed to service. It may
also be set to a different value.
Kolla Ansible currently gives the octavia user the admin role in the
project, but it does not ensure that the project exists. For admin and
service projects, this is not a problem. If the project has been
customised however, it will not necessarily exist, which will cause
Octavia deployment to fail.
This change fixes the issue by ensuring that the service auth project
exists, in addition to the service project.
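A sketch of the idea using the openstack.cloud.project module; the actual
change goes through Kolla Ansible's usual service registration machinery,
so the task below is illustrative only:

# Sketch only: ensure the configured auth project exists before
# granting the octavia user the admin role in it.
- name: Ensure Octavia service auth project exists
  openstack.cloud.project:
    name: "{{ octavia_service_auth_project }}"
    state: present
  run_once: true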
Closes-Bug: #1922100
Change-Id: I968efbf3ad1de676548b4e3aeefc20bf80ca94a0
This change enables the use of Docker healthchecks for memcached services.
Implements: blueprint container-health-check
Change-Id: I571e6d6cac634fd86429e12b946d6f7b4a2ab02c
This change enables the use of Docker healthchecks for mistral services.
Implements: blueprint container-health-check
Change-Id: I5c60d22936d0e2e92fc9e6bcbf0a869d3f0b1687
Rename the 'host' option to 'host_ip' [0].
Remove the deprecated notification_topics option:
WARNING oslo_config.cfg [-] Deprecated: Option "notification_topics"
from group "DEFAULT" is deprecated. Use option "topics" from
group "oslo_messaging_notifications".
[0] https://docs.openstack.org/cyborg/latest/configuration/sample-config.html
Change-Id: Ia5d53fb60d34c1509c6cdb905cbd0a93dd1c8b3d
To use third-party Octavia providers (such as the OVN provider), an
octavia-driver-agent container must be running to expose those providers
for use.
The OVN CI job has been extended to deploy Octavia and test the OVN
Load Balancer.
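As an illustration only (the variable names below are assumptions, not
confirmed by this change), an operator might enable the OVN provider in
globals.yml along these lines:

# Sketch only: provider driver/agent variable names are assumed.
enable_octavia: "yes"
octavia_provider_drivers: "ovn:OVN provider"
octavia_provider_agents: "ovn"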
Closes-Bug: #1903506
Depends-On: https://review.opendev.org/c/openstack/kolla/+/771191
Change-Id: Ibafa8b7307981f2a51e630cc113d18af6162171c
One use case for this is generating config in a CI job without access
to the container repository. It also removes the dependency on having
Docker configured for config generation.
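For example, this allows running the existing 'kolla-ansible genconfig'
command on a host where Docker is not set up.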
TrivialFix
Change-Id: I0d388851c8b953af0494e44ae569e7eb9e15c326