The config.json template for neutron-ovn-metadata-agent uses a
hard-coded policy file name of policy.json. This prevents the use of
a policy.yaml file with this service. This patch fixes that.
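A minimal sketch of the config.json entry involved, assuming the fix
follows the neutron_policy_file pattern used elsewhere in Kolla
Ansible (names here are illustrative, not the exact patch):

{% if neutron_policy_file is defined %}
{# illustrative: neutron_policy_file resolves to policy.yaml or policy.json #}
{
    "source": "{{ container_config_directory }}/{{ neutron_policy_file }}",
    "dest": "/etc/neutron/{{ neutron_policy_file }}",
    "owner": "neutron",
    "perm": "0600"
}
{% endif %}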
TrivialFix
Change-Id: Ib96d68f1dc60a0cbb5b79302c1face9c2272946a
In the Xena cycle it was decided to remove the Monasca
Grafana fork due to lack of maintenance. This commit removes
the service and provides a limited workaround using the
Monasca Grafana datasource with vanilla Grafana.
Depends-On: I9db7ec2df050fa20317d84f6cea40d1f5fd42e60
Change-Id: I4917ece1951084f6665722ba9a91d47764d3709a
Follow-up on I91e5c1840ace8f567daf462c4eb3ec1f0c503823
when + run_once do not play nicely together. [1]
The general workaround is to use include_tasks. [2]
However, it is very unlikely that a user wishes to run this role
without any Pacemaker nodes, so the simplification that we use
throughout the Kolla Ansible code should be enough.
[1] https://github.com/ansible/ansible/issues/11496
[2] https://github.com/ansible/ansible/issues/11496#issuecomment-412936547
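For reference, the general shape of the include_tasks workaround from
[2] is to replace run_once with a first-host condition on the include,
roughly like this (a sketch; the task file name is illustrative):

- name: Simulate run_once to dodge the when + run_once interaction
  include_tasks: bootstrap-cluster.yml
  when: inventory_hostname == ansible_play_hosts_all[0]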
Change-Id: Ifaf64e3d9d89b2ec36a883fb7458556745b64802
If docker_configure_for_zun is set to true, then Zun-specific
configuration for Docker is applied to all nodes. It should only be
applied based on the relevant inventory groups. In some cases this can
cause Docker to fail to start. See
https://storyboard.openstack.org/#!/story/2008544 for details.
This change applies the configuration based on the zun-compute and
zun-cni-daemon groups. It also modifies the expression to not assume
that these groups exist in the inventory.
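A hedged sketch of the resulting guard (task and template names are
illustrative, not the exact patch); groups.get with a default avoids
assuming the groups exist:

- name: Apply Zun-specific Docker configuration (illustrative)
  template:
    src: docker-daemon.json.j2
    dest: /etc/docker/daemon.json
  when:
    - docker_configure_for_zun | bool
    - inventory_hostname in groups.get('zun-compute', []) or
      inventory_hostname in groups.get('zun-cni-daemon', [])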
Change-Id: I0141abf0dd83e3a567ea6dcca945f86db129becf
Closes-Bug: #1914378
Story: 2008544
Task: 41645
Co-Authored-By: Buddhika Sanjeewa <bsanjeewa@kln.ac.lk>
The current behaviour is to support supplying a single
folder of Grafana dashboards which can then be populated
into a single folder in Grafana. Some users may wish
to have sub-folders of dashboards, and load these into
separate dashboard folders in Grafana via a custom
provisioning file. For example, a user may have a
sub-folder of Ceph dashboards that they wish to keep
separate from OpenStack dashboards. This patch supports
sub-folders whilst not affecting the original mechanism.
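A hedged sketch of such a custom provisioning file (folder names and
paths are illustrative):

apiVersion: 1
providers:
  - name: ceph
    type: file
    folder: Ceph
    options:
      path: /etc/grafana/dashboards/ceph
  - name: openstack
    type: file
    folder: OpenStack
    options:
      path: /etc/grafana/dashboards/openstack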
TrivialFix
Change-Id: I9cd289a1ea79f00cee4d2ef30cbb508ac73f9767
- Replace hardcoded haproxy monitor user with variable.
- Rename mariadb_backup variable to mariadb_backup_possible.
- Drop creation of the monitor user in handlers, as this is
  now handled in register.yml for good reason.
Change-Id: I255a79d36ae18ca42d0befd00b235ca509197db3
This change enables the use of Docker healthchecks for rabbitmq services.
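The healthchecks can be toggled and tuned via the usual variables; a
hedged sketch, assuming the rabbitmq_healthcheck_* names follow the
pattern of the other services:

enable_container_healthchecks: "yes"
rabbitmq_healthcheck_interval: "30"
rabbitmq_healthcheck_retries: "3"
rabbitmq_healthcheck_timeout: "30"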
Implements: blueprint container-health-check
Depends-On: https://review.opendev.org/c/openstack/kolla/+/784562
Change-Id: I23a2c2efab858b9ed39c6ce0ec4a82df10e7f93d
Adds the HAcluster Ansible role. This role provides a High
Availability clustering solution composed of Corosync, Pacemaker and
Pacemaker Remote. HAcluster is added as a helper role for Masakari,
which requires it for its host monitoring, allowing it to provide HA
to instances on a failed compute host.
Kolla hacluster images merged in [1].
[1] https://review.opendev.org/#/c/668765/
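A hedged sketch of enabling the role alongside Masakari (assuming the
usual enable_* switches):

enable_hacluster: "yes"
enable_masakari: "yes"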
Change-Id: I91e5c1840ace8f567daf462c4eb3ec1f0c503823
Implements: blueprint ansible-pacemaker-support
Co-Authored-By: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Co-Authored-By: Mark Goddard <mark@stackhpc.com>
Kolla Ansible currently installs a MariaDB cluster on the hosts
defined in the group ['mariadb'] and renders the HAProxy configuration
for these hosts. This is not enough if a user wants to have several
service databases in several MariaDB clusters (shards). Spreading
service databases across multiple clusters (shards) is useful,
especially for databases with high load (neutron, nova).
How it works:
It works exactly the same as before, but the group reference 'mariadb'
is now used as the group where all MariaDB clusters (shards) are
located, and the clusters are installed into dynamic groups created by
group_by and the host variable 'mariadb_shard_id'.
It also adds a special user 'shard_X' which will be used for creating
users and databases, but only if HAProxy is not used as the load
balancing solution.
This patch does not affect users who have all databases on the same
DB cluster on hosts in the group 'mariadb'; the host variable
'mariadb_shard_id' is set to 0 if not defined.
The MariaDB task in loadbalancer.yml (haproxy) configures the default
shard hosts as HAProxy backends. If the mariadb role is used to
install several clusters (shards), only the default one is load
balanced via HAProxy.
MariaDB backup works only for the default shard (cluster) when HAProxy
is used as the MariaDB load balancer; if ProxySQL is used, all shards
are backed up.
Once this patch is merged, it will open the way for ProxySQL patches
implementing L7 SQL balancing based on users and schemas.
Example of inventory:
[mariadb]
server1
server2
server3 mariadb_shard_id=1
server4 mariadb_shard_id=1
server5 mariadb_shard_id=2
server6 mariadb_shard_id=3
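A hedged sketch of the dynamic grouping described above (the group key
naming is illustrative, not the exact patch):

- name: Group MariaDB hosts by shard (illustrative)
  group_by:
    key: "mariadb_shard_{{ mariadb_shard_id | default(0) }}"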
Extra:
wait_for_loadbalancer is removed rather than modified, as its function
is already served by the check. The relevant refactor is applied as
well.
Change-Id: I933067f22ecabc03247ea42baf04f19100dffd08
Co-Authored-By: Radosław Piliszek <radoslaw.piliszek@gmail.com>
* Don't generate masakari.conf for instance monitor
* Don't generate masakari-monitors.conf for API or engine
* Use a consistent name for dimensions -
masakari_instancemonitor_dimensions
* Fix source code paths in dev mode
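For example, the consistently named dimensions variable can be set
like the other services' dimensions (a sketch):

masakari_instancemonitor_dimensions: "{{ default_container_dimensions }}"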
Change-Id: I551f93c9bf1ad6712b53c316074ae1df84e4352b
We can't check this with timedatectl, as it is not aware
of any "non-native" NTP daemon.
This could be a warning-level message, but we don't have
such messages in the prechecks.
Closes-Bug: #1922721
Change-Id: I6db37576118cf5cff4ba7a63e179f0ab37467d22
Kolla Ansible supports configuration of the project used by Octavia to
communicate with other services, via octavia_service_auth_project. Until
Ussuri, this was set to admin. In Ussuri it changed to service. It may
also be set to a different value.
Kolla Ansible currently gives the octavia user the admin role in the
project, but it does not ensure that the project exists. For admin and
service projects, this is not a problem. If the project has been
customised however, it will not necessarily exist, which will cause
Octavia deployment to fail.
This change fixes the issue by ensuring that the service auth project
exists, in addition to the service project.
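A hedged sketch of the kind of task involved (module choice and
parameters are illustrative, not the exact patch):

- name: Ensure Octavia service auth project exists (illustrative)
  os_project:
    name: "{{ octavia_service_auth_project }}"
    domain: default
    state: present
  run_once: true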
Closes-Bug: #1922100
Change-Id: I968efbf3ad1de676548b4e3aeefc20bf80ca94a0
host -> host_ip [0]
Remove the deprecated configuration option notification_topics:
WARNING oslo_config.cfg [-] Deprecated: Option "notification_topics"
from group "DEFAULT" is deprecated. Use option "topics" from
group "oslo_messaging_notifications".
[0] https://docs.openstack.org/cyborg/latest/configuration/sample-config.html
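In cyborg.conf terms, the move looks like this (a sketch based on the
deprecation warning above; the topic value is illustrative):

# deprecated
[DEFAULT]
notification_topics = notifications

# replacement
[oslo_messaging_notifications]
topics = notifications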
Change-Id: Ia5d53fb60d34c1509c6cdb905cbd0a93dd1c8b3d