Add "enable_prometheus_etcd_integration" configuration parameter which
can be used to configure Prometheus to scrape etcd metrics endpoints.
The default value of "enable_prometheus_etcd_integration" is set to
the combined values of "enable_prometheus" and "enable_etcd".
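For example, a deployer can override the default and enable scraping
explicitly in globals.yml (a minimal sketch; the "yes"/"no" strings
follow the usual Kolla Ansible boolean convention):

  # globals.yml
  enable_prometheus_etcd_integration: "yes"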
Change-Id: I7a0b802c5687e2d508e06baf55e355d9761e806f
While I8bb398e299aa68147004723a18d3a1ec459011e5 stopped setting
the net.ipv4.ip_forward sysctl, this change explicitly removes the
option from the Kolla sysctl config file. In the absence of another
source for this sysctl, it should revert to the default of 0 after the
next reboot.
A deployer looking to more aggressively change the value may set
neutron_l3_agent_host_ipv4_ip_forward to 0. Any deployments still
relying on the previous value may set
neutron_l3_agent_host_ipv4_ip_forward to 1.
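For example, a deployment that still relies on forwarding can pin the
sysctl in globals.yml (a minimal sketch; 0 and 1 are the sysctl values
described above):

  # globals.yml
  neutron_l3_agent_host_ipv4_ip_forward: 1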
Related-Bug: #1945453
Change-Id: I9b39307ad8d6c51e215fe3d3bc56aab998d218ec
Since [1] we are not running keepalived directly on the CI network,
and are therefore safeguarded against such collisions.
[1] 8e40629161a329a22022d38b3cb48dea66121b36
Change-Id: Ie25b2d6d48f10c6b295795b3c82c1f8a213f2a8c
In Ironic jobs with Tenks, we saw issues with IPMI commands
failing, resulting in job failures:
Error setting Chassis Boot Parameter 5
A metal3.io commit [1] was found that fixes the issue by moving IPMI
retries from ironic to ipmitool, which has a side-effect of increasing
the timeout. This change applies the same configuration.
This change has been adapted from an analogous change in
kayobe-config-dev. [2]
[1] 6bc1499d8b
[2] Ib4fce74cebebe85c31049eafe2eeb6b28dfab041
Co-Authored-By: Mark Goddard <mark@stackhpc.com>
Change-Id: I552417b9da03b8dfc9406e0ff644092579bc7122
Ironic is dropping default_boot_option, and the new default has been
around for quite a while now, so let's remove this old scary comment.
Change-Id: I80d645cb97251ac63e04d7ec1c87d4600d17d4ee
Since I30c2ad2bf2957ac544942aefae8898cdc8a61ec6 this container
is always enabled and thus the port should always be checked.
Change-Id: I94a70d89123611899872061bd69593280d0a68c4
Ironic has changed the default PXE to be iPXE (as opposed to plain
PXE) in Yoga. Kolla Ansible supports either one or the other, and we
tend to stick to upstream defaults, so this change enables iPXE
instead of plain PXE by default. Users are allowed to change back,
but they need to take one other action to do so, so it is good to
remind them via upgrade notes either way.
Change-Id: If14ec83670d2212906c6e22c7013c475f3c4748a
These configuration settings were removed in Grafana 6.2. Instead, we
can use [remote_cache], but it is not required since it will use the
database settings by default.
Change-Id: I37966027aea9039b2ecba4214444507e9d87f513
When OpenStack is deployed with Kolla-Ansible, by default there
are no durable queues or exchanges created by the OpenStack
services in RabbitMQ. In Rabbit terminology, not being durable
is referred to as `transient`, and this means that the queue
is generally held in memory.
Whether OpenStack services create durable or transient queues is
traditionally controlled by the Oslo Notification config option:
`amqp_durable_queues`. In Kolla-Ansible, this remains set to
the default of `False` in all services. The only `durable`
objects are the `amq*` exchanges which are internal to RabbitMQ.
More recently, Oslo Notification has introduced support for
Quorum queues [7]. These are a successor to durable classic
queues, however it isn't yet clear if they are a good fit for
OpenStack in general [8].
For clustered RabbitMQ deployments, Kolla-Ansible configures all
queues as `replicated` [1]. Replication occurs over all nodes
in the cluster. RabbitMQ refers to this as 'mirroring of classic
queues'.
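For reference, this mirroring is driven by a RabbitMQ 'ha-mode'
policy; expressed as data, the policy applied by Kolla-Ansible looks
roughly like the following sketch (the name and pattern shown here are
indicative only):

  # Mirror every queue in the vhost across all nodes in the cluster [1].
  policies:
    - name: "ha-all"
      pattern: ".*"
      apply-to: "all"
      priority: 0
      definition:
        ha-mode: "all"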
In summary, this means that a multi-node Kolla-Ansible deployment
will end up with a large number of transient, mirrored queues
and exchanges. However, the RabbitMQ documentation warns against
this, stating that 'For replicated queues, the only reasonable
option is to use durable queues' [2]. This is discussed
further in the following bug report: [3].
Whilst we could try enabling the `amqp_durable_queues` option
for each service (this is suggested in [4]), there are
a number of complexities with this approach, not limited to:
1) RabbitMQ is planning to remove classic queue mirroring in
favor of 'Quorum queues' in a forthcoming release [5].
2) Durable queues will be written to disk, which may cause
performance problems at scale. Note that this includes
Quorum queues which are always durable.
3) Potential for race conditions and other complexity
discussed recently on the mailing list under:
`[ops] [kolla] RabbitMQ High Availability`
The remaining option, proposed here, is to use classic
non-mirrored queues everywhere, and rely on services to recover
if the node hosting a queue or exchange they are using fails.
There is some discussion of this approach in [6]. The downside
of potential message loss needs to be weighed against the real
upsides of increasing the performance of RabbitMQ, and moving
to a configuration which is officially supported and hopefully
more stable. In the future, we can then consider promoting
specific queues to quorum queues, in cases where message loss
can result in failure states which are hard to recover from.
[1] https://www.rabbitmq.com/ha.html
[2] https://www.rabbitmq.com/queues.html
[3] https://github.com/rabbitmq/rabbitmq-server/issues/2045
[4] https://wiki.openstack.org/wiki/Large_Scale_Configuration_Rabbit
[5] https://blog.rabbitmq.com/posts/2021/08/4.0-deprecation-announcements/
[6] https://fuel-ccp.readthedocs.io/en/latest/design/ref_arch_1000_nodes.html#replication
[7] https://bugs.launchpad.net/oslo.messaging/+bug/1942933
[8] https://www.rabbitmq.com/quorum-queues.html#use-cases
Partial-Bug: #1954925
Change-Id: I91d0e23b22319cf3fdb7603f5401d24e3b76a56e
The Prometheus HTTP API is reachable under /api/v1. Without this fix,
CloudKitty receives 404 errors from Prometheus.
Change-Id: Ie872da5ccddbcb8028b8b57022e2427372ed474e
This change adds an Ansible Galaxy requirements file including the
openstack.kolla collection. A new 'kolla-ansible install-deps' command
is provided to install the requirements.
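The requirements file uses the standard Ansible Galaxy format; a
minimal sketch looks like this (the exact contents, including any
version pins, are defined by the change itself):

  # requirements.yml
  collections:
    - name: openstack.kolla

Running 'kolla-ansible install-deps' then installs the collections
listed in this file via ansible-galaxy.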
With the new collection in place, this change also switches to using the
baremetal role from the openstack.kolla collection, and removes the
baremetal role from this repository.
Depends-On: https://review.opendev.org/c/openstack/ansible-collection-kolla/+/820168
Change-Id: I9708f57b4bb9d64eb4903c253684fe0d9147bd4a
Without this configuration, all mount points report the same
utilisation metrics [1]. With the rslave option, all root mounts from
the host are visible in the container, so we can remove the bind mounts
for /proc and /sys.
[1] https://github.com/prometheus/node_exporter#docker
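As a sketch of the kind of bind mount this implies (the exact volume
list is defined by the prometheus role), the host root is mounted
read-only with rslave propagation:

  # Mount / at /host read-only; rslave propagation keeps mounts made on
  # the host visible inside the container.
  - "/:/host:ro,rslave"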
Change-Id: I4087dc81f9d1fa5daa24b9df6daf1f9e1ccd702f
Closes-Bug: #1961438
An FCD (First Class Disk), also known as an Improved Virtual Disk (IVD) or
Managed Virtual Disk, is a named virtual disk independent of
a virtual machine. Using FCDs for Cinder volumes eliminates
the need for shadow virtual machines.
This patch adds Kolla support.
Change-Id: Ic0b66269e6d32762e786c95cf6da78cb201d2765
NSXP is the OpenStack support for the NSX Policy platform. It has
been supported by Neutron since the Stein release. This patch adds
Kolla Ansible support.
This adds a new neutron_plugin_agent type 'vmware_nsxp'. The plugin
does not run any neutron agents.
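To opt in, a deployer selects the new plugin type in globals.yml
(a minimal sketch):

  # globals.yml
  neutron_plugin_agent: "vmware_nsxp"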
Change-Id: I9e9d8f07e586bdc143d293e572031368af7f3fca