This provides a generic mechanism to include extra files
that you can reference in prometheus.yml, for example:
scrape_targets:
  - job_name: ipmi
    params:
      module: default
    scrape_interval: 1m
    scrape_timeout: 30s
    metrics_path: /ipmi
    scheme: http
    file_sd_configs:
      - files:
          - /etc/prometheus/extras/file_sd/ipmi-exporter-targets.yml
        refresh_interval: 5m
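For reference, the targets file referenced above follows the standard
Prometheus file_sd format: a list of targets with optional labels. A
minimal sketch (addresses and port are placeholders, not taken from
this change):
    - targets:
        - 192.0.2.10:9290
        - 192.0.2.11:9290
      labels:
        job: ipmi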
Change-Id: Ie2f085204b71725b901a179ee51541f1f383c6fa
Related: blueprint custom-prometheus-targets
This provides a mechanism to scrape targets defined outside of kolla-ansible.
Depends-On: https://review.opendev.org/#/c/685671/
Change-Id: I0950341b147bb374b4128f09f807ef5a756f5dfa
Related: blueprint custom-prometheus-targets
Refactor service configuration to use the copy certificates task. This
reduces code duplication and simplifies implementing encryption of
backend HAProxy traffic for individual services.
Change-Id: I0474324b60a5f792ef5210ab336639edf7a8cd9e
Some services look for /etc/timezone on Debian/Ubuntu, so we should
introduce it to the containers.
In addition, this adds prechecks for /etc/localtime and /etc/timezone.
Closes-Bug: #1821592
Change-Id: I9fef14643d1bcc7eee9547eb87fa1fb436d8a6b3
This is useful to people who manage their Prometheus Server
externally to Kolla Ansible, or want to use the exporters with
another framework such as Monasca.
Change-Id: Ie3f61e2e186c8e77e21a7b53d2bd7d2a27eee18e
When a certificate file under /etc/kolla/certificates/ is changed, the
certificate inside the container is not updated. With this change,
running kolla-ansible deploy after a certificate change restarts the
affected containers.
Partially-Implements: blueprint custom-cacerts
Change-Id: Iaac6f37e85ffdc0352e8062ae5049cc9a6b3db26
Signed-off-by: yj.bai <bai.yongjun@99cloud.net>
Both include_role and import_role expect the role's name to be given
via the "name" parameter instead of "role".
This worked, but caused errors with ansible-lint.
See: https://review.opendev.org/694779
Change-Id: I388d4ae27111e430d38df1abcb6c6127d90a06e0
We assume that all groups are present in the inventory, and quite obtuse
errors can result if any are not.
This change adds a precheck that checks for the presence of all expected
groups in the inventory for each service. It also introduces a common
service-precheck role that we can use for other common prechecks.
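As a generic illustration of the kind of check involved (not the exact
task from this change; the group name is just an example):
    - name: Fail if the service's inventory group is missing
      fail:
        msg: "Group 'prometheus' not found in the inventory"
      when: "'prometheus' not in groups"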
Change-Id: Ia0af1e7df4fff7f07cd6530e5b017db8fba530b3
Partially-Implements: blueprint improve-prechecks
Service REST API URLs should be constructed using the
{{ internal_protocol }} and {{ external_protocol }} configuration
parameters.
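For example, an internal endpoint templated this way might look like
(host and port are placeholders):
    {{ internal_protocol }}://<host>:<port>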
Change-Id: Id1e8098cf59f66aa35b371149fdb4b517fa4c908
Closes-Bug: 1862817
Service configuration URLs should be constructed using
kolla_internal_fqdn instead of kolla_internal_vip_address. Otherwise SSL
validation will fail when certificates are issued using domain names.
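An illustrative before/after of the host part (placeholder port):
    before: {{ internal_protocol }}://{{ kolla_internal_vip_address }}:<port>
    after:  {{ internal_protocol }}://{{ kolla_internal_fqdn }}:<port>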
Change-Id: I21689e22870c2f6206e37c60a3c33e19140f77ff
Closes-Bug: 1862419
Kolla-Ansible Ceph deployment mechanism has been deprecated in Train [1].
This change removes the Ansible code and associated CI jobs.
[1]: https://review.opendev.org/669214
Change-Id: Ie2167f02ad2f525d3b0f553e2c047516acf55bc2
When kolla_copy_ca_into_containers is set to "yes", the Certificate
Authority in /etc/kolla/certificates will be copied into service
containers to enable trust for that CA. This is especially useful when
the CA is self-signed and would not be trusted by default.
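A minimal globals.yml snippet enabling this behaviour:
    kolla_copy_ca_into_containers: "yes"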
Partially-Implements: blueprint custom-cacerts
Change-Id: I4368f8994147580460ebe7533850cf63a419d0b4
For the CentOS 7 to 8 transition, we will have a period where both
CentOS 7 and 8 images are available. We differentiate these images via a
tag - the CentOS 8 images will have a tag of train-centos8 (or
master-centos8 temporarily).
To achieve this, and maintain backwards compatibility for the
openstack_release variable, we introduce a new 'openstack_tag' variable.
This variable is based on openstack_release, but has a suffix of
'openstack_tag_suffix', which is empty except on CentOS 8 where it has a
value of '-centos8'.
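A sketch of how these variables fit together (illustrative, not the
literal definition from the change):
    openstack_tag_suffix: ""    # "-centos8" on CentOS 8
    openstack_tag: "{{ openstack_release }}{{ openstack_tag_suffix }}"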
Change-Id: I12ce4661afb3c255136cdc1aabe7cbd25560d625
Partially-Implements: blueprint centos-rhel-8
In a deployment where Prometheus is enabled and
Alertmanager is disabled, the task "Copying over
prometheus config file" in
'ansible/roles/prometheus/tasks/config.yml' will
fail to template the Prometheus configuration file
'ansible/roles/prometheus/templates/prometheus.yml.j2'
as the variable 'prometheus_alert_rules' does not
contain the key 'files'. This commit fixes this bug.
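One possible shape of such a guard in the template (purely
illustrative, not the exact patch):
    {% if prometheus_alert_rules is defined and prometheus_alert_rules.files is defined %}
    rule_files:
      ...
    {% endif %}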
Change-Id: Idbe1e52dd3693a6f168d475f9230a253dae64480
Closes-Bug: #1854540
As part of the effort to implement Ansible code linting in CI
(using ansible-lint) - we need to implement recommendations from
ansible-lint output [1].
One of them is to stop using local_action in favor of delegate_to -
to increase readability and match the style of typical Ansible tasks.
[1]: https://review.opendev.org/694779/
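A generic before/after illustration (not a task from the kolla-ansible
tree):
    # before
    - name: Run a check on the Ansible control host
      local_action: command echo hello

    # after
    - name: Run a check on the Ansible control host
      command: echo hello
      delegate_to: localhost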
Partially implements: blueprint ansible-lint
Change-Id: I46c259ddad5a6aaf9c7301e6c44cd8a1d5c457d3
Introduce kolla_address filter.
Introduce put_address_in_context filter.
Add AF config to vars.
Address contexts:
- raw (default): <ADDR>
- memcache: inet6:[<ADDR>]
- url: [<ADDR>]
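For example, an IPv6 address such as fd00::1 renders as follows in
each context:
    raw:      fd00::1
    memcache: inet6:[fd00::1]
    url:      [fd00::1]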
Other changes:
- globals.yml - mention just IP in comment
- prechecks/port_checks (api_intf) - kolla_address handles validation
- 3x interface conditional (swift configs: replication/storage)
- 2x interface variable definition with hostname
  (haproxy listens; api intf)
- 1x interface variable definition with hostname with bifrost exclusion
  (baremetal pre-install /etc/hosts; api intf)
- neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on tunnel network
- basic multinode source CI job for IPv6
- prechecks for rabbitmq and qdrouterd use proper NSS database now
- MariaDB Galera Cluster WSREP SST mariabackup workaround
  (socat and IPv6)
- Ceph naming workaround in CI
  TODO: probably needs documenting
- RabbitMQ IPv6-only proto_dist
- Ceph ms switch to IPv6 mode
- Remove neutron-server ml2_type_vxlan/vxlan_group setting
  as it is not used (let's avoid any confusion)
  and could break setups without proper multicast routing
  if it started working (also IPv4-only)
- haproxy upgrade checks for slaves based on ipv6 addresses
TODO:
- ovs-dpdk grabs ipv4 network address (w/ prefix len / submask)
  not supported, invalid by default because neutron_external has no address
  No idea whether ovs-dpdk works at all atm.
- ml2 for xenapi
  Xen is not supported too well.
  This would require working with XenAPI facts.
- rp_filter setting
  This would require meddling with ip6tables (there is no sysctl param).
  By default nothing is dropped.
  Unlikely we really need it.
- ironic dnsmasq is configured IPv4-only
  dnsmasq needs DHCPv6 options and testing in vivo.
KNOWN ISSUES (beyond us):
- One cannot use IPv6 address to reference the image for docker like we
  currently do, see: https://github.com/moby/moby/issues/39033
  (docker_registry; docker API 400 - invalid reference format)
  workaround: use hostname/FQDN
- RabbitMQ may fail to bind to IPv6 if hostname resolves also to IPv4.
  This is due to old RabbitMQ versions available in images.
  IPv4 is preferred by default and may fail in the IPv6-only scenario.
  This should be no problem in real life as IPv6-only is indeed IPv6-only.
  Also, when new RabbitMQ (3.7.16/3.8+) makes it into images, this will
  no longer be relevant as we supply all the necessary config.
  See: https://github.com/rabbitmq/rabbitmq-server/pull/1982
- For reliable runs, at least Ansible 2.8 is required (2.8.5 confirmed
  to work well). Older Ansible versions are known to miss IPv6 addresses
  in interface facts. This may affect redeploys, reconfigures and
  upgrades which run after VIP address is assigned.
  See: https://github.com/ansible/ansible/issues/63227
- Bifrost Train does not support IPv6 deployments.
  See: https://storyboard.openstack.org/#!/story/2006689
Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
Implements: blueprint ipv6-control-plane
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Sometimes, as cloud admins, we want to update only the code that is
running in a cloud, without doing anything else. This adds an action to
kolla-ansible that allows us to do that.
Change-Id: I904f595c69f7276e71692696471e32fd1f88e6e8
Implements: blueprint deploy-containers-action
This change introduces a way to pass extra options to Prometheus.
Currently, Prometheus runs with nearly default options, and as clouds
grow larger, operators need to pass extra parameters to Prometheus.
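As an illustration only (the exact variable name is defined by the
change itself and may differ), a globals.yml entry along these lines
would pass extra command-line flags:
    prometheus_cmdline_extras: "--storage.tsdb.retention.time=30d"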
Change-Id: Ic773c0b73062cf3b2285343bafb25d5923911834
This commit follows up on the work in Kolla to deploy and configure
the Prometheus blackbox exporter.
An example blackbox-exporter module called os_endpoint has been added
(disabled by default). This allows probing endpoints over HTTP and
HTTPS, and can be used to monitor that OpenStack endpoints return a
status code of either 200 or 300 and contain the word 'versions' in the
payload.
This change introduces a new variable `prometheus_blackbox_exporter_endpoints`.
Currently no defaults are specified because the configuration is heavily
dependent on the deployment.
Co-authored-by: Jack Heskett <Jack.Heskett@gresearch.co.uk>
Change-Id: I36ad4961078d90e2fd70c9a3368f5157d6fd89cd
Edited the
ansible/roles/prometheus/templates/prometheus-alertmanager.json.j2 file
to change mesh.peer and mesh.listen-address to cluster.peer and
cluster.listen-address. This stopped alertmanager from crashing with
the error "--mesh.peer is an invalid flag".
Change-Id: Ia0447674b9ec377a814f37b70b4863a2bd1348ce
Signed-off-by: Mark Flynn <markandrewflynn@gmail.com>
Currently, we have a lot of logic for checking if a handler should run,
depending on whether config files have changed and whether the
container configuration has changed. As rm_work pointed out during
the recent haproxy refactor, these conditionals are typically
unnecessary - we can rely on Ansible's handler notification system
to only trigger handlers when they need to run. This removes a lot
of error prone code.
This patch removes conditional handler logic for all services. It is
important to ensure that we no longer trigger handlers unnecessarily,
because without these checks in place any unnecessary notification will
trigger a restart of the containers.
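A generic sketch of the pattern (not kolla-ansible code): the task
notifies the handler only when the rendered file actually changes, so
no extra guard is needed on the handler itself.
    - name: Copy service config
      template:
        src: service.conf.j2
        dest: /etc/service/service.conf
      notify: Restart service container

    # in handlers:
    - name: Restart service container
      command: docker restart service_container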
Implements: blueprint simplify-handlers
Change-Id: I4f1aa03e9a9faaf8aecd556dfeafdb834042e4cd
Patch [1] did not add extra volumes support for all services. In order
to unify the management of volumes, we add extra volumes support for
the remaining services.
[1] 12ff28a693
Change-Id: Ie148accdd8e6c60df6b521d55bda12b850c0d255
Partially-Implements: blueprint support-extra-volumes
Signed-off-by: ZijianGuo <guozijn@gmail.com>
Many tasks that use Docker already have 'become' specified, but not
all. This change ensures that all tasks using the following modules
have 'become' set:
* kolla_docker
* kolla_ceph_keyring
* kolla_toolbox
* kolla_container_facts
It also adds become for 'command' tasks that use docker CLI.
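For example (a generic illustration, not a specific kolla-ansible
task):
    - name: List running containers
      command: docker ps
      become: true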
Change-Id: I4a5ebcedaccb9261dbc958ec67e8077d7980e496
This change ensures that URLs returned from these services reference
the HAProxy endpoint, rather than the host on which the service is
running.
Closes-Bug: #1825150
Change-Id: I7f966ff749ea37620f1bde7019a598cb9505fa45
Several config file permissions are incorrect on the host. In general,
files should be 0660, and directories and executables 0770.
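For illustration (paths are placeholders), this corresponds to Ansible
tasks such as:
    - name: Ensure config file permissions
      file:
        path: /etc/kolla/some-service/some.conf
        mode: "0660"
    - name: Ensure config directory permissions
      file:
        path: /etc/kolla/some-service
        state: directory
        mode: "0770"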
Change-Id: Id276ac1864f280554e98b937f2845bb424d521de
Closes-Bug: #1821579
All Prometheus services should use the Prometheus install type which
defaults to the Kolla install type, rather than directly using the
Kolla install type.
Change-Id: Ieaa924986dff33d4cf4a90991a8f34534cfc3468
This patch implements support for the elasticsearch-exporter in
kolla-ansible.
The configuration and prechecks are reused from the other exporters.
Depends-On: Id138f12e10102a6dd2cd8d84f2cc47aa29af3972
Change-Id: Iae0eac0179089f159804490bf71f1cf2c38dde54
The patch that this depends on in the Kolla repo updates various Prometheus
exporters. In some cases the command line syntax has changed which prevents
them from starting. This commit updates the command line syntax in-line with
the new versions.
Depends-On: I846989b16fa7f76b11b309b7a9764cec8aaf538d
Change-Id: I1c8c56059e51442d7bf2248b9632021cb529b4ba
This allows an operator to pin the Prometheus docker image
tag for all Prometheus images to that specified by the `prometheus_tag`
variable.
Without this change, the alert manager and cadvisor tags would also need
to be set.
Change-Id: Iadef001af7d3be5b2a39ce5e2363d05a33a775e4
This patch implements the initial support for the
openstack-exporter[0] in the kolla-ansible
prometheus monitoring system.
The configuration and prechecks are reused from the other
exporters and a new template is provided for generating
a os-client-config file required by the exporter.
The default scrape interval is 60 seconds, but it can
be extended via a configuration option.
[0] https://github.com/Linaro/openstack-exporter
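For reference, an os-client-config file for such an exporter generally
follows the standard clouds.yaml layout; a minimal sketch with
placeholder values:
    clouds:
      default:
        auth:
          auth_url: http://192.0.2.1:5000
          username: openstack-exporter
          password: secret
          project_name: service
          user_domain_name: Default
          project_domain_name: Default
        identity_api_version: 3
        region_name: RegionOne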
Change-Id: I4a34c4bb56e74b5cd544972cbd6540d9acb6e4a1
Vitrage already supports Prometheus as a datasource. Kolla can
configure it automatically; only small changes are needed, for example
in the WSGI config file [1].
Co-Authored-By: Hieu LE <hieulq2@viettel.com.vn>
[1] https://review.openstack.org/#/c/584649/8/devstack/apache-vitrage.template
Change-Id: I64028a0dfd9887813b980a31c30c2c1b1046da61
With this change, an operator can stop a service's containers without
stopping all services on a host.
This change is the starting point for fast-forward upgrade support.
In subsequent changes, new flags will be introduced to avoid stopping
data plane services during upgrades.
Change-Id: Ifde7a39d7d8596ef0d7405ecf1ac1d49a459d9ef
Implements: blueprint support-stop-containers
Having all services in one giant haproxy file makes altering
configuration for a service both painful and dangerous. Each service
should be configured with a simple set of variables and rendered with a
single unified template.
Two new templates are available:
* haproxy_single_service_listen.cfg.j2: close to the original style, but
only one service per file
* haproxy_single_service_split.cfg.j2: using the newer haproxy syntax
for separated frontend and backend
For now the default will be the single listen block, for ease of
transition.
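To illustrate the difference between the two styles (a hand-written
sketch, not the template output; names, addresses and ports are
placeholders):
    # single listen block
    listen example_api
      bind 192.0.2.1:5000
      server host1 192.0.2.11:5000 check

    # split frontend/backend
    frontend example_api_front
      bind 192.0.2.1:5000
      default_backend example_api_back

    backend example_api_back
      server host1 192.0.2.11:5000 check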
Change-Id: I6e237438fbc0aa3c89a3c8bd706a53b74e71904b