The Monasca Log API has been removed and in this change we switch
to using the unified API. If dedicated log APIs are required then
this can be supported through configuration. Out of the box, the
Monasca API is used for both logs and metrics, which is envisaged to
work for most use cases.
In order to use the unified API for logs, we need to disable the
legacy Kafka client. We also rename the Monasca API config file
to remove a warning about using the old style name.
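If a dedicated log API is needed, it could be selected with a
globals.yml override along these lines (the variable names are
illustrative assumptions, not the exact options):

    # Hypothetical: point log shipping at a dedicated log API
    # instead of the unified Monasca API.
    monasca_log_api_interface: "internal"
    monasca_log_api_port: 5607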
Depends-On: https://review.opendev.org/#/c/728638
Change-Id: I9b6bf5b6690f4b4b3445e7d15a40e45dd42d2e84
Zun has a new component "zun-cni-daemon" which should be
deployed on every compute node. It is basically an implementation
of CNI (Container Network Interface) that performs the neutron
port binding.
If users are using the capsule (pod) API, the recommended deployment
option is to use "cri" as the capsule driver. This basically means
using a CRI runtime (i.e. the CRI plugin for containerd) to support
capsules (pods). A CRI runtime needs a CNI plugin, which is what
the "zun-cni-daemon" provides.
The configuration is based on the Zun installation guide [1].
It consists of the following steps (a rough Ansible sketch follows
the list):
* Configure the containerd daemon on the host. The "zun-compute"
container will use gRPC to communicate with this service.
* Install the "zun-cni" binary on the host. The containerd process
will invoke this binary to call the CNI plugin.
* Run a "zun-cni-daemon" container. The "zun-cni" binary will
communicate with this container via HTTP.
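A rough Ansible sketch of these steps (file names, paths and the
image variable are illustrative assumptions, not the exact tasks in
this change):

    - name: Configure the containerd daemon (zun-compute talks to it via gRPC)
      template:
        src: containerd-config.toml.j2
        dest: /etc/containerd/config.toml

    - name: Install the zun-cni binary invoked by containerd
      copy:
        src: zun-cni
        dest: /usr/local/bin/zun-cni
        mode: "0755"

    - name: Start the zun-cni-daemon container (zun-cni talks to it over HTTP)
      kolla_docker:
        action: start_container
        name: zun_cni_daemon
        image: "{{ zun_cni_daemon_image_full }}"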
Relevant patches:
Blueprint: https://blueprints.launchpad.net/zun/+spec/add-support-cri-runtime
Install guide: https://review.opendev.org/#/c/707948/
Devstack plugin: https://review.opendev.org/#/c/705338/
Kolla image: https://review.opendev.org/#/c/708273/
[1] https://docs.openstack.org/zun/latest/install/index.html
Depends-On: https://review.opendev.org/#/c/721044/
Change-Id: I9c361a99b355af27907cf80f5c88d97191193495
Both include_role and import_role expect the role's name to be given
via the "name" parameter instead of "role".
This worked, but caused errors with ansible-lint.
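For example:

    # Flagged by ansible-lint:
    - include_role:
        role: common

    # Preferred:
    - include_role:
        name: common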
See: https://review.opendev.org/694779
Change-Id: I388d4ae27111e430d38df1abcb6c6127d90a06e0
Kolla-Ansible Ceph deployment mechanism has been deprecated in Train [1].
This change removes the Ansible code and associated CI jobs.
[1]: https://review.opendev.org/669214
Change-Id: Ie2167f02ad2f525d3b0f553e2c047516acf55bc2
This was used for release detection, but that mechanism was removed in
I5610cc7729e9311709147ba5532199a033dfd156.
Change-Id: Ife43b707b7f75e2cd8cbefac87a75cce6a5045d4
This change applies the HAProxy tag to the entire play, ensuring HAProxy
configuration is generated for all services when the HAProxy tag is
specified.
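A sketch of the play-level tag in site.yml (the service shown is
illustrative):

    - name: Apply role glance
      hosts: glance
      tags:
        - glance
        - haproxy
      roles:
        - role: glance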
Change-Id: I67f57c831a713142d38c6e7b70f814a9ee8e5aae
Closes-Bug: #1855094
This patch adds initial support for deploying multiple Nova cells.
Splitting a nova-cell role out from the Nova role allows a more granular
approach to deploying and configuring Nova services.
A new enable_cells flag has been added that enables support for
multiple cells via the introduction of a super conductor in addition to
cell-specific conductors. When this flag is not set (the default), nova
is configured in the same manner as before - with a single conductor.
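For example, in globals.yml:

    # Deploy a super conductor plus cell-specific conductors.
    enable_cells: "yes"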
The nova role now deploys the global services:
* nova-api
* nova-scheduler
* nova-super-conductor (if enable_cells is true)
The nova-cell role handles services specific to a cell:
* nova-compute
* nova-compute-ironic
* nova-conductor
* nova-libvirt
* nova-novncproxy
* nova-serialproxy
* nova-spicehtml5proxy
* nova-ssh
This patch does not support using a single cell controller for managing
more than one cell. Support for sharing a cell controller will be added
in a future patch.
This patch should be backwards compatible and is tested by existing CI
jobs. A new CI job has been added that tests a multi-cell environment.
ceph-mon has been removed from the play hosts list as it is not
necessary - delegate_to does not require the host to be in the play.
Documentation will be added in a separate patch.
Partially Implements: blueprint support-nova-cells
Co-Authored-By: Mark Goddard <mark@stackhpc.com>
Change-Id: I810aad7d49db3f5a7fd9a2f0f746fd912fe03917
This commit follows up the work in Kolla by deploying and configuring
the Prometheus blackbox exporter.
An example blackbox-exporter module called os_endpoint has been added
(disabled by default). This allows for the probing of endpoints over
HTTP and HTTPS. It can be used to monitor that OpenStack endpoints
return a status code of either 200 or 300 and include the word
'versions' in the payload.
This change introduces a new variable `prometheus_blackbox_exporter_endpoints`.
Currently no defaults are specified because the configuration is heavily
dependent on the deployment.
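A sketch of the new variable (the entry format shown is an
illustrative assumption; adapt it to the deployment):

    prometheus_blackbox_exporter_endpoints:
      # Probe an internal Keystone endpoint with the os_endpoint module.
      - "keystone:os_endpoint:https://192.0.2.10:5000"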
Co-authored-by: Jack Heskett <Jack.Heskett@gresearch.co.uk>
Change-Id: I36ad4961078d90e2fd70c9a3368f5157d6fd89cd
Since we use the release name as the default tag when publishing images
to Dockerhub, we should also use it as the default here.
This change also removes support for the magic value "auto".
Change-Id: I5610cc7729e9311709147ba5532199a033dfd156
Closes-Bug: #1843518
The project has been retired and there will be no Train release [1].
This patch removes Neutron LBaaS support in Kolla.
[1] https://review.opendev.org/#/c/658494/
Change-Id: Ic0d3da02b9556a34d8c27ca21a1ebb3af1f5d34c
Qinling is an OpenStack project to provide "Function as a Service".
This project aims to provide a platform to support serverless functions.
Change-Id: I239a0130f8c8b061b531dab530d65172b0914d7c
Implements: blueprint ansible-qinling-support
Story: 2005760
Task: 33468
This patch implements support for the elasticsearch-exporter in
kolla-ansible.
The configuration and prechecks are reused from the other exporters.
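Enabling it follows the same pattern as the other exporters; a
globals.yml sketch (the flag name is an assumption based on that
pattern):

    enable_prometheus_elasticsearch_exporter: "yes"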
Depends-On: Id138f12e10102a6dd2cd8d84f2cc47aa29af3972
Change-Id: Iae0eac0179089f159804490bf71f1cf2c38dde54
Kolla Ansible does not yet support Cyborg, so add support for deploying it.
Implements: blueprint add-cyborg-to-kolla-ansible
Depends-On: I497e67e3a754fccfd2ef5a82f13ccfaf890a6fcd
Change-Id: I6f7ae86f855c5c64697607356d0ff3161f91b239
The Vitrage Collector service has been removed from Vitrage in change:
Ie713456b2df96e24d0b15d2362a666162bfb4300.
Change-Id: I45023940c1d2573bfed49d4ce3fac16ed2d559e4
Signed-off-by: Bartosz Zurkowski <b.zurkowski@samsung.com>
Co-Authored-By: Kien Nguyen <kiennt65@viettel.com.vn>
This patch implements initial support for the
openstack-exporter [0] in the kolla-ansible
prometheus monitoring system.
The configuration and prechecks are reused from the other
exporters, and a new template is provided for generating
an os-client-config file required by the exporter.
The default scrape interval is 60 seconds, but it can
be extended via a configuration option.
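A globals.yml sketch of overriding the interval (the variable name is
an illustrative assumption):

    prometheus_openstack_exporter_interval: "120s"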
[0] https://github.com/Linaro/openstack-exporter
Change-Id: I4a34c4bb56e74b5cd544972cbd6540d9acb6e4a1
Prior to this change, when the --limit argument is used, each host in the
limit gathers facts for every other host. This is clearly unnecessary, and
can result in up to (N-1)^2 fact gathers.
This change gathers facts for each host only once. Hosts that are not in
the limit are divided between those that are in the limit, and facts are
gathered via delegation.
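A minimal sketch of the delegated gather (the variable holding the
out-of-limit hosts is an illustrative assumption):

    - name: Gather facts for hosts outside the limit
      setup:
      delegate_to: "{{ item }}"
      delegate_facts: true
      with_items: "{{ hosts_outside_limit }}"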
This change also factors out the fact gathering logic into a separate
playbook that is imported where necessary.
Change-Id: I923df5af41a7f1b7b0142d0da185a9a0979be543
Currently, every service has a play in site.yml that is executed, and
the role is skipped if the service is disabled. This can be slow,
particularly with many hosts, since each play takes time to set up and
evaluate.
This change creates various Ansible groups for hosts with services
enabled at the beginning of the playbook. If a service is disabled, this
new group will have no hosts, and the play for that service will be a
noop.
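The grouping is done with group_by, keyed on each service's enable
flag; a sketch (the service shown is illustrative):

    - name: Group hosts based on enabled services
      hosts: all
      gather_facts: false
      tasks:
        - name: Group hosts by service enablement
          group_by:
            key: "enable_haproxy_{{ enable_haproxy | bool }}"

Plays for each service then target the generated group (e.g.
enable_haproxy_True), which is simply empty when the service is
disabled.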
I have tested this on a laptop using an inventory with 12 hosts (each
pointing to my laptop via SSH), and a config file that disables every
service. Time taken to run 'kolla-ansible deploy':
Before change: 2m30s
After change: 0m14s
During development I also tried an approach using an 'include_role' task
for each service. This was not as good, taking 1m00s.
The downsides to this patch are that there is a large number of tasks at
the beginning of the playbook to perform the grouping, and every play
for a disabled service now outputs this warning message:
[WARNING]: Could not match supplied host pattern, ignoring: enable_foo_True
This is because if the service is disabled, there are no hosts in the
group. This seems like a reasonable tradeoff.
Change-Id: Ie56c270b26926f1f53a9582d451f4bb2457fbb67
We copy-paste the same play into various playbooks to detect
openstack_release. This change factors that code into a separate
playbook that is imported.
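Consumers now import the shared playbook instead of duplicating the
play (the playbook name here is illustrative):

    - import_playbook: detect-openstack-release.yml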
Change-Id: I5fea005642b960080bf5e43455618dc24766c386
The Monasca Agent collects metrics, and in this change it is deployed
across the control plane. These metrics are collected into an OpenStack
project. The agent supports configuring a small number of plugins, which
can be extended in later commits. This change also makes the Monasca
Agent credentials available to other roles, such as the common role, to
allow forwarding logs to Monasca.
Partially-Implements: blueprint monasca-roles
Change-Id: I76b34fc5e1c76407a45fcf272268d5798b473ca2
Storm is required for running the Monasca thresholder component for
generating alerts.
Change-Id: I5e1ef74dc55a787293abbb3e629b5ab1ce5f4bbb
Partially-Implements: blueprint monasca-roles
Having all services in one giant haproxy file makes altering
configuration for a service both painful and dangerous. Each service
should be configured with a simple set of variables and rendered with a
single unified template.
Two new templates are available:
* haproxy_single_service_listen.cfg.j2: close to the original style, but
only one service per file
* haproxy_single_service_split.cfg.j2: using the newer haproxy syntax
for separated frontend and backend
For now the default will be the single listen block, for ease of
transition.
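As a sketch, a service might drive the unified template with a
variable block along these lines (the exact layout is an illustrative
assumption):

    glance_api_haproxy:
      glance_api:
        enabled: "{{ enable_glance }}"
        mode: "http"
        external: false
        port: "{{ glance_api_port }}"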
Change-Id: I6e237438fbc0aa3c89a3c8bd706a53b74e71904b