Change Ib225d76076782d695c9387e1c2693bae9a4521d7 introduced a new
upgrade task for monasca-thresh. Because the task is not restricted to
the correct group, it fails while trying to stop monasca_thresh on hosts
not running this container.
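For reference, the restriction amounts to something along these
lines (a sketch only; the container name is taken from the message,
the task wording and group name are assumptions):

    - name: Stopping monasca_thresh container
      become: true
      kolla_docker:
        action: stop_container
        common_options: "{{ docker_common_options }}"
        name: monasca_thresh
      when: inventory_hostname in groups['monasca-thresh']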
Change-Id: I33c2c458a98145315b0de0c069f13b83f59622eb
Closes-Bug: #1952408
monasca-thresh currently runs a local copy of Storm
to handle the threshold topology. However, it doesn't set up
the environment correctly, and the executable fails, causing
the container to restart continually.
This patch updates the container command to correctly
submit the topology to the running Apache Storm. The
container will exit after it finishes the submission,
so the restart_policy is updated to on-failure; this way,
if Storm is temporarily unavailable, the submission
will be retried. (NOTE: further deploys will see the
container as "changed", as it won't be running.)
The patch uses KOLLA_BOOTSTRAP to make the container
check whether the topology has already been submitted and, if so,
skip the submission command so the container doesn't fail.
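Roughly, the submission container is defined along these lines
(a sketch only; option names follow the usual kolla_docker
conventions and are not copied verbatim from the patch):

    - name: Submit thresh topology to Storm
      become: true
      kolla_docker:
        action: recreate_or_restart_container
        common_options: "{{ docker_common_options }}"
        name: monasca_thresh
        image: "{{ monasca_thresh_image_full }}"
        restart_policy: on-failure
        environment:
          KOLLA_BOOTSTRAP:   # skip re-submission if the topology already exists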
The config task now triggers a new reconfigure handler that
spawns a one-shot container to replace any existing topology
if the configuration has changed.
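The handler is, in spirit, a one-shot container run (again a
sketch; the task name and options are assumptions):

    - name: Replace monasca-thresh topology
      become: true
      kolla_docker:
        action: start_container
        common_options: "{{ docker_common_options }}"
        name: monasca_thresh_reconfigure
        image: "{{ monasca_thresh_image_full }}"
        detach: false
        remove_on_exit: true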
Also, all the storm.* variables in storm.yml.j2 are
removed, as they were only needed for local mode and
make submitted topologies fail to load when Storm
is restarted (the referenced directories are not mounted
on nimbus).
Depends-On: https://review.opendev.org/c/openstack/kolla/+/792751
Closes-Bug: #1808805
Change-Id: Ib225d76076782d695c9387e1c2693bae9a4521d7
In the Xena cycle it was decided to remove the Monasca
Grafana fork due to lack of maintenance. This commit removes
the service and provides a limited workaround using the
Monasca Grafana datasource with vanilla Grafana.
Depends-On: I9db7ec2df050fa20317d84f6cea40d1f5fd42e60
Change-Id: I4917ece1951084f6665722ba9a91d47764d3709a
Historically, Monasca Log Transformer has been used for log
standardisation and processing. For example, logs from different
sources may use slightly different error levels such as WARN, 5,
or WARNING. Monasca Log Transformer is a place where these could
be 'squashed' into a single error level to simplify log searches
based on labels such as these.
However, in Kolla Ansible, we do this processing in Fluentd so
that the simpler Fluentd -> Elastic -> Kibana pipeline also
benefits. This helps to avoid spreading out log parsing
configuration over many services, with the Fluentd Monasca output
plugin being yet another potential place for processing (which
should be avoided). It therefore makes sense to remove this
service entirely, and squash any existing configuration which
can't be moved to Fluentd into the Log Persister service. I.e.,
by removing this pipeline, we don't lose any functionality,
we encourage log processing to take place in Fluentd, or at least
outside of Monasca, and we make significant gains in efficiency
by removing a topic from Kafka which contains a copy of all logs
in transit.
Finally, users forwarding logs from outside the control plane,
e.g. from tenant instances, should be encouraged to process the
logs at the point of sending using whichever framework they are
forwarding them with. This makes sense, because all Logstash
configuration in Monasca is only accessible by control plane
admins. A user can't typically do any processing inside Monasca,
with or without this change.
Change-Id: I65c76d0d1cd488725e4233b7e75a11d03866095c
The task "Stopping all Monasca Grafana instances but the first node"
can fail with:
error while evaluating conditional (monasca_grafana_differs['result']): 'dict object' has no attribute 'result'
This is fixed by running this task on the same set of hosts as the
task defining monasca_grafana_differs, i.e. groups['monasca-grafana'].
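The corrected pattern is roughly as follows (the variable and group
names are from the message; the rest of the task is assumed):

    - name: Stopping all Monasca Grafana instances but the first node
      become: true
      kolla_docker:
        action: stop_container
        common_options: "{{ docker_common_options }}"
        name: monasca_grafana
      when:
        - inventory_hostname in groups['monasca-grafana']
        - inventory_hostname != groups['monasca-grafana'][0]
        - monasca_grafana_differs['result']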
Change-Id: I6ad0256fb2a3cdc91dddf441e5e1c41f4ac69017
Closes-Bug: #1907689
Config plays do not need to check containers. This avoids skipping
tasks during the genconfig action.
Ironic and Glance rolling upgrades are handled specially.
Swift and Bifrost do not use the handlers at all.
Partially-Implements: blueprint performance-improvements
Change-Id: I140bf71d62e8f0932c96270d1f08940a5ba4542a
Including tasks has a performance penalty when compared with importing
tasks. If the include has a condition associated with it, then the
overhead of the include may be lower than the overhead of skipping all
imported tasks. In the case of the register.yml and bootstrap.yml
includes, all of the tasks in the included file use run_once: True.
The run_once flag improves performance at scale drastically, so
importing these tasks unconditionally will have a lower overhead than a
conditional include task. It therefore makes sense to switch to
import_tasks there.
See [1] for benchmarks of run_once.
[1] https://github.com/stackhpc/ansible-scaling/blob/master/doc/run-once.md
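Illustration of the change (the condition on the include is
hypothetical; the point is that the imported files rely on
run_once internally):

    # Before: a conditional include, evaluated on every host
    - include_tasks: register.yml
      when: some_condition | bool   # hypothetical condition

    # After: an unconditional import; the imported tasks use run_once
    - import_tasks: register.yml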
Change-Id: Ic67631ca3ea3fb2081a6f8978e85b1522522d40d
Partially-Implements: blueprint performance-improvements
Including tasks has a performance penalty when compared with importing
tasks. If the include has a condition associated with it, then the
overhead of the include may be lower than the overhead of skipping all
imported tasks. For unconditionally included tasks, switching to
import_tasks provides a clear benefit.
Benchmarking of include vs. import is available at [1].
This change switches from include_tasks to import_tasks where there is
no condition applied to the include.
[1] https://github.com/stackhpc/ansible-scaling/blob/master/doc/include-and-import.md#task-include-and-import
Partially-Implements: blueprint performance-improvements
Change-Id: Ia45af4a198e422773d9f009c7f7b2e32ce9e3b97
This fixes an issue where multiple Grafana instances would race
to bootstrap the Grafana DB. The following changes are made:
- Only start additional Grafana instances after the DB has been
configured.
- During upgrade, don't allow old instances to run with an
upgraded DB schema.
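In outline, the first point looks something like this (a sketch
only; the group, container and image names are assumptions):

    - name: Restart first grafana container
      become: true
      kolla_docker:
        action: recreate_or_restart_container
        common_options: "{{ docker_common_options }}"
        name: grafana
        image: "{{ grafana_image_full }}"
      when: inventory_hostname == groups['grafana'][0]

    # (typically a wait-for-Grafana-ready task sits between these two)

    - name: Restart remaining grafana containers
      become: true
      kolla_docker:
        action: recreate_or_restart_container
        common_options: "{{ docker_common_options }}"
        name: grafana
        image: "{{ grafana_image_full }}"
      when: inventory_hostname != groups['grafana'][0]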
Change-Id: I3e0e077ba6a6f43667df042eb593107418a06c39
Closes-Bug: #1888681
The Monasca Log API has been removed and in this change we switch
to using the unified API. If dedicated log APIs are required then
this can be supported through configuration. Out of the box the
Monasca API is used for both logs and metrics, which is envisaged to
work for most use cases.
In order to use the unified API for logs, we need to disable the
legacy Kafka client. We also rename the Monasca API config file
to remove a warning about using the old style name.
Depends-On: https://review.opendev.org/#/c/728638
Change-Id: I9b6bf5b6690f4b4b3445e7d15a40e45dd42d2e84
Currently the database is only synced during deployment. This change
performs the sync during upgrade as well.
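Conceptually, the role's upgrade.yml gains the same sync step as
deploy (a sketch; the task file names are assumptions):

    # upgrade.yml: run the DB sync that previously only ran on deploy
    - import_tasks: bootstrap_service.yml

    - import_tasks: deploy.yml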
Change-Id: Ia45fc733a1ab69de9d4762f5d9c8767041eeaed3
Closes-Bug: #1832020
Deploys the Monasca API with mod_wsgi + Apache.
Co-Authored-By: Mark Goddard <mark@stackhpc.com>
Partially-Implements: blueprint monasca-roles
Change-Id: I3e03762217fbef1fb0cbff6239abb109cbec226b