53 Commits

Author SHA1 Message Date
Zuul
c0540760e0 Merge "monasca-thresh: Fix topology submission to storm" 2021-08-10 10:59:17 +00:00
Mark Goddard
ade5bfa302 Use ansible_facts to reference facts
By default, Ansible injects a variable for every fact, prefixed with
ansible_. This can result in a large number of variables for each host,
which at scale can incur a performance penalty. Ansible provides a
configuration option [0] that can be set to False to prevent this
injection of facts. In this case, facts should be referenced via
ansible_facts.<fact>.

This change updates all references to Ansible facts within Kolla Ansible
from using individual fact variables to using the items in the
ansible_facts dictionary. This allows users to disable fact variable
injection in their Ansible configuration, which may provide some
performance improvement.

This change disables fact variable injection in the ansible
configuration used in CI, to catch any attempts to use the injected
variables.

[0] https://docs.ansible.com/ansible/latest/reference_appendices/config.html#inject-facts-as-vars
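
As a minimal sketch of the style change (task and fact names here are only illustrative):

  # Before: the injected per-fact variable
  - debug:
      msg: "{{ ansible_os_family }}"

  # After: the same fact via the ansible_facts dictionary
  - debug:
      msg: "{{ ansible_facts.os_family }}"

  # ansible.cfg used in CI (disables the injected variables):
  #   [defaults]
  #   inject_facts_as_vars = False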

Change-Id: I7e9d5c9b8b9164d4aee3abb4e37c8f28d98ff5d1
Partially-Implements: blueprint performance-improvements
2021-06-23 10:38:06 +01:00
Scott Shambarger
aea9bf3550 monasca-thresh: Fix topology submission to storm
monasca-thresh currently runs a local copy of Storm
to handle the threshold topology. However, it doesn't set up
the environment correctly, and the executable fails, causing
the container to continually restart.

This patch updates the container command to correctly
submit the topology to the running Apache Storm cluster. The
container exits after it finishes the submission,
so the restart_policy is updated to on-failure; this way,
if Storm is temporarily unavailable, the submission
will be retried. (NOTE: further deploys will see the
container as "changed" because it won't be running.)

The patch uses KOLLA_BOOTSTRAP to make the container
check whether the topology is already submitted, and if so skip
the submission command so the container doesn't fail.

The config task now triggers a new reconfigure handler that
spawns a one-shot container to replace any existing topology
if the configuration has changed.

Also, all the storm.* variables in storm.yml.j2 are
removed, as they were only needed for local mode and
made submitted topologies fail to load when Storm
is restarted (the referenced directories are not mounted
on the Nimbus host).
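
A rough sketch of the resulting service definition (field values are illustrative, not the exact role defaults):

  monasca-thresh:
    container_name: monasca_thresh
    image: "{{ monasca_thresh_image_full }}"
    # the container now exits once the topology has been submitted,
    # so restart only on failure instead of unless-stopped
    restart_policy: on-failure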

Depends-On: https://review.opendev.org/c/openstack/kolla/+/792751
Closes-Bug: #1808805
Change-Id: Ib225d76076782d695c9387e1c2693bae9a4521d7
2021-06-06 13:41:29 -07:00
Doug Szumski
82cf40edf2 Remove Monasca Grafana service
In the Xena cycle it was decided to remove the Monasca
Grafana fork due to lack of maintenance. This commit removes
the service and provides a limited workaround using the
Monasca Grafana datasource with vanilla Grafana.

Depends-On: I9db7ec2df050fa20317d84f6cea40d1f5fd42e60
Change-Id: I4917ece1951084f6665722ba9a91d47764d3709a
2021-04-27 11:06:25 +00:00
Doug Szumski
444097848c Support disabling Monasca alerting pipeline
The Monasca alerting pipeline provides multi-tenant alerting and
notifications. It runs as an Apache Storm topology and generally
places a significant memory and CPU burden on monitoring hosts,
particularly when there are a lot of metrics. This is fine if the
alerting service is in use, but sometimes it is not. For example,
you may use Prometheus for monitoring the control plane and
wish to offer tenants a monitoring service via Monasca without
alerting and notification functionality. In this case it makes
sense to disable this part of the Monasca pipeline, and this patch
adds support for that.

If the service is ever re-enabled, all alerts and notifications
should reappear automatically, since they are persisted in the
central MySQL database cluster.
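
For example, in globals.yml (the variable name shown here is assumed):

  monasca_enable_alerting_pipeline: "no"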

Change-Id: I84aa04125c621712f805f41c8efbc92c8e156db9
2021-03-04 09:19:44 +00:00
Doug Szumski
a52d661219 Disable Monasca Log Metrics service by default
The Log Metrics service is an admin-only service. We now have
support in Fluentd via the Prometheus plugin to create metrics
from logs. These metrics can be scraped into Monasca or Prometheus.
It therefore makes sense to deprecate this service, starting by
disabling it by default, and then removing it in the Xena release.
This should improve the stability of the Monasca metrics pipeline
by ensuring that all metrics pass via the Monasca API for
validation, and ensure that metrics generated from logs are
available to both Prometheus and Monasca users by default.

Change-Id: I704feb4434c1eece3eb00c19dc5f934fd4bc27b4
2021-03-03 17:20:18 +00:00
Doug Szumski
0743a9bf4b Remove Monasca Log Transformer
Historically Monasca Log Transformer has been for log
standardisation and processing. For example, logs from different
sources may use slightly different error levels such as WARN, 5,
or WARNING. Monasca Log Transformer is a place where these could
be 'squashed' into a single error level to simplify log searches
based on labels such as these.

However, in Kolla Ansible, we do this processing in Fluentd so
that the simpler Fluentd -> Elastic -> Kibana pipeline also
benefits. This helps to avoid spreading out log parsing
configuration over many services, with the Fluentd Monasca output
plugin being yet another potential place for processing (which
should be avoided). It therefore makes sense to remove this
service entirely, and squash any existing configuration which
can't be moved to Fluentd into the Log Persister service. I.e.,
by removing this pipeline we don't lose any functionality,
we encourage log processing to take place in Fluentd, or at least
outside of Monasca, and we make significant gains in efficiency
by removing a topic from Kafka which contains a copy of all logs
in transit.

Finally, users forwarding logs from outside the control plane,
e.g. from tenant instances, should be encouraged to process the
logs at the point of sending using whichever framework they are
forwarding them with. This makes sense, because all Logstash
configuration in Monasca is only accessible by control plane
admins. A user can't typically do any processing inside Monasca,
with or without this change.

Change-Id: I65c76d0d1cd488725e4233b7e75a11d03866095c
2021-03-03 17:20:18 +00:00
Doug Szumski
e689f951f4 Support explicit creation of Monasca Kafka topics
With this patch, Monasca no longer relies on automatic topic creation
in Kafka, and instead pre-creates all topics before bringing up the
containers. If the topic already exists then it will not be
changed, therefore existing users are not affected.

This patch allows per topic customisations, such as increasing the
number of partitions on particular topics and also works around
a race condition in automatic topic creation where multiple instances
of the same service could race to create a topic causing some of the
services to restart and throw an error before resuming normal
operation.
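
The per-topic definitions enabled by this change could look roughly like this (names and values are illustrative):

  monasca_all_topics:
    - name: "metrics"
      partitions: 30
      replication_factor: 2
    - name: "alarm-state-transitions"
      partitions: 12
      replication_factor: 2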

Change-Id: Ib15c95bb72cf79e9e55945d757b248e06f5f4065
2021-01-11 09:47:31 +00:00
Rafael Weingärtner
f425c0678f Standardize use and construction of endpoint URLs
The goal of this change is to normalize the construction and use
of internal, external, and admin URLs. While extending Kolla-ansible
to enable a more flexible method to manage external URLs, we noticed
that the same URL was constructed multiple times in different parts
of the code. This makes it difficult for people who want to work
with these URLs and creates inconsistencies in a large code base
over time. Therefore, we propose using a single Kolla-ansible
variable per endpoint URL, which makes it easier for people to
override or extend these URLs.

As an example, we extended Kolla-ansible to facilitate overriding
public (external) URLs with the standard
"<component/serviceName>.<companyBaseUrl>".
The "NAT/redirect" in the SSL termination system (HAProxy,
HTTPD or some other) is then done via the service name, not by the
port. This allows operators to easily and automatically create
friendlier URL names. To develop this feature, we first applied the
patch that we are now sending to the community, in order to reduce
the surface of changes in Kolla-ansible.

Another example is the integration of Kolla-ansible and Consul, which
we also implemented internally and which also requires URL changes.
This change is therefore essential to reduce code duplication and to
make it easier for users and developers to customize the service URLs.
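
As an illustration of the pattern (the variable names are indicative only):

  # one overridable variable per endpoint
  monasca_api_public_endpoint: "{{ public_protocol }}://{{ kolla_external_fqdn }}:{{ monasca_api_port }}"
  # an operator can then override it wholesale, e.g.
  # monasca_api_public_endpoint: "https://monasca.example.com"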

Change-Id: I73d483e01476e779a5155b2e18dd5ea25f514e93
Signed-off-by: Rafael Weingärtner <rafael@apache.org>
2020-08-19 07:22:17 +00:00
Mark Goddard
146b00efa7 Mount /etc/timezone based on host OS
Previously we mounted /etc/timezone if the kolla_base_distro is debian
or ubuntu. This would fail prechecks if debian or ubuntu images were
deployed on CentOS. While this is not a supported combination, for
correctness we should fix the condition to reference the host OS rather
than the container OS, since that is where the /etc/timezone file is
located.
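
Roughly, the condition changes along these lines (a sketch; the exact task and fact reference may differ):

  # before: based on the container image distro
  when: kolla_base_distro in ['debian', 'ubuntu']

  # after: based on the host OS
  when: ansible_os_family == 'Debian'   # or ansible_facts.os_family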

Change-Id: Ifc252ae793e6974356fcdca810b373f362d24ba5
Closes-Bug: #1882553
2020-08-10 10:14:18 +01:00
Bartosz Bezak
17d8332604 Logstash 6 support
Co-Authored-By: Doug Szumski <doug@stackhpc.com>
Closes-Bug: #1884090
Depends-On: https://review.opendev.org/#/c/736768

Change-Id: If2d0dd1739e484b14e3c15a185a236918737b0ab
2020-07-15 08:54:53 +00:00
Doug Szumski
b39a0f805a Switch to Monasca API for logs
The Monasca Log API has been removed and in this change we switch
to using the unified API. If dedicated log APIs are required then
this can be supported through configuration. Out of the box the
Monasca API is used for both logs and metrics which is envisaged to
work for most use cases.

In order to use the unified API for logs, we need to disable the
legacy Kafka client. We also rename the Monasca API config file
to remove a warning about using the old style name.

Depends-On: https://review.opendev.org/#/c/728638
Change-Id: I9b6bf5b6690f4b4b3445e7d15a40e45dd42d2e84
2020-05-23 17:49:32 +01:00
Dincer Celik
4b5df0d866 Introduce /etc/timezone to Debian/Ubuntu containers
Some services look for /etc/timezone on Debian/Ubuntu, so we should
introduce it to the containers.

In addition, added prechecks for /etc/localtime and /etc/timezone.

Closes-Bug: #1821592
Change-Id: I9fef14643d1bcc7eee9547eb87fa1fb436d8a6b3
2020-04-09 18:53:36 +00:00
James Kirsch
256322a8fe Construct service configuration urls using kolla_internal_fqdn
Service configuration URLs should be constructed using
kolla_internal_fqdn instead of kolla_internal_vip_address. Otherwise,
SSL validation will fail when certificates are issued using domain
names.

Change-Id: I21689e22870c2f6206e37c60a3c33e19140f77ff
Closes-Bug: 1862419
2020-02-22 08:28:01 -08:00
Mark Goddard
9755c924be CentOS 8: Support variable image tag suffix
For the CentOS 7 to 8 transition, we will have a period where both
CentOS 7 and 8 images are available. We differentiate these images via a
tag - the CentOS 8 images will have a tag of train-centos8 (or
master-centos8 temporarily).

To achieve this, and maintain backwards compatibility for the
openstack_release variable, we introduce a new 'openstack_tag' variable.
This variable is based on openstack_release, but has a suffix of
'openstack_tag_suffix', which is empty except on CentOS 8 where it has a
value of '-centos8'.
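
A sketch of the resulting variables (close to, though not necessarily identical to, the actual defaults):

  openstack_release: "train"
  openstack_tag_suffix: ""          # "-centos8" on CentOS 8 hosts
  openstack_tag: "{{ openstack_release }}{{ openstack_tag_suffix }}"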

Change-Id: I12ce4661afb3c255136cdc1aabe7cbd25560d625
Partially-Implements: blueprint centos-rhel-8
2020-01-10 09:56:04 +00:00
Radosław Piliszek
bc053c09c1 Implement IPv6 support in the control plane
Introduce kolla_address filter.
Introduce put_address_in_context filter.

Add AF config to vars.

Address contexts:
- raw (default): <ADDR>
- memcache: inet6:[<ADDR>]
- url: [<ADDR>]
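
For illustration, assuming api_addr holds an IPv6 address (filter usage sketched, not exact):

  raw:      "{{ api_addr }}"                                        # fd00::10
  memcache: "{{ api_addr | put_address_in_context('memcache') }}"   # inet6:[fd00::10]
  url:      "{{ api_addr | put_address_in_context('url') }}"        # [fd00::10]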

Other changes:

globals.yml - mention just IP in comment

prechecks/port_checks (api_intf) - kolla_address handles validation

3x interface conditional (swift configs: replication/storage)

2x interface variable definition with hostname
(haproxy listens; api intf)

1x interface variable definition with hostname with bifrost exclusion
(baremetal pre-install /etc/hosts; api intf)

neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on tunnel network

basic multinode source CI job for IPv6

prechecks for rabbitmq and qdrouterd use proper NSS database now

MariaDB Galera Cluster WSREP SST mariabackup workaround
(socat and IPv6)

Ceph naming workaround in CI
TODO: probably needs documenting

RabbitMQ IPv6-only proto_dist

Ceph ms switch to IPv6 mode

Remove neutron-server ml2_type_vxlan/vxlan_group setting
as it is not used (let's avoid any confusion)
and could break setups without proper multicast routing
if it started working (also IPv4-only)

haproxy upgrade checks for slaves based on ipv6 addresses

TODO:

ovs-dpdk grabs ipv4 network address (w/ prefix len / submask)
not supported, invalid by default because neutron_external has no address
No idea whether ovs-dpdk works at all atm.

ml2 for xenapi
Xen is not supported too well.
This would require working with XenAPI facts.

rp_filter setting
This would require meddling with ip6tables (there is no sysctl param).
By default nothing is dropped.
Unlikely we really need it.

ironic dnsmasq is configured IPv4-only
dnsmasq needs DHCPv6 options and testing in vivo.

KNOWN ISSUES (beyond us):

One cannot use IPv6 address to reference the image for docker like we
currently do, see: https://github.com/moby/moby/issues/39033
(docker_registry; docker API 400 - invalid reference format)
workaround: use hostname/FQDN

RabbitMQ may fail to bind to IPv6 if hostname resolves also to IPv4.
This is due to old RabbitMQ versions available in images.
IPv4 is preferred by default and may fail in the IPv6-only scenario.
This should be no problem in real life as IPv6-only is indeed IPv6-only.
Also, when new RabbitMQ (3.7.16/3.8+) makes it into images, this will
no longer be relevant as we supply all the necessary config.
See: https://github.com/rabbitmq/rabbitmq-server/pull/1982

For reliable runs, at least Ansible 2.8 is required (2.8.5 confirmed
to work well). Older Ansible versions are known to miss IPv6 addresses
in interface facts. This may affect redeploys, reconfigures and
upgrades which run after VIP address is assigned.
See: https://github.com/ansible/ansible/issues/63227

Bifrost Train does not support IPv6 deployments.
See: https://storyboard.openstack.org/#!/story/2006689

Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
Implements: blueprint ipv6-control-plane
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
2019-10-16 10:24:35 +02:00
Mark Goddard
741f6d9be9 Create and grant all keystone roles in service-ks-register
This ensures we execute the keystone os_* modules in one place.

Also rework some of the task names and loop item display.

Change-Id: I6764a71e8147410e7b24b0b73d0f92264f45240c
2019-09-24 08:50:04 +01:00
Mark Goddard
3522d235bd Refactor service, endpoint and user registration
Use upstream Ansible modules for registration of services, endpoints,
users, projects, roles, and role grants.

Change-Id: I7c9138d422cc91c177fd8992347176bb54156b5a
2019-09-17 10:13:56 -07:00
Isaac Prior
ff8c24d62a Removes monasca_grafana persistent volume
The monasca_grafana docker volume currently persists across container
builds, causing changes to installed plugins during build to be ignored.
This change deletes the volume entirely and forces plugin changes to be
applied via rebuild.

Change-Id: I36e62235a085e5c1955fdb5ae31f603be8ba69bf
2019-08-19 15:17:30 +01:00
Zuul
8f70bc22d6 Merge "Add extra volumes support for services that were not previously supported" 2019-08-05 09:02:04 +00:00
Rafael Weingärtner
97cb30cdd8 Cloudkitty InfluxDB Storage backend via Kolla-ansible
This proposal adds support to Kolla-Ansible for deploying the
Cloudkitty InfluxDB storage backend. Support for InfluxDB as the
storage backend for Cloudkitty was introduced with the following
commit:
https://github.com/openstack/cloudkitty/commit/c4758e78b49386145309a44623502f8095a2c7ee

Problem Description
===================

With the addition of support for InfluxDB in Cloudkitty, which is
achieving general availability via Stein release, we need a method to
easily configure/support this storage backend system via Kolla-ansible.

Kolla-ansible is already able to deploy and configure an InfluxDB
system. Therefore, this proposal will use the InfluxDB deployment
configured via Kolla-ansible to connect to CloudKitty and use it as a
storage backend.

If we do not provide a method for users (operators) to manage the
Cloudkitty storage backend via Kolla-ansible, they have to apply
these changes/configurations manually (or via some other set of
automated scripts), which creates a distributed set of configuration
files and configuration scripts with different versioning schemes
and life cycles.

Proposed Change
===============

Architecture
------------

We propose a flag that users can use to make Kolla-ansible configure
CloudKitty to use InfluxDB as the storage backend system. When
enabling this flag, Kolla-ansible will also enable the deployment of
the InfluxDB via Kolla-ansible automatically.

CloudKitty will be configured according to [1] and [2]. We will also
externalize the "retention_policy", "use_ssl", and "insecure"
options to allow fine-grained configuration by operators. All of
these options are only used when explicitly set; otherwise, the
default value/behavior defined in Cloudkitty is used. Moreover, when
"use_ssl" is set to "true", the user will be able to set "cafile" to
a custom trusted CA file. Again, if these variables are not set, the
defaults in Cloudkitty are used.

Implementation
--------------
We need to introduce a new variable called
`cloudkitty_storage_backend`. Valid options are `sqlalchemy` or
`influxdb`. The default value in Kolla-ansible is `sqlalchemy` for
backward compatibility. Then, the first step is to change the
definition of the following variable:
`/ansible/group_vars/all.yml:enable_influxdb: "{{ enable_monasca | bool }}"`

We also need to enable InfluxDB when CloudKitty is configured to use
it as the storage backend. Afterwards, we need to create tasks in the
CloudKitty role to create the InfluxDB schema and adjust the
configuration files accordingly.
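
A sketch of the intended configuration surface (values illustrative):

  # globals.yml
  cloudkitty_storage_backend: "influxdb"    # default: "sqlalchemy"
  # enabling the InfluxDB backend should also pull in the InfluxDB service, e.g.
  enable_influxdb: "{{ enable_monasca | bool or cloudkitty_storage_backend == 'influxdb' }}"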

Alternatives
------------
The alternative would be to apply the configuration manually or
handle it via a different set of scripts and configuration files,
which can become cumbersome over time.

Security Impact
---------------
None identified by the author of this spec

Notifications Impact
--------------------
Operators that are already deploying CloudKitty with InfluxDB as the
storage backend would need to convert their configuration to
Kolla-ansible (if they wish to adopt Kolla-ansible to execute these
tasks).

Also, deployments (OpenStack environments) that were created with
Cloudkitty using storage v1 will need to migrate all of their data to
V2 before enabling InfluxDB as the storage system.

Other End User Impact
---------------------
None.

Performance Impact
------------------
None.

Other Deployer Impact
---------------------
New configuration options will be available for CloudKitty.
* cloudkitty_storage_backend
* cloudkitty_influxdb_retention_policy
* cloudkitty_influxdb_use_ssl
* cloudkitty_influxdb_cafile
* cloudkitty_influxdb_insecure_connections
* cloudkitty_influxdb_name

Developer Impact
----------------
None

Implementation
==============

Assignee
--------
* `Rafael Weingärtner <rafaelweingartne>`

Work Items
----------
 * Extend InfluxDB "enable/disable" variable
 * Add new tasks to configure Cloudkitty according to the new
 variables presented above
 * Write documentation and release notes

Dependencies
============
None

Documentation Impact
====================
New documentation for the feature.

References
==========
[1] `https://docs.openstack.org/cloudkitty/latest/admin/configuration/storage.html#influxdb-v2`
[2] `https://docs.openstack.org/cloudkitty/latest/admin/configuration/collector.html#metric-collection`

Change-Id: I65670cb827f8ca5f8529e1786ece635fe44475b0
Signed-off-by: Rafael Weingärtner <rafael@apache.org>
2019-07-02 11:14:05 -03:00
ZijianGuo
e610a73e98 Add extra volumes support for services that were not previously supported
Patch [1] did not add extra volumes support for all services.
In order to unify the management of volumes, this change adds extra
volumes support for the remaining services.

[1] 12ff28a693
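
For example, an operator could then add something like the following (the variable name is indicative of the pattern only):

  monasca_api_extra_volumes:
    - "/etc/custom-ca:/etc/custom-ca:ro"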

Change-Id: Ie148accdd8e6c60df6b521d55bda12b850c0d255
Partially-Implements: blueprint support-extra-volumes
Signed-off-by: ZijianGuo <guozijn@gmail.com>
2019-06-27 18:32:15 +08:00
Doug Szumski
76e98472f4 Supporting monitoring time synchronisation with Monasca
This plugin is useful for monitoring host clock synchronisation with
an NTP reference. If the delta becomes too large, the metrics from
this plugin can be used to trigger an alarm.

Change-Id: Id1fe6d7c823f8404c19c81ccdeb8b311bcb46e47
2019-06-07 10:54:50 +01:00
Doug Szumski
75d095b64e Automatically configure Monasca Grafana datasource
In Kolla, an OpenStack project is created to store logs and metrics
harvested from the control plane by Monasca. This commit enables
the Monasca Datasource in the Grafana organisation which maps to
this OpenStack control plane project. What this means in practice
is that if a user logs into Monasca Grafana and has access to the
control plane project, they will immediately be able to create
dashboards using data from Monasca which has been gathered from the
control plane.

Support to enable creation of this datasource for other OpenStack
projects can be added in a separate commit.

Partially-Implements: blueprint monasca-grafana
Change-Id: I03e741ddb1c582b7280c64637ed3e3683df6419b
2018-11-07 20:24:41 +00:00
Doug Szumski
712c89760c Add support for deploying Monasca Grafana
The Monasca Grafana fork allows users to log into Grafana with their
OpenStack user credentials and see metrics associated with their
OpenStack project. The long term goal is to enable Keystone support
in upstream Grafana, but this work seems to have stalled.

Partially-Implements: blueprint monasca-grafana
Change-Id: Icc04613b2571c094ae23b66d0bcc38b58c0ee4e1
2018-11-02 13:35:35 +00:00
Doug Szumski
6cbb5cbdb4 Support using external DBs in Monasca
This change allows the user to configure a Monasca database
which may be different from the default database.

Partially-Implements: blueprint monasca-roles
Change-Id: Ia905190b8037ecb1782a758c0b65581fe9024bf6
2018-11-02 13:04:06 +00:00
Doug Szumski
843b7a3a9c Add missing project name for Monasca
Before this change the HAProxy configuration for Monasca printed:
  "Copying over haproxy haproxy config".
This change fixes the message.

TrivialFix
Change-Id: I2823775f6fa5b0be919b567f311c671f1469d19f
2018-11-02 13:04:06 +00:00
Doug Szumski
8cd2a793b4 Add initial documentation for Monasca
Partially-Implements: blueprint monasca-roles
Change-Id: I19db9396bb53b6db77cf97d7969ca24a55d8db0e
2018-11-02 13:04:05 +00:00
Doug Szumski
b7b45effed Support deploying the Monasca Agent
The Monasca Agent collects metrics and in this change is deployed
across the control plane. These metrics are collected into an OpenStack
project. It supports configuring a small number of plugins, which can
be extended in later commits. It also makes the Monasca Agent credentials
available to other roles, such as the common role to allow forwarding
logs to Monasca.

Partially-Implements: blueprint monasca-roles
Change-Id: I76b34fc5e1c76407a45fcf272268d5798b473ca2
2018-11-02 13:04:05 +00:00
Zuul
5a719217e4 Merge "Support using binary Logstash image" 2018-10-09 19:34:35 +00:00
Doug Szumski
0992c6b55a Support using binary Logstash image
The Monasca images need to be built from source; however, this does
not apply to Logstash. Logstash does not differentiate between binary
and source install types, so we can just use the default.

Partially-Implements: blueprint monasca-roles
Change-Id: I7eb355f25864f2b993780cd2b000c8bcd13f78e8
2018-10-09 16:04:51 +01:00
Zuul
867fe4c6b3 Merge "Improve registration of Monasca InfluxDB database" 2018-10-05 08:14:43 +00:00
Zuul
77d7ab20c2 Merge "Support configuring Monasca Persister performance" 2018-10-03 14:20:06 +00:00
Zuul
9355f17f2d Merge "Support deploying Monasca Persister" 2018-10-03 14:19:50 +00:00
Zuul
31a64d3543 Merge "Add some missing parameters for Monasca Notification" 2018-10-03 14:10:23 +00:00
Zuul
5d2d270eee Merge "Support deploying Monasca Notification engine" 2018-10-03 14:07:40 +00:00
Zuul
c969dac19d Merge "Support deploying Monasca Thresh" 2018-10-03 14:07:39 +00:00
Doug Szumski
2af1d1d95e Improve registration of Monasca InfluxDB database
Monasca is not yet compatible with InfluxDB > 1.1.10, which means
that the official Ansible modules for managing InfluxDB don't work [1].
We therefore fall back to manual commands to register the database
and a default retention policy.

[1] https://github.com/influxdata/influxdb-python#influxdb-pre-v110-users
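
The fallback is roughly of this shape (a sketch only; host and retention values are illustrative):

  - name: Create Monasca InfluxDB database
    command: >
      influx -host {{ monasca_influxdb_address | default('127.0.0.1') }}
      -execute 'CREATE DATABASE monasca'

  - name: Create default retention policy
    command: >
      influx -host {{ monasca_influxdb_address | default('127.0.0.1') }}
      -execute 'CREATE RETENTION POLICY "default" ON "monasca" DURATION 1w REPLICATION 1 DEFAULT'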

Partially-Implements: blueprint monasca-roles
Change-Id: I59ceda1e7a6e945b13872089011045db04548b94
2018-09-26 10:54:43 +00:00
Doug Szumski
9c2e0b81d5 Support configuring Monasca Persister performance
On a single node deployment, the Monasca persister can
limit the rate at which Monasca can persist metrics to
InfluxDB. Increasing the thread count can remove this
bottleneck.

Partially-Implements: blueprint monasca-roles
Change-Id: I763a5ae6aa8c8ab3bf766ab5b58c386da74a188b
2018-09-26 10:54:43 +00:00
Doug Szumski
fddbbbbdc4 Support deploying Monasca Persister
The Monasca Persister reads metrics from Kafka and stores them
in a configurable time series database.

Change-Id: I8166b32bfb1583098ab8318a5f38d25bddb81e89
Partially-Implements: blueprint monasca-roles
2018-09-26 10:54:43 +00:00
Doug Szumski
c502e0b17a Add some missing parameters for Monasca Notification
Partially-Implements: blueprint monasca-roles
Change-Id: I21de7748156f8d3689ebfc29f2fc4dc5f7f36ddf
2018-09-26 10:54:42 +00:00
Doug Szumski
da1fa3f578 Support deploying Monasca Notification engine
The Monasca Notification engine generates notifications, such as
Slack messages, from alarms.

Change-Id: I84861d5feefe6b6f38acc4dd71e94c386d40b562
Partially-Implements: blueprint monasca-roles
2018-09-26 10:54:42 +00:00
Doug Szumski
b6cce3e3f3 Support deploying Monasca Thresh
Monasca Thresh is a Storm topology which generates alerts from
metric streams according to alarm definitions created via the
Monasca API.

This change runs the thresholder in local mode, which means that
the log output for the topology is directed to stdout and the
topology is restarted if the container is restarted. A future
change will improve the log collection and introduce a better
way of checking that the topology is running for multi-node
clusters.

Change-Id: I063dca5eead15f3cec009df62f0fc5d857dd4bb0
Partially-Implements: blueprint monasca-roles
2018-09-26 10:54:37 +00:00
Adam Harwell
f1c8136556 Refactor haproxy config (split by service) V2.0
Having all services in one giant haproxy file makes altering
configuration for a service both painful and dangerous. Each service
should be configured with a simple set of variables and rendered with a
single unified template.

Two new templates are available:

* haproxy_single_service_listen.cfg.j2: close to the original style, but
only one service per file
* haproxy_single_service_split.cfg.j2: using the newer haproxy syntax
for separated frontend and backend

For now the default will be the single listen block, for ease of
transition.
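
With this refactor, a service's load balancer entry reduces to a small dict of variables, roughly of this shape (illustrative, using glance-api as an example):

  glance-api:
    haproxy:
      glance_api:
        enabled: "{{ enable_glance }}"
        mode: "http"
        external: false
        port: "{{ glance_api_port }}"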

Change-Id: I6e237438fbc0aa3c89a3c8bd706a53b74e71904b
2018-09-26 03:30:38 -07:00
Doug Szumski
1ae10375f7 Support deploying Monasca Log Metrics
The log metrics service generates metrics from log messages
which allows further analysis and alerting to be performed
on them. Basic configuration is provided so that metrics
are generated for high level warning logs such as error, or
warning.

Change-Id: I45cc17817c716296451f620f304c0b1108162a56
Partially-Implements: blueprint monasca-roles
2018-09-25 16:36:14 +00:00
Doug Szumski
01da938412 Support configuring Monasca log pipeline performance
Change-Id: Id8948fcf2d165f8285c7562e7aebd4145c4ff0db
Partially-Implements: blueprint monasca-roles
2018-09-25 11:41:29 +00:00
Lakshmi Prasanna Goutham Pratapa
14bf524756 Apply Resource Constraints to Services.
This commit applies resource constraints to a few more OpenStack
services. Constraints for the remaining services will be applied in
an upcoming commit.
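
For example, a service's constraints can then be expressed through its dimensions variable (variable name and keys illustrative):

  glance_api_dimensions:
    mem_limit: "2g"
    cpu_shares: 512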

Depends-on: Icafa54baca24d2de64238222a5677b9d8b90e2aa
Change-Id: I39004f54281f97d53dfa4b1dbcf248650ad6f186
2018-07-26 11:35:28 +00:00
Doug Szumski
5441963c9a Support deploying Monasca Log Persister
This is a Logstash component which reads processed logs from Kafka
and writes them to Elasticsearch (or some other backend supported by
Logstash).

Ingesting the logs from this service with Fluentd will be covered under
a different commit.

Change-Id: I2d722991ab2072c54c4715507b19a4c9279f921b
Partially-Implements: blueprint monasca-roles
2018-07-12 15:15:38 +01:00
Doug Szumski
9c88262ad9 Support deploying Monasca Log Transformer
The Monasca Log Transformer takes raw, unstandardised logs from one
Kafka topic, standardises them with whatever rules the operator wants
to use, and then writes them to a standardised logs topic in Kafka. It
is currently implemented as a Logstash config file.

Since Kolla does a fairly good job of standardising logs, this service
does very little processing. However, when other sources of logs
are used, it may be useful to add rules to the Transformer, particularly
if it's not possible to standardise the logs at source.

Ingesting the logs from this service with Fluentd will be covered under
a different commit.

Change-Id: I31cbb7e9a40a848391f517a56a67e3fd5bc12529
Partially-Implements: blueprint monasca-roles
2018-07-05 17:33:53 +01:00
Doug Szumski
b54ceef8bf Standardise Monasca Kafka variable name
Other lists of servers have the suffix _servers. For consistency,
this change uses the same format for Kafka.

Change-Id: Ia595f2ab485904e76fb76211f6715a7c019886ea
Partially-Implements: blueprint monasca-roles
2018-07-04 11:12:08 +01:00