Make it easy to override the Keystone endpoints to support deploying a
stand-alone Monasca that can integrate with an externally provided
Keystone instance.
Partially-Implements: blueprint monasca-roles
Change-Id: I9ae3b243c792ef88075702b47b62f164a1705c2e
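As a rough sketch of how such an override could look in globals.yml
(the variable names and URLs here are illustrative assumptions, not
necessarily the exact ones touched by this change):

    # globals.yml (illustrative sketch; names and URLs are placeholders)
    keystone_internal_url: "http://keystone.example.com:5000"
    keystone_admin_url: "http://keystone.example.com:35357"
    keystone_public_url: "http://keystone.example.com:5000"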
This commit adds some filters which format logs so that they
can be correctly sent to the Monasca Log API by the Monasca
Fluentd plugin. In the future the Fluentd plugin could be
extended and this config could be removed.
Partially-Implements: blueprint monasca-roles
Change-Id: I87b6dfb3052d03f87349d30b66078c39d625195d
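As an illustration only, a filter of roughly this shape could be
dropped into the Fluentd configuration by the common role; the file
name and filter body below are assumptions, not the exact filters
added by this change:

    # Illustrative Ansible task; destination and filter content are
    # placeholders, not the actual templates added here.
    - name: Copy a Monasca log formatting filter for Fluentd
      copy:
        dest: /etc/kolla/fluentd/filter/01-monasca-format.conf
        content: |
          <filter **>
            @type record_transformer
            <record>
              Logger ${tag_parts[0]}
            </record>
          </filter>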
The Monasca Agent collects metrics, and this change deploys it across
the control plane. The metrics are collected into an OpenStack project.
The change supports configuring a small number of plugins, which can be
extended in later commits, and makes the Monasca Agent credentials
available to other roles, such as the common role, to allow forwarding
logs to Monasca.
Partially-Implements: blueprint monasca-roles
Change-Id: I76b34fc5e1c76407a45fcf272268d5798b473ca2
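For context, enabling the agent is driven from globals.yml; the project
name variable below is an assumption used only for illustration:

    # globals.yml (illustrative sketch)
    enable_monasca: "yes"
    # Project the control plane metrics are collected into (assumed name).
    monasca_control_plane_project: "monasca_control_plane"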
Currently, serial consoles accessed through Horizon time out after
haproxy_client_timeout (default: 1m) of inactivity. This change allows
a larger timeout to be set.
Change-Id: I2a9923cb69d5db976395146685aded83922c4120
Closes-Bug: #1800643
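For example, the larger timeout could be set in globals.yml along these
lines; the variable name follows the existing haproxy_*_timeout pattern
but is shown here only as an assumption:

    # globals.yml (illustrative; variable name is an assumption)
    haproxy_nova_serialconsole_proxy_tunnel_timeout: "10m"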
This patchset implements Neutron rolling upgrade logic as described
in [1].
Because only neutron, neutron-vpnaas and neutron-fwaas currently
support rolling upgrade database migrations, the list
"neutron_rolling_upgrade_services" in neutron/defaults/main.yml is used
to hold these services.
[1] https://docs.openstack.org/neutron/latest/contributor/internals/upgrade.html
Co-Authored-By: Ha Manh Dong <donghm@vn.fujitsu.com>
Change-Id: I2ed2f941d30d4df0d0f42c0d10e7ca03ec1c166a
Implements: blueprint apply-service-upgrade-procedure
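The list referred to above could look roughly like this in
ansible/roles/neutron/defaults/main.yml; the exact entry names are
assumptions based on the three services mentioned in this commit:

    # defaults/main.yml (illustrative sketch)
    neutron_rolling_upgrade_services:
      - neutron
      - neutron-fwaas
      - neutron-vpnaas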
The Vitrage config is missing the keystone_authtoken section, which
leads to a CRITICAL error on deploy [1]
[1] http://paste.openstack.org/show/733507/
Change-Id: Ia89befed16bef5dbf0f542ea1a843b6b448079e9
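A minimal keystone_authtoken section of the kind that was missing is
sketched below as an operator-style override; the actual fix presumably
adds the section to the vitrage.conf template, and all values here are
placeholders:

    # Illustrative only; values and destination are placeholders.
    - name: Provide a keystone_authtoken override for Vitrage
      copy:
        dest: /etc/kolla/config/vitrage.conf
        content: |
          [keystone_authtoken]
          auth_type = password
          auth_url = http://keystone.example.com:35357
          username = vitrage
          password = secret
          project_name = service
          user_domain_name = Default
          project_domain_name = Default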
This is because we need to use recreate_or_restart_container instead
of start_container in the handler.
Change-Id: I3bb0a4c38b9024b2e2e26bfc06cb143bb5d35317
Signed-off-by: Pavel Glushchak <pglushchak@virtuozzo.com>
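A handler using the kolla_docker module with that action could look
roughly like this; the container name and variables are placeholders:

    # Illustrative handler sketch; names and variables are placeholders.
    - name: Restart example-service container
      kolla_docker:
        action: "recreate_or_restart_container"
        common_options: "{{ docker_common_options }}"
        name: "example_service"
        image: "{{ example_service_image_full }}"
        volumes: "{{ example_service_volumes }}"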
Add two new parameters (migration_interface, migration_interface_address)
to make it possible to use a dedicated migration network.
Change-Id: I723c9bea9cf1881e02ba39d5318c090960c22c47
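For illustration, a dedicated migration network could then be selected
in globals.yml; the interface name and address below are placeholders,
and the address would normally be derived from the interface:

    # globals.yml (illustrative sketch)
    migration_interface: "eth2"
    migration_interface_address: "10.0.2.15"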
Introduce a job 'kolla-ansible-ubuntu-source-zun' to test kolla
with Zun enabled. To reduce CI resource usage, this job is triggered
only when there are changes to Zun's Ansible roles.
Change-Id: I0ba207e1d3761da2d6992c5834d4f59e7e1d6628
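A Zuul job restricted to Zun-related changes could be declared roughly
as follows; the parent job name and file pattern are assumptions:

    # .zuul.d (illustrative sketch)
    - job:
        name: kolla-ansible-ubuntu-source-zun
        parent: kolla-ansible-ubuntu-source
        files:
          - ^ansible/roles/zun/.*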
The alarm service was moved to Aodh a long time ago [1].
Therefore, evaluation_interval should be defined in aodh.conf rather
than ceilometer.conf. The interval value should also be configurable
because a custom polling config can now be used [2].
[1] https://review.openstack.org/#/c/200593/
[2] https://review.openstack.org/#/c/572013/
Change-Id: I7adeff2dff5d6d6ae4c621e84857347995e9203a
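For example, the interval could be surfaced as a role default and then
templated into the [DEFAULT] section of aodh.conf as evaluation_interval;
the variable name and value below are assumptions:

    # ansible/roles/aodh/defaults/main.yml (illustrative sketch)
    aodh_evaluation_interval: 300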
When Monasca is enabled, disable direct logging to Elasticsearch and
send all logs harvested by Fluentd to the Monasca Log API.
This change also cleans up output files which may be left behind when
the various log forwarding options are enabled or disabled.
Partially-Implements: blueprint monasca-roles
Change-Id: I7197966c5117176407d60c86c08d3bcea5e8131a
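In operator-facing terms, this behaviour keys off the existing toggles
in globals.yml, roughly:

    # globals.yml (illustrative): with Monasca enabled, Fluentd forwards
    # logs to the Monasca Log API instead of writing them directly to
    # Elasticsearch.
    enable_central_logging: "yes"
    enable_monasca: "yes"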
Even though Kolla services are configured to log output to file rather
than stdout, some output to stdout still occurs, for example when a
container (re)starts. Since the Docker logs are not constrained in size,
they can fill up the Docker volumes drive and bring down the host. One
example of when this is particularly problematic is when Fluentd cannot
parse a log message. The warning output is written to the Docker log,
and in production we have seen it consume 100 GB of disk space in less
than a day. We could configure Fluentd not to do this, but the problem
may still occur via another mechanism.
Change-Id: Ia6d3935263a5909c71750b34eb69e72e6e558b7a
Closes-Bug: #1794249
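One general way to bound Docker's own log growth, sketched here as an
idea rather than the mechanism used by this change, is to cap the
json-file log driver at the daemon level:

    # Illustrative only; values and the notified handler are placeholders.
    - name: Cap Docker json-file log size
      copy:
        dest: /etc/docker/daemon.json
        content: |
          {
            "log-driver": "json-file",
            "log-opts": {
              "max-size": "50m",
              "max-file": "5"
            }
          }
      notify: Restart docker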