During an upgrade, nova pins the version of RPC calls to the minimum
seen across all services. This ensures that old services do not receive
data they cannot handle. After the upgrade is complete, all nova
services are supposed to be reloaded via SIGHUP, causing them to
re-check the RPC versions of all services and switch to the new latest
version, which should now be supported by all running services.
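The pinning logic can be illustrated with a minimal sketch (the version
table and function name below are illustrative, not nova's actual code):

```python
# Illustrative sketch of RPC version pinning: the client caps itself at
# the RPC version supported by the oldest service it can see, so that
# old services never receive messages they cannot handle.
# SERVICE_TO_RPC is a made-up mapping; nova maintains a real history.
SERVICE_TO_RPC = {30: "5.0", 35: "5.1"}

def pinned_rpc_version(reported_service_versions):
    minimum = min(reported_service_versions)
    # Pick the newest RPC version the minimum service version supports.
    supported = [v for v in SERVICE_TO_RPC if v <= minimum]
    return SERVICE_TO_RPC[max(supported)]
```

With one Queens-level service still reporting version 30, the pin stays
at 5.0 even though the upgraded services report a newer version.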
Due to a bug [1] in oslo.service, sending services SIGHUP is currently
broken. We replaced the HUP with a restart for the nova_compute
container for bug 1821362, but not for other nova services. It seems we need
to restart all nova services to allow the RPC version pin to be removed.
Testing in a Queens to Rocky upgrade, we find the following in the logs:
Automatically selected compute RPC version 5.0 from minimum service
version 30
However, the service version in Rocky is 35.
There is a second issue in that it takes some time for the upgraded
services to update the nova services database table with their new
version. We need to wait until all nova-compute services have done this
before the restart is performed, otherwise the RPC version cap will
remain in place. There is currently no interface in nova available for
checking these versions [2], so as a workaround we use a configurable
delay with a default duration of 30 seconds. Testing showed it takes
about 10 seconds for the version to be updated, so this gives us some
headroom.
This change restarts all nova services after an upgrade, after a 30
second delay.
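The workaround can be sketched as follows (the function names are
hypothetical; the real change is implemented as Ansible tasks):

```python
import time

def restart_nova_services_after_upgrade(services, restart, delay=30):
    # There is no nova interface yet for checking reported service
    # versions [2], so wait a fixed, configurable delay for all
    # nova-compute services to write their new version to the database...
    time.sleep(delay)
    # ...then restart every nova service so the RPC pin is re-evaluated.
    for service in services:
        restart(service)
```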
[1] https://bugs.launchpad.net/oslo.service/+bug/1715374
[2] https://bugs.launchpad.net/nova/+bug/1833542
Change-Id: Ia6fc9011ee6f5461f40a1307b72709d769814a79
Closes-Bug: #1833069
Related-Bug: #1833542
They are used only to obtain keys for the next task.
Change-Id: I2fac22af4710b70e4df8e3a272bcfb6cc8b8532e
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
The Hitachi NAS Platform iSCSI driver was marked as not supported by
Cinder in the Ocata release [1].
[1] https://review.opendev.org/#/c/444287/
Change-Id: I1a25789374fddaefc57bc59badec06f91ee6a52a
Closes-Bug: #1832821
In some cases, we can mount extra volumes for gnocchi to facilitate
integration.
Change-Id: Ife475ca7d0555562f6e3ef0867835d69d288c8c4
Signed-off-by: ZijianGuo <guozijn@gmail.com>
"Check if policies shall be overwritten" already exists in its
newer form. The removed one had no effect on play.
Change-Id: I48ed6c1c71c4162a3ab28ab2b51dc1e02932dfef
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
'mongodb.conf' is in fact a YAML-format configuration file, so do not
use merge_configs to merge it.
Change-Id: Id3c006df00c1e2d66472c2195781e01c640cab22
Signed-off-by: ZijianGuo <guozijn@gmail.com>
TSI (Time Series Index) is recommended for all users. Some of the key benefits are
a reduction in memory requirements and an increase in the maximum
number of time series. For more information see this link:
https://docs.influxdata.com/influxdb/v1.7/concepts/tsi-details/
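Enabling TSI is a configuration change in influxdb.conf; a minimal
fragment (assuming InfluxDB 1.x defaults):

```
[data]
  # Use the disk-based Time Series Index instead of the in-memory index.
  index-version = "tsi1"
```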
Change-Id: I4b29eb5a4ae82f6c39059d0b6de41debdfd75508
Since this review[1], Qinling supports WSGI execution.
From a production perspective, Qinling should be deployed
using Apache and mod_wsgi.
"api_worker" option is not needed anymore because processes will
be handle by Apache mod_wsgi.
Qinling Docker image review[2] has ben created.
[1] https://review.opendev.org/661851
[2] https://review.opendev.org/666647
Change-Id: I9aaee4c2932f1e4ea9fe780a64e96a28fa6bccfb
Story: 2005920
Task: 34181
The "environment" variable set in config.yml and handlers/main.yml
has been removed to fix de deployment and the reconfigure.
Change-Id: I912cadb5113d5572235731863825588b2eb12759
This change makes mariadb the default backend for the freezer database
and adds elasticsearch as an optional backend, due to freezer's
requirement of elasticsearch version 2.3.0. The default elasticsearch in
kolla-ansible is 5.6.x, which does not work with freezer.
Added the needed options for the elasticsearch backend:
- protocol
- address
- port
- number of replicas
Change-Id: I88616c285bdb297fd1f738846ddffe1b08a7a827
Signed-off-by: Marek Svensson <marek@marex.st>
This change formats internal Fluent logs in a similar way to other
logs. It makes it easier for a user to identify issues with Fluent
parsing logs. Any failure to parse a log will be ingested into the
logging framework and can easily be located by searching for
'pattern not match' or by filtering for Fluent log warnings.
Change-Id: Iea6d12c07a2f4152f2038d3de2ef589479b3332b
* When using redis as the backend of osprofiler, connections fail
because redis_connection_string is incorrect.
* Other places that use redis now also use this variable.
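A minimal sketch of the expected format (osprofiler accepts a redis://
URL; the helper name here is hypothetical):

```python
def redis_connection_string(address, port):
    # osprofiler expects its connection_string in URL form,
    # e.g. redis://127.0.0.1:6379
    return f"redis://{address}:{port}"
```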
Change-Id: I14de6597932d05cd7f804a35c6764ba4ae9087cd
Closes-Bug: #1833200
Signed-off-by: ZijianGuo <guozijn@gmail.com>
Kolla service logs which don't match a Fluentd rewriterule get dropped.
This change prevents that by tagging them with 'unmatched'.
Change-Id: I0a2484d878d5c86977fb232a57c52f874ca7a34c
Monasca Python service logs prior to this change were being dropped
due to missing entries in the Fluent record_transformer config file.
This change adds support for ingesting those logs, and explicitly
removes support for ingesting Monasca Log API logs to reduce the risk
of feedback, for example if debug logging is turned on in the Monasca
Log API.
Change-Id: I9e3436a8f946873867900eed5ff0643d84584358
Presently, errors can appear in Fluentd and Monasca Log API logs
because log output from some Monasca services, which do not use Oslo
log, is processed alongside other OpenStack logs which do.
This change parses these log files separately to prevent these errors.
Change-Id: Ie3cbb51424989b01727b5ebaaeba032767073462
Since we have different upgrade paths, we must use the name of the
actually installed Ceph release when issuing require-osd-release.
Closes-Bug: #1832989
Change-Id: I6aaa4b4ac0fb739f7ad885c13f55b6db969996a2
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
The task does not change any state but is used to set a fact
from parsed output.
Also adjust task name.
Change-Id: I5fe322546d82a373522645485be18fe7bfc57999
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
In a rare event, both kolla-ansible and nova-scheduler try to do
the mapping at the same time and one of them fails.
Since kolla-ansible runs host discovery on each deployment,
there is no need to change the default of no periodic host discovery.
I added some notes for the future. They are not critical.
I made the decision explicit in the comments.
I changed the task name to satisfy recommendations.
I removed the variable because it is not used (to avoid future doubts).
Closes-Bug: #1832987
Change-Id: I3128472f028a2dbd7ace02abc179a9629ad74ceb
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
If we change mongodb_port, the command for bootstrapping mongodb
should not connect to the default mongodb port 27017.
Change-Id: I330999be577d6416df162ea33fa1f7a19df56029
The task was duplicated below (and that other one is conditional).
Additionally, fix the names of related tasks.
Change-Id: I76a6dd84e78277f87b04951eb4e75bbdfc1c38bf
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>