This is necessary for some Ansible tests which were renamed in Ansible 2.5,
including 'version' and 'successful'.
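For illustration only, the renamed tests are used like this on Ansible >= 2.5
(the task and the registered variable below are hypothetical):

    - name: Run a check whose result we want to test
      command: /bin/true
      register: check_result

    - name: Act on the result using the renamed tests
      debug:
        msg: "Check passed on Ansible {{ ansible_version.full }}"
      when:
        - ansible_version.full is version('2.5.0', '>=')
        - check_result is successful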
Change-Id: Iacf88ef5589c7571fcf56ba8b99d3dbe76975195
Otherwise, I see the following failure:
TASK [monasca : Creating the monasca agent user] *******************************
fatal: [monitor1]: FAILED! => {"changed": false, "msg": "MODULE FAILURE", "rc": 1}
module_stderr: Shared connection to 172.16.3.24 closed.
module_stdout:
Traceback (most recent call last):
  File "/tmp/ansible_I0RmxQ/ansible_module_kolla_toolbox.py", line 163, in <module>
    main()
  File "/tmp/ansible_I0RmxQ/ansible_module_kolla_toolbox.py", line 141, in main
    output = client.exec_start(job)
  File "/opt/kayobe/venvs/kolla-ansible/lib/python2.7/site-packages/docker/utils/decorators.py", line 19, in wrapped
    return f(self, resource_id, *args, **kwargs)
  File "/opt/kayobe/venvs/kolla-ansible/lib/python2.7/site-packages/docker/api/exec_api.py", line 165, in exec_start
    return self._read_from_socket(res, stream, tty)
  File "/opt/kayobe/venvs/kolla-ansible/lib/python2.7/site-packages/docker/api/client.py", line 377, in _read_from_socket
    return six.binary_type().join(gen)
  File "/opt/kayobe/venvs/kolla-ansible/lib/python2.7/site-packages/docker/utils/socket.py", line 75, in frames_iter
    n = next_frame_size(socket)
  File "/opt/kayobe/venvs/kolla-ansible/lib/python2.7/site-packages/docker/utils/socket.py", line 62, in next_frame_size
    data = read_exactly(socket, 8)
  File "/opt/kayobe/venvs/kolla-ansible/lib/python2.7/site-packages/docker/utils/socket.py", line 47, in read_exactly
    next_data = read(socket, n - len(data))
  File "/opt/kayobe/venvs/kolla-ansible/lib/python2.7/site-packages/docker/utils/socket.py", line 31, in read
    return socket.recv(n)
socket.timeout: timed out
This happens when the monitoring nodes aren't on the public API network.
Change-Id: I7a93f69da0e02c9264da0b081d2e60626f899e3a
Previously we sourced this script in tests/deploy.sh, but this was
recently changed. Following that change we lost the errexit setting,
meaning that errors in init-runonce were ignored.
Adding errexit in the script itself means that all callers get error
handling.
Also log init-runonce output.
TrivialFix
Change-Id: I9b35bd5f0f76eec26ddd968d093a3a5fd55a7ce2
Currently, we have a lot of logic for checking if a handler should run,
depending on whether config files have changed and whether the
container configuration has changed. As rm_work pointed out during
the recent haproxy refactor, these conditionals are typically
unnecessary - we can rely on Ansible's handler notification system
to only trigger handlers when they need to run. This removes a lot
of error-prone code.
This patch removes conditional handler logic for all services. It is
important to ensure that we do not notify handlers unnecessarily, because
without these checks in place any notification will trigger a restart of
the containers.
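As a rough sketch of the pattern this patch standardises on (the service,
file and container names are made up, and required kolla_docker parameters
such as image and volumes are omitted for brevity):

    # tasks/config.yml
    - name: Copying over example.conf
      template:
        src: "example.conf.j2"
        dest: "{{ node_config_directory }}/example-api/example.conf"
      notify:
        - Restart example-api container

    # handlers/main.yml
    - name: Restart example-api container
      kolla_docker:
        action: "recreate_or_restart_container"
        name: "example_api"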
Implements: blueprint simplify-handlers
Change-Id: I4f1aa03e9a9faaf8aecd556dfeafdb834042e4cd
Kolla has it already and kolla-ansible should.
Patch to backport as far as pike.
Affects only stable branches.
Change-Id: Iecc46b364ad9fc69fe67dd09ee1b4e3c5511f01c
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
During an upgrade, nova pins the version of RPC calls to the minimum
seen across all services. This ensures that old services do not receive
data they cannot handle. After the upgrade is complete, all nova
services are supposed to be reloaded via SIGHUP, causing them to re-check
the RPC versions of services and use the latest version, which should now
be supported by all running services.
Due to a bug [1] in oslo.service, sending services SIGHUP is currently
broken. We replaced the HUP with a restart for the nova_compute
container for bug 1821362, but not for the other nova services. It seems we need
to restart all nova services to allow the RPC version pin to be removed.
Testing in a Queens to Rocky upgrade, we find the following in the logs:
Automatically selected compute RPC version 5.0 from minimum service
version 30
However, the service version in Rocky is 35.
There is a second issue in that it takes some time for the upgraded
services to update the nova services database table with their new
version. We need to wait until all nova-compute services have done this
before the restart is performed; otherwise, the RPC version cap will
remain in place. There is currently no interface in nova available for
checking these versions [2], so as a workaround we use a configurable
delay with a default duration of 30 seconds. Testing showed it takes
about 10 seconds for the version to be updated, so this gives us some
headroom.
This change restarts all nova services after an upgrade, after a 30
second delay.
[1] https://bugs.launchpad.net/oslo.service/+bug/1715374
[2] https://bugs.launchpad.net/nova/+bug/1833542
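A simplified sketch of the resulting flow; the variable name, the 30 second
default and the container list below are illustrative only:

    # Give upgraded nova-compute services time to record their new service
    # version in the database (no nova API exists yet to query this, see [2]).
    - name: Wait for nova services to update their service versions
      pause:
        seconds: "{{ nova_services_post_upgrade_delay | default(30) }}"

    # Restart rather than SIGHUP (broken per [1]) so the RPC version pin is
    # recomputed from the new minimum service version.
    - name: Restart nova services to remove the RPC version pin
      kolla_docker:
        action: "restart_container"
        name: "{{ item }}"
      loop:
        - nova_api
        - nova_scheduler
        - nova_conductor
        - nova_compute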
Change-Id: Ia6fc9011ee6f5461f40a1307b72709d769814a79
Closes-Bug: #1833069
Related-Bug: #1833542
When running deploy or reconfigure for Keystone,
ansible/roles/keystone/tasks/deploy.yml calls init_fernet.yml,
which runs /usr/bin/fernet-rotate.sh, which calls keystone-manage
fernet_rotate.
This means that a token can become invalid if the operator runs
deploy or reconfigure too often.
This change splits out fernet-push.sh from the fernet-rotate.sh
script, then calls fernet-push.sh after the fernet bootstrap
performed in deploy.
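Illustratively, the deploy path then only distributes the bootstrapped keys,
along these lines (a sketch, not the exact task):

    # Push the bootstrapped fernet keys to all keystone hosts without
    # rotating them, so existing tokens stay valid.
    - name: Push fernet keys to keystone containers
      command: docker exec keystone_fernet /usr/bin/fernet-push.sh
      run_once: true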
Change-Id: I824857ddfb1dd026f93994a4ac8db8f80e64072e
Closes-Bug: #1833729
They are used only to obtain keys for the next task.
Change-Id: I2fac22af4710b70e4df8e3a272bcfb6cc8b8532e
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
The Hitachi NAS Platform iSCSI driver was marked as not supported by
Cinder in the Ocata release[1].
[1] https://review.opendev.org/#/c/444287/
Change-Id: I1a25789374fddaefc57bc59badec06f91ee6a52a
Closes-Bug: #1832821
In some cases, we can mount extra volumes for gnocchi to facilitate
integration.
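For example, assuming the option is exposed as a gnocchi_extra_volumes list
(the variable name and mounts shown here are illustrative):

    # globals.yml
    gnocchi_extra_volumes:
      - "/etc/ceph:/etc/ceph:ro"
      - "/opt/gnocchi-plugins:/plugins:ro"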
Change-Id: Ife475ca7d0555562f6e3ef0867835d69d288c8c4
Signed-off-by: ZijianGuo <guozijn@gmail.com>
"Check if policies shall be overwritten" already exists in its
newer form. The removed one had no effect on the play.
Change-Id: I48ed6c1c71c4162a3ab28ab2b51dc1e02932dfef
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
'mongodb.conf' is actually a YAML-format configuration file, so do not
use merge_configs to merge it.
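A sketch of the intended handling, using kolla-ansible's merge_yaml action
instead of merge_configs (the source and destination paths are illustrative):

    - name: Copying over mongodb.conf
      merge_yaml:
        sources:
          - "{{ role_path }}/templates/mongodb.conf.j2"
          - "{{ node_custom_config }}/mongodb.conf"
          - "{{ node_custom_config }}/mongodb/{{ inventory_hostname }}/mongodb.conf"
        dest: "{{ node_config_directory }}/mongodb/mongodb.conf"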
Change-Id: Id3c006df00c1e2d66472c2195781e01c640cab22
Signed-off-by: ZijianGuo <guozijn@gmail.com>
The Time Series Index (TSI) is recommended for all users. Some of the key benefits are
a reduction in memory requirements and an increase in the maximum
number of time series. For more information see this link:
https://docs.influxdata.com/influxdb/v1.7/concepts/tsi-details/
Change-Id: I4b29eb5a4ae82f6c39059d0b6de41debdfd75508
Since this review[1], Qinling supports WSGI execution.
From a production perspective, Qinling should be deployed
using Apache and mod_wsgi.
"api_worker" option is not needed anymore because processes will
be handle by Apache mod_wsgi.
Qinling Docker image review[2] has ben created.
[1] https://review.opendev.org/661851
[2] https://review.opendev.org/666647
Change-Id: I9aaee4c2932f1e4ea9fe780a64e96a28fa6bccfb
Story: 2005920
Task: 34181
An insecure Docker registry is handled by docker_registry_insecure,
which defaults to true when docker_registry is set.
The removed code had no effect because docker_registry is not changed
anyway for the base (pre-upgrade) install.
This change makes the config more readable and also prevents a potential
conflict with the zun profile if it is ever used in upgrade mode.
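For reference, a minimal globals.yml sketch of how the two options interact
(the registry address is made up):

    # globals.yml
    docker_registry: "192.168.1.100:5000"
    # docker_registry_insecure defaults to true whenever docker_registry is
    # set; only override it when the registry serves a trusted TLS certificate.
    docker_registry_insecure: "no"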
Change-Id: I9b5ae8c5b534fa6cce9dbaca8af191e2ca79d19f
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
This commit should help guide people migrating to Kolla Monasca
through the murky depths of the migration process. Since Kolla
did not support Monasca in Queens, some steps which could otherwise be
automated are not.
Change-Id: I79051cca27178c3cf1671f5c603e38baf929c55c
The "environment" variable set in config.yml and handlers/main.yml
has been removed to fix de deployment and the reconfigure.
Change-Id: I912cadb5113d5572235731863825588b2eb12759