No need to touch the sudoers.d file each time
Creation and mode setting are handled by lineinfile itself
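A minimal sketch of the pattern (the file path, sudo rule, and visudo
validation are illustrative assumptions, not the exact task changed):

  - name: Ensure the kolla user is in sudoers
    lineinfile:
      path: /etc/sudoers.d/kolla-ansible-users  # hypothetical path
      line: "kolla ALL=(ALL) NOPASSWD: ALL"     # hypothetical rule
      create: yes                               # lineinfile creates the file
      mode: "0640"                              # and sets its mode
      validate: visudo -cf %s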
Change-Id: Ia36e21b04d3a08fab3c748f6298f142c1d73ee6d
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
When bootstrapping, Heat was not setting a region explicitly, so it
could default to a region other than the one being deployed.
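A sketch of the kind of fix, assuming the bootstrap goes through the
os_* modules and kolla-ansible's openstack_region_name variable:

  - name: Creating heat user domain
    os_keystone_domain:
      name: heat_user_domain                        # name assumed
      region_name: "{{ openstack_region_name }}"    # pin to the region being deployed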
Change-Id: I0a0596a020fbff91ccc5b9f44f271eab220c88cd
When registering the Nova aggregate for Blazar, we were always writing
to some default region (usually the first in the Keystone endpoint
list). Add a region override to ensure we always write to the region
being deployed.
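A sketch of the override (aggregate name and variables assumed):

  - name: Creating the blazar freepool aggregate
    os_nova_host_aggregate:
      name: freepool                                # blazar's pool name, assumed
      region_name: "{{ openstack_region_name }}"    # the region override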
Change-Id: I3f921ac51acab1b1020a459c07c755af7023e026
When Ansible iterates over a loop, by default it prints all the keys of
the item it is looping over. Some roles, when setting up the databases,
iterate over an object that includes the database password.
Override the loop label to hide everything but the database name.
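A minimal sketch of the technique (module and variable names are
illustrative, not the exact tasks changed):

  - name: Creating service databases
    mysql_db:
      name: "{{ item.database_name }}"
    loop: "{{ service_databases }}"        # items also carry passwords
    loop_control:
      label: "{{ item.database_name }}"    # log only the database name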
Change-Id: I336a81a5ecd824ace7d40e9a35942a1c853554cd
In a multi-region environment, each region is being deployed separately.
Cell discovery, however, would sometimes fail because it picked a
region different from the one being deployed. Most likely, an internal
endpoint
for region A will not be visible from region B. Furthermore, it is not
very useful to discover hosts on a region you're not modifying.
This changes the check to only run against nova compute services located
in the region being deployed.
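A sketch of the region-scoped check (the exact CLI invocation is an
assumption):

  - name: Listing nova-compute services in this region only
    command: >
      docker exec kolla_toolbox openstack
      --os-region-name {{ openstack_region_name }}
      compute service list --service nova-compute -f value -c Host
    register: nova_compute_services
    changed_when: false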
Change-Id: I21eb1164c2f67098b81edbd5cc106472663b92cb
Qinling is an OpenStack project to provide "Function as a Service".
This project aims to provide a platform to support serverless functions.
Change-Id: I239a0130f8c8b061b531dab530d65172b0914d7c
Implements: blueprint ansible-qinling-support
Story: 2005760
Task: 33468
Several services inherited [service_credentials] config sections which
they don't use in their code.
Change-Id: Iccf4358e85fb3d7ed25bc1762ff532b2c32bea4a
Add support for a custom 'pipeline.yaml' file to override the default
one. This file can be modified to adjust polling intervals or other
configuration.
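A sketch of the override, assuming the usual kolla-ansible custom
config layout (the ceilometer paths are assumptions):

  - name: Copying over custom pipeline.yaml
    copy:
      src: "{{ node_custom_config }}/ceilometer/pipeline.yaml"
      dest: "{{ node_config_directory }}/ceilometer/pipeline.yaml"
      mode: "0660"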
Change-Id: I325523edc4f7e37db55a2e21fe52e76138e6d114
Signed-off-by: ZijianGuo <guozijn@gmail.com>
Stop showing the task as having made changes, and silence the warning
about not using the yum module (which we could use for the check, but
not as easily).
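A sketch of the two knobs involved (the package and task details are
illustrative):

  - name: Check whether a package is installed
    command: rpm -q some-package     # package name illustrative
    register: result
    failed_when: false
    changed_when: false              # a query never changes the system
    args:
      warn: false                    # silence "consider using yum module"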
Change-Id: I9e3608b5db521930409a29981767f468ea234679
* event_definitions.yaml:
This file provides a standard set of events and corresponding traits
that may be of interest.
* event_pipeline.yaml:
This file can be modified to adjust which notifications to capture and
where to publish the events.
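For reference, a minimal sketch of the event_pipeline.yaml shape (the
publisher target is an assumption):

  sources:
      - name: event_source
        events:
            - "*"
        sinks:
            - event_sink
  sinks:
      - name: event_sink
        publishers:
            - gnocchi://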
Change-Id: I9c1698e07b65102af9b3ee448ad07f8fa6428b74
Signed-off-by: ZijianGuo <guozijn@gmail.com>
backport: stein, rocky
During startup of nova-compute, we see the following error message:
Error gathering result from cell 00000000-0000-0000-0000-000000000000:
DBNotAllowed: nova-compute
This issue was observed in devstack [1], and fixed [2] by removing
database configuration from the compute service.
This change takes the same approach, removing DB config from nova.conf
in the nova-compute* containers.
[1] https://bugs.launchpad.net/devstack/+bug/1812398
[2] 8253787137
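A sketch of the idea in the nova.conf template (the exact condition and
connection string are assumptions):

  {% if service_name != 'nova-compute' %}
  [database]
  connection = mysql+pymysql://nova:{{ nova_database_password }}@{{ database_address }}/nova
  {% endif %}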
Change-Id: I18c99ff4213ce456868e64eab63a4257910b9b8e
Closes-Bug: #1829705
Right now every controller rotates fernet keys. This is nice because
should any controller die, we know the remaining ones will rotate the
keys. However, we are currently over-rotating the keys.
When we over rotate keys, we get logs like this:
This is not a recognized Fernet token <token> TokenNotFound
Most clients can recover and get a new token, but some clients (like
Nova passing tokens to other services) can't do that because they don't
have the password to regenerate a new token.
With three controllers, in the keystone-fernet crontab we see the
once-a-day rotation correctly staggered across the three controllers:
ssh ctrl1 sudo cat /etc/kolla/keystone-fernet/crontab
0 0 * * * /usr/bin/fernet-rotate.sh
ssh ctrl2 sudo cat /etc/kolla/keystone-fernet/crontab
0 8 * * * /usr/bin/fernet-rotate.sh
ssh ctrl3 sudo cat /etc/kolla/keystone-fernet/crontab
0 16 * * * /usr/bin/fernet-rotate.sh
Currently with three controllers we have this keystone config:
[token]
expiration = 86400 (although the keystone default is one hour)
allow_expired_window = 172800 (this is the keystone default)
[fernet_tokens]
max_active_keys = 4
Currently, kolla-ansible configures key rotation according to the following:
rotation_interval = token_expiration / num_hosts
This means we rotate keys more quickly the more hosts we have, which doesn't
make much sense.
Keystone docs state:
max_active_keys =
((token_expiration + allow_expired_window) / rotation_interval) + 2
For details see:
https://docs.openstack.org/keystone/stein/admin/fernet-token-faq.html
Rotation is based on pushing out a staging key, so should any server
start using that key, other servers will consider that valid. Then each
server in turn starts using the staging key, demoting the
existing primary key to a secondary key. Eventually you prune the
secondary keys when there is no token in the wild that would need to be
decrypted using that key. So this all makes sense.
This change adds new variables for fernet_token_allow_expired_window and
fernet_key_rotation_interval, so that we can calculate the correct
number of active keys. We now set the default rotation interval so as
to minimise the number of active keys to 3: one primary, one secondary,
one buffer.
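With the example values above, that works out to:

  rotation_interval = token_expiration + allow_expired_window
                    = 86400 + 172800 = 259200 s (3 days)
  max_active_keys   = ((86400 + 172800) / 259200) + 2 = 3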
This change also fixes the fernet cron job generator, which was broken
in the following cases:
* requesting an interval of more than 1 day resulted in no jobs
* requesting an interval of more than 60 minutes, unless an exact
multiple of 60 minutes, resulted in no jobs
It should now be possible to request any interval up to a week divided
by the number of hosts.
Change-Id: I10c82dc5f83653beb60ddb86d558c5602153341a
Closes-Bug: #1809469
When integrating a third-party component into OpenStack with
kolla-ansible, it may be necessary to mount extra volumes into a
container.
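A sketch of the intended usage (the variable name follows the
per-service pattern and is an assumption):

  # in globals.yml: mount a vendor plugin into nova-compute containers
  nova_compute_extra_volumes:
    - "/opt/vendor-plugin:/opt/vendor-plugin:ro"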
Change-Id: I69108209320edad4c4ffa37dabadff62d7340939
Implements: blueprint support-extra-volumes
Cloudkitty has a default metrics.yml file, built into the container at
/etc/cloudkitty/metrics.yml. We would like to be able to
overwrite/customize this metrics configuration via kolla-ansible.
Cloudkitty is able to use a custom metrics file via "metrics_conf".
Therefore, we are enabling this configuration via kolla-ansible.
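A sketch of the override, following the usual custom-config pattern
(the source and destination paths are assumptions):

  - name: Copying over custom metrics.yml
    copy:
      src: "{{ node_custom_config }}/cloudkitty/metrics.yml"
      dest: "{{ node_config_directory }}/cloudkitty-processor/metrics.yml"
      mode: "0660"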
Change-Id: Id9019298482c040be05f540e71dacfdf0bd77469
Signed-off-by: Rafael Weingärtner <rafael@apache.org>
The flush_handlers clause doesn't honour conditional clauses.
Instead, it prints a warning and runs anyway:
[WARNING]: flush_handlers task does not support when conditional
See: https://github.com/ansible/ansible/pull/41126
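The unsupported pattern, for illustration:

  - meta: flush_handlers
    when: something_changed   # ignored; Ansible warns and runs anyway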
TrivialFix
Change-Id: Iaf70c2e932ae6dfb723bdb2ba658acdbfe74ebe2
This fixes a deprecation warning that gets displayed when running
the kibana/post_config 'Get kibana default indexes' task.
The HEADER_ parameter prefix has been deprecated since Ansible 2.1 and will be
removed in 2.9.
https://docs.ansible.com/ansible/latest/modules/uri_module.html
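A sketch of the replacement syntax (the URL and header values are
illustrative):

  - name: Get kibana default indexes
    uri:
      url: "http://{{ kibana_address }}:5601/.kibana"   # placeholder URL
      headers:                           # replaces deprecated HEADER_* params
        Content-Type: application/json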
TrivialFix
Change-Id: I177113c606119505c6cb69c66a326f7cbdaf2196
Since Ansible 2.5, the use of jinja tests as filters has been
deprecated.
I've run the script provided by the ansible team to 'fix' the
jinja filters to conform to the newer syntax.
This fixes the deprecation warnings.
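An example of the rewrite:

  # before (deprecated filter syntax):
  when: result | changed
  # after (test syntax):
  when: result is changed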
Change-Id: I844ecb7bec94e561afb09580f58b1bf83a6d00bd
Closes-bug: #1827370