There is no need to touch the sudoers.d file each time; file creation
and mode setting are handled by the lineinfile module itself.
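A minimal sketch of the pattern (the file path and sudo rule here are
assumed for illustration, not the role's actual values):

- name: Ensure the kolla sudoers entry exists
  lineinfile:
    path: /etc/sudoers.d/kolla-ansible-users  # path assumed
    line: "{{ user }} ALL=(ALL) NOPASSWD: ALL"  # rule assumed
    create: true   # lineinfile creates the file if it is missing
    mode: "0640"   # and sets the mode, so no separate file task is needed
    validate: visudo -cf %s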
Change-Id: Ia36e21b04d3a08fab3c748f6298f142c1d73ee6d
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
... or "what I wish existed when I first became PTL"
Some general improvements to the contributor guide, plus new sections
for PTL duties and release management.
Change-Id: If2f3b7c18de2e6c8d9bac131a16c28c2eeb348f2
When bootstrapping, Heat was not setting a region explicitly, so it
could default to a region other than the one being deployed.
Change-Id: I0a0596a020fbff91ccc5b9f44f271eab220c88cd
The Nova aggregate registration for Blazar was always defaulting to some
region (usually the first in the Keystone endpoint list). Add a region
override to ensure we are always writing to the region being deployed.
Change-Id: I3f921ac51acab1b1020a459c07c755af7023e026
When Ansible iterates over a loop, by default it prints all the keys of
the item it is looping over. Some roles, when setting up the databases,
iterate over an object that includes the database password.
Override the loop label to hide everything but the database name.
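A minimal sketch of the pattern (task, module and variable names are
assumed, not the roles' actual ones):

- name: Create service databases
  mysql_db:
    name: "{{ item.database_name }}"
  loop: "{{ databases }}"
  loop_control:
    # Print only the database name instead of the full item,
    # which also contains the password.
    label: "{{ item.database_name }}"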
Change-Id: I336a81a5ecd824ace7d40e9a35942a1c853554cd
- Remove trusted_cidrs, which has just been removed from the Qinling
  code.
- Remove use_api_certificate because it's true by default
- Improve list syntax
- Add etcd section
Change-Id: I0426a9d61fbeaa23a1affbc7e981a78283e88263
Add CI jobs for testing an upgrade of a multinode system with Ceph
enabled. As with the existing upgrade job, we upgrade from the previous
release to the current one.
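A hedged sketch of what such a Zuul job definition could look like (the
job names and variables are assumptions, not the actual definitions):

- job:
    name: kolla-ansible-centos-source-upgrade-ceph
    parent: kolla-ansible-centos-source-upgrade
    vars:
      # Enable Ceph in both the previous and the current release.
      ceph_enabled: true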
Change-Id: I931772ca4c63757769467a57c80dc0726a11167a
Depends-On: https://review.opendev.org/658163
Qinling is an OpenStack project that provides "Function as a Service",
aiming to offer a platform to support serverless functions.
Change-Id: I239a0130f8c8b061b531dab530d65172b0914d7c
Implements: blueprint ansible-qinling-support
Story: 2005760
Task: 33468
Several services inherited [service_credentials] config sections that
they do not use in their code.
Change-Id: Iccf4358e85fb3d7ed25bc1762ff532b2c32bea4a
ARA 1.0 will be released in the near future and isn't backwards
compatible. Pin it so it doesn't break things unexpectedly.
ARA ships simple setup modules to help locate where its components are
installed.
These are backwards compatible from ARA 1.0 to 0.x.
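A minimal sketch of the pin (task names are assumptions; the ara.setup
helper modules are ARA's documented way of locating its plugins):

- name: Install ARA pinned to the 0.x series
  pip:
    name: ara<1.0

- name: Locate the ARA callback plugin directory
  command: python -m ara.setup.callback_plugins
  register: ara_callback_plugins
  changed_when: false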
Change-Id: I3fe3f4082279c2fd9a629605619a97aa5f5b0b73
We can add a custom 'pipeline.yaml' file to override the default one.
This file can be modified to adjust polling intervals or other
configuration.
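A minimal sketch of such an override, assuming the usual kolla-ansible
custom config location ({{ node_custom_config }}/ceilometer/pipeline.yaml,
by default /etc/kolla/config/ceilometer/pipeline.yaml). Whether the
polling interval lives here or in polling.yaml depends on the Ceilometer
release:

sources:
    - name: meter_source
      interval: 600   # assumed polling interval, in seconds
      meters:
          - "*"
      sinks:
          - meter_sink
sinks:
    - name: meter_sink
      publishers:
          - gnocchi://  # publisher assumed; depends on the deployment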
Change-Id: I325523edc4f7e37db55a2e21fe52e76138e6d114
Signed-off-by: ZijianGuo <guozijn@gmail.com>
Stop showing the task as having made changes, and silence the warning about
not using the yum module (which we could use for the check, but not as
easily).
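A minimal sketch of the pattern (the task name and checked package are
assumed for illustration):

- name: Check whether the package is installed
  command: yum list installed some-package
  register: result
  changed_when: false          # report-only check, never mark as changed
  failed_when: result.rc not in [0, 1]
  args:
    warn: false                # silence "consider using the yum module"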
Change-Id: I9e3608b5db521930409a29981767f468ea234679
* event_definitions.yaml:
This file provides a standard set of events and corresponding traits
that may be of interest.
* event_pipeline.yaml:
This file can be modified to adjust which notifications to capture and
where to publish the events (a minimal sketch follows).
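A minimal sketch of an event_pipeline.yaml override (the publisher is an
assumption and depends on the deployment):

sources:
    - name: event_source
      events:
          - "*"
      sinks:
          - event_sink
sinks:
    - name: event_sink
      publishers:
          - gnocchi://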
Change-Id: I9c1698e07b65102af9b3ee448ad07f8fa6428b74
Signed-off-by: ZijianGuo <guozijn@gmail.com>
The etc_examples and inventory should be copied from the virtual
environment rather than the system.
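A minimal sketch of the intended behaviour (paths and variable names are
assumed):

- name: Copy kolla-ansible etc_examples from the virtualenv
  copy:
    src: "{{ virtualenv }}/share/kolla-ansible/etc_examples/kolla/"
    dest: /etc/kolla/
    remote_src: true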
Change-Id: I3ac1e057971b7481a0bce2a15351031e51bf97d6
Closes-Bug: #1829435
backport: stein, rocky
During startup of nova-compute, we see the following error message:
Error gathering result from cell 00000000-0000-0000-0000-000000000000:
DBNotAllowed: nova-compute
This issue was observed in devstack [1], and fixed [2] by removing
database configuration from the compute service.
This change takes the same approach, removing DB config from nova.conf
in the nova-compute* containers.
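A hypothetical sketch of the nova.conf.j2 change (the condition and
variable names are assumptions, not the actual template):

{% if service_name != 'nova-compute' %}
[database]
connection = mysql+pymysql://{{ nova_database_user }}:{{ nova_database_password }}@{{ nova_database_address }}/{{ nova_database_name }}
{% endif %}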
[1] https://bugs.launchpad.net/devstack/+bug/1812398
[2] 8253787137
Change-Id: I18c99ff4213ce456868e64eab63a4257910b9b8e
Closes-Bug: #1829705
Right now every controller rotates fernet keys. This is nice because
should any controller die, we know the remaining ones will rotate the
keys. However, we are currently over-rotating the keys.
When we over rotate keys, we get logs like this:
This is not a recognized Fernet token <token> TokenNotFound
Most clients can recover and get a new token, but some clients (like
Nova passing tokens to other services) cannot, because they don't have
the password needed to obtain a new token.
With three controllers, the keystone-fernet crontabs show the once-a-day
rotation correctly staggered across the three controllers:
ssh ctrl1 sudo cat /etc/kolla/keystone-fernet/crontab
0 0 * * * /usr/bin/fernet-rotate.sh
ssh ctrl2 sudo cat /etc/kolla/keystone-fernet/crontab
0 8 * * * /usr/bin/fernet-rotate.sh
ssh ctrl3 sudo cat /etc/kolla/keystone-fernet/crontab
0 16 * * * /usr/bin/fernet-rotate.sh
Currently with three controllers we have this keystone config:
[token]
expiration = 86400 (although the keystone default is one hour)
allow_expired_window = 172800 (this is the keystone default)
[fernet_tokens]
max_active_keys = 4
Currently, kolla-ansible configures key rotation according to the following:
rotation_interval = token_expiration / num_hosts
This means we rotate keys more quickly the more hosts we have, which doesn't
make much sense.
Keystone docs state:
max_active_keys =
((token_expiration + allow_expired_window) / rotation_interval) + 2
For details see:
https://docs.openstack.org/keystone/stein/admin/fernet-token-faq.html
Rotation is based on pushing out a staging key, so should any server
start using that key, other servers will consider that valid. Then each
server in turn starts using the staging key, demoting the existing
primary key to a secondary key as it does so. Eventually you prune the
secondary keys when there is no token in the wild that would need to be
decrypted using that key. So this all makes sense.
This change adds new variables for fernet_token_allow_expired_window and
fernet_key_rotation_interval, so that we can calculate the correct
number of active keys. We now set the default rotation interval
so as to minimise the number of active keys to 3 - one primary, one
secondary, one buffer.
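For example, using the values above and assuming the default rotation
interval works out to three days (259200s):
max_active_keys = ((86400 + 172800) / 259200) + 2 = 1 + 2 = 3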
This change also fixes the fernet cron job generator, which was broken
in the following cases:
* requesting an interval of more than 1 day resulted in no jobs
* requesting an interval of more than 60 minutes, unless an exact
multiple of 60 minutes, resulted in no jobs
It should now be possible to request any interval up to a week divided
by the number of hosts.
Change-Id: I10c82dc5f83653beb60ddb86d558c5602153341a
Closes-Bug: #1809469
Before making changes to this script, document its behaviour with a unit
test.
There are two major issues:
* requesting an interval of more than 1 day results in no jobs
* requesting an interval of more than 60 minutes, unless an exact
multiple of 60 minutes, results in no jobs
Change-Id: I655da1102dfb4ca12437b7db0b79c9a61568f79e
Related-Bug: #1809469
When integrating a third-party component into OpenStack with
kolla-ansible, it may be necessary to mount some extra volumes into a
container.
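A hypothetical example of the resulting interface (the variable name is
assumed to follow a <service>_extra_volumes pattern):

nova_compute_extra_volumes:
  - "/opt/vendor-plugin:/opt/vendor-plugin:ro"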
Change-Id: I69108209320edad4c4ffa37dabadff62d7340939
Implements: blueprint support-extra-volumes