This change enables the use of Docker healthchecks for the
keystone-fernet container. The healthcheck verifies that "key 0" has
the correct permissions, and that rsync is able to distribute keys to
the other Keystone hosts.
Implements: blueprint container-health-check
Change-Id: I17bea723d4109e869cd05d211f6f8e4653f46e17
Services which use the Apache HTTP server to handle HTTP requests are
subject to its TimeOut directive [1], which defaults to 60 seconds. APIs
which come under heavy load, such as Cinder, can sometimes exceed this,
which results in an HTTP 504 Gateway Timeout or similar. However, the
request can still be serviced without error. For example, if Nova calls
the Cinder API to detach a volume, and this operation takes longer
than the shorter of the two timeouts, Nova will emit a stack trace
with a 504 Gateway Timeout. Some time later, the request to detach
the volume will succeed. The Nova and Cinder DBs are then
out-of-sync with each other, and frequently DB surgery is required.
Although strictly this category of bugs should be fixed in the OpenStack
services themselves, it is not realistic to expect this to happen in the
short term. Therefore, this change makes it easier to set the Apache
HTTP timeout via a new variable.
An example of a related bug is here:
https://bugs.launchpad.net/nova/+bug/1888665
Whilst this timeout can currently be set by overriding the WSGI
config for individual services, this change makes it much easier.
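As an example, the timeout could then be raised in globals.yml like
this (a minimal sketch, assuming the new variable is named
kolla_httpd_timeout; the value is illustrative):

    kolla_httpd_timeout: 600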
Change-Id: Ie452516655cbd40d63bdad3635fd66693e40ce34
Closes-Bug: #1917648
This change adds support for the OpenID Connect authentication
flow in Keystone and enables both ID token and access token
authentication flows. The ID token configuration is designed to allow
users to authenticate via Horizon using identity federation, whereas the
access token is used to allow users to authenticate in the OpenStack CLI
as a federated user.
Without this change, anyone who wants to configure OpenStack to use
identity federation needs to do a lot of configuration in Keystone and
Horizon, and register quite a number of different resources using
the CLI, such as mappings, identity providers, federated protocols, and
so on. Therefore, with this change, we propose a method for operators to
introduce/present the IdP's metadata to Kolla-Ansible, and based on the
presented metadata, Kolla-Ansible takes care of all of the
configuration needed to prepare OpenStack to work in a federated
environment.
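A trimmed globals.yml sketch of the intended interface (attribute
names and paths are illustrative, not authoritative):

    keystone_identity_providers:
      - name: "myidp"
        protocol: "openid"
        identifier: "https://accounts.example.com"
        public_name: "Authenticate via myidp"
        attribute_mapping: "mapping_for_myidp"
        metadata_folder: "/etc/kolla/config/keystone/myidp-metadata"
    keystone_identity_mappings:
      - name: "mapping_for_myidp"
        file: "/etc/kolla/config/keystone/myidp-mapping.json"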
Implements: blueprint add-openid-support
Co-Authored-By: Jason Anderson <jasonanderson@uchicago.edu>
Change-Id: I0203a3470d7f8f2a54d5e126d947f540d93b8210
During a deploy, if keystone Fernet key rotation happens before the
keystone container starts, the rotation may fail with 'permission
denied'. This happens because only config.json for the keystone
container sets the permissions for /etc/keystone/fernet-keys.
This change fixes the issue by also setting the permissions for
/etc/keystone/fernet-keys in config.json for keystone-fernet and
keystone-ssh.
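For reference, a minimal sketch of the permissions entry carried by
config.json (kolla's config.json is JSON; the path and owner are shown
as for the keystone role and should be treated as illustrative):

    {
        "permissions": [
            {
                "path": "/etc/keystone/fernet-keys",
                "owner": "keystone:keystone",
                "recurse": true
            }
        ]
    }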
Change-Id: I561e4171d14dcaad8a2a9a36ccab84a670daa904
Closes-Bug: #1888512
Currently we check the age of the primary Fernet key on Keystone
startup, and fail if it is older than the rotation interval. While this
may seem sensible, there are various reasons why the key may be older
than this:
* if the rotation interval is not a factor of the number of seconds in a
week, the rotation schedule will be lumpy, with the final interval of
the week being up to twice the nominal rotation interval
* if a keystone host is unavailable at its scheduled rotation time,
rotation will not happen. This may happen multiple times
We could do either of the following to avoid this issue:
1. remove the check on the age of the key
2. multiply the rotation interval by some factor to determine the
allowed key age
This change goes for the simpler option 1. It also cleans up some
terminology in the keystone-startup.sh script.
Closes-Bug: #1895723
Change-Id: I2c35f59ae9449cb1646e402e0a9f28ad61f918a8
keystone-startup.sh is using fernet_token_expiry instead of
fernet_key_rotation_interval, which results in a restart loop of the
keystone containers when they are restarted after 2-3 days.
Closes-Bug: #1895723
Change-Id: Ifff77af3d25d9dc659fff34f2ae3c6f2670df0f4
When the internal VIP is moved in the event of a failure of the active
controller, OpenStack services can become unresponsive as they try to
talk with MariaDB using connections from the SQLAlchemy pool.
It has been argued that OpenStack doesn't really need to use connection
pooling with MariaDB [1]. This commit reduces the use of connection
pooling via two configuration options:
- max_pool_size is set to 1 to allow only a single connection in the
pool (it is not possible to disable connection pooling entirely via
oslo.db, and max_pool_size = 0 means unlimited pool size)
- lower connection_recycle_time from the default of one hour to 10
seconds, which means the single connection in the pool will be
recreated regularly
These settings have been shown to improve the responsiveness of the
system in the event of a failover.
[1] http://lists.openstack.org/pipermail/openstack-dev/2015-April/061808.html
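Assuming the two settings are exposed as Ansible variables with the
following names (a sketch; verify the exact names in group_vars), the
resulting defaults are:

    database_max_pool_size: 1
    database_connection_recycle_time: 10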
Change-Id: Ib6a62d4428db9b95569314084090472870417f3d
Closes-Bug: #1896635
This change adds support for encryption of communication between
OpenStack services and RabbitMQ. Server certificates are supported, but
currently client certificates are not.
The kolla-ansible certificates command has been updated to support
generating certificates for RabbitMQ for development and testing.
RabbitMQ TLS is enabled in the all-in-one source CI jobs, or when
the Zuul 'tls_enabled' variable is true.
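A minimal globals.yml sketch to enable this in a deployment (variable
name assumed from this change):

    rabbitmq_enable_tls: "yes"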
Change-Id: I4f1d04150fb2b5af085b762890092f87ae6076b5
Implements: blueprint message-queue-ssl-support
It was found to be useless in [1]. It is one of the usages of
distro_python_version. Note that Freezer and Horizon still use
python_path (and hence distro_python_version) for different purposes.
[1] https://review.opendev.org/675822
Change-Id: I6d6d9fdf4c28cb2b686d548955108c994b685bb1
Partially-Implements: blueprint drop-distro-python-version
The goal of this change is to normalize the construction and use
of internal, external, and admin URLs. While extending Kolla-Ansible
to enable a more flexible method to manage external URLs, we noticed
that the same URL was constructed multiple times in different parts
of the code. This can make it difficult for people who want to work
with these URLs, and creates inconsistencies in a large code base over
time. Therefore, we are proposing here the use of a single
Kolla-Ansible variable per endpoint URL, which makes it easier
for people who are interested in overriding/extending these URLs.
As an example, we extended Kolla-ansible to facilitate the "override"
of public (external) URLs with the following standard
"<component/serviceName>.<companyBaseUrl>".
Therefore, the "NAT/redirect" in the SSL termination system (HAproxy,
HTTPD or some other) is done via the service name, and not by the port.
This allows operators to easily and automatically create more friendly
URL names. To develop this feature, we first applied this patch that
we are sending now to the community. We did that to reduce the surface
of changes in Kolla-ansible.
Another example is the integration of Kolla-Ansible and Consul, which
we also implemented internally, and which also requires URL changes.
Therefore, this change is essential to reduce code duplication, and to
make it easier for users/developers to work with and customize the
service URLs.
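As a hypothetical illustration of the single-variable-per-endpoint
approach (variable name and URL purely illustrative):

    keystone_public_url: "https://keystone.example.com"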
Change-Id: I73d483e01476e779a5155b2e18dd5ea25f514e93
Signed-off-by: Rafael Weingärtner <rafael@apache.org>
This patch introduces a global keep-alive timeout value for services
that leverage httpd + WSGI to handle HTTP/HTTPS requests. The default
value is one minute.
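A globals.yml sketch (assuming the variable is named
kolla_httpd_keep_alive; value in seconds, matching the default above):

    kolla_httpd_keep_alive: 60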
Change-Id: Icf7cb0baf86b428a60a7e9bbed642999711865cd
Partially-Implements: blueprint add-ssl-internal-network
Keystone was not loading the correct mod_ssl library in CentOS 8
deployments.
Change-Id: I604d675ba7ad28922f360fdc729746f99c1507b4
Partially-Implements: blueprint add-ssl-internal-network
This patch introduces optional backend encryption for the Keystone
service. When used in conjunction with enabling TLS for service API
endpoints, network communication will be encrypted end to end, from
the client through HAProxy to the Keystone service.
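A minimal globals.yml sketch to enable it (variable name assumed; the
service-specific toggle introduced by this change may differ):

    keystone_enable_tls_backend: "yes"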
Change-Id: I6351147ddaff8b2ae629179a9bc3bae2ebac9519
Partially-Implements: blueprint add-ssl-internal-network
There are cases when a multinode deployment ends up with an unusable
Keystone public WSGI application on some nodes.
The root cause is that the Keystone public WSGI application does not
find Fernet keys on startup - and then persists in returning 500 errors
for all requests - due to a race condition between
fernet_setup/fernet-push.sh and Keystone startup.
Depends-On: https://review.opendev.org/703742/
Change-Id: I63709c2e3f6a893db82a05640da78f492bf8440f
Closes-Bug: #1846789
Currently the WSGI configuration for binary images uses python2.7
site-packages in some places. This change uses distro_python_version to
select the correct python path.
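As a sketch (path and variable name assumed, not taken verbatim from
this change), the site-packages path can be derived as:

    python_path: "/usr/lib/python{{ distro_python_version }}/site-packages"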
Change-Id: Id5f3f0ede106498b9264942fa0399d7c7862c122
Partially-Implements: blueprint python-3
Since Debian and Ubuntu are already Python3-only and do not ship an
unversioned Python binary (no /usr/bin/python), we need to call the
fetch-fernet-tokens script using distro_python_version.
Backport: train
Related-Bug: #1859047
Change-Id: I42378af9b25f14079fc57b4068ab25d5d4877362
Currently we don't put global Apache error logs into /var/log/kolla;
this change adds statements that redirect those logs there.
The log file names are adapted so that they are picked up by the
OpenStack WSGI logging fluentd input config and the existing logrotate
cron entries.
Change-Id: I21216e688a1993239e3e81411a4e8b6f13e138c2
In source images, keystone-manage is installed to a virtualenv in
/var/lib/kolla/venv. This is not in the PATH for cron jobs, which always
use PATH=/usr/bin:/bin. This results in the following error:
/usr/bin/fernet-rotate.sh: line 3: keystone-manage: command not found
However this error is not typically visible, since cron logs to syslog
and we do not configure fluentd to collect these logs.
This change configures the PATH in the fernet-rotate.sh script for
source images.
Change-Id: Ib49ea586d36ae32d01b9610a48b13798db4a4cd5
Closes-Bug: #1850711
Introduce kolla_address filter.
Introduce put_address_in_context filter.
Add AF config to vars.
Address contexts:
- raw (default): <ADDR>
- memcache: inet6:[<ADDR>]
- url: [<ADDR>]
Other changes:
globals.yml - mention just IP in comment
prechecks/port_checks (api_intf) - kolla_address handles validation
3x interface conditional (swift configs: replication/storage)
2x interface variable definition with hostname
(haproxy listens; api intf)
1x interface variable definition with hostname with bifrost exclusion
(baremetal pre-install /etc/hosts; api intf)
neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on tunnel network
basic multinode source CI job for IPv6
prechecks for rabbitmq and qdrouterd use proper NSS database now
MariaDB Galera Cluster WSREP SST mariabackup workaround
(socat and IPv6)
Ceph naming workaround in CI
TODO: probably needs documenting
RabbitMQ IPv6-only proto_dist
Ceph ms switch to IPv6 mode
Remove neutron-server ml2_type_vxlan/vxlan_group setting
as it is not used (let's avoid any confusion)
and could break setups without proper multicast routing
if it started working (also IPv4-only)
haproxy upgrade checks for slaves based on ipv6 addresses
TODO:
ovs-dpdk grabs ipv4 network address (w/ prefix len / submask)
not supported, invalid by default because neutron_external has no address
No idea whether ovs-dpdk works at all atm.
ml2 for xenapi
Xen is not supported too well.
This would require working with XenAPI facts.
rp_filter setting
This would require meddling with ip6tables (there is no sysctl param).
By default nothing is dropped.
Unlikely we really need it.
ironic dnsmasq is configured IPv4-only
dnsmasq needs DHCPv6 options and testing in vivo.
KNOWN ISSUES (beyond us):
One cannot use IPv6 address to reference the image for docker like we
currently do, see: https://github.com/moby/moby/issues/39033
(docker_registry; docker API 400 - invalid reference format)
workaround: use hostname/FQDN
RabbitMQ may fail to bind to IPv6 if hostname resolves also to IPv4.
This is due to old RabbitMQ versions available in images.
IPv4 is preferred by default and may fail in the IPv6-only scenario.
This should be no problem in real life as IPv6-only is indeed IPv6-only.
Also, when new RabbitMQ (3.7.16/3.8+) makes it into images, this will
no longer be relevant as we supply all the necessary config.
See: https://github.com/rabbitmq/rabbitmq-server/pull/1982
For reliable runs, at least Ansible 2.8 is required (2.8.5 confirmed
to work well). Older Ansible versions are known to miss IPv6 addresses
in interface facts. This may affect redeploys, reconfigures and
upgrades which run after VIP address is assigned.
See: https://github.com/ansible/ansible/issues/63227
Bifrost Train does not support IPv6 deployments.
See: https://storyboard.openstack.org/#!/story/2006689
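A minimal usage sketch of the new filters (names on the left are
illustrative):

    api_interface_address: "{{ 'api' | kolla_address }}"
    api_url_host: "{{ 'api' | kolla_address | put_address_in_context('url') }}"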
Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
Implements: blueprint ipv6-control-plane
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Both Ubuntu source and binary install types support Python 3 now, so
python_path should be updated.
Depends-On: https://review.opendev.org/675581
Partially Implements: blueprint python3-support
Change-Id: I4bf721b44220bde2d25d4d985f5ca411699a5a72
After all of the discussions we had on
"https://review.opendev.org/#/c/670626/2", I studied all projects that
have an "oslo_messaging" section. Afterwards, I applied the same method
that is already used in the "oslo_messaging" section in Nova, Cinder, and
others. This guarantees that we have a consistent method to
enable/disable notifications across projects based on components (e.g.
Ceilometer) being enabled or disabled. Here follows the list of
components and the respective changes I made.
* Aodh:
The section is declared, but it is not used. Therefore, it will
be removed in an upcoming PR.
* Congress:
The section is declared, but it is not used. Therefore, it will
be removed in an upcoming PR.
* Cinder:
It was already properly configured.
* Octavia:
The section is declared, but it is not used. Therefore, it will
be removed in an upcoming PR.
* Heat:
It was already using a similar scheme; I just modified it a little bit
to be the same as we have in all other components
* Ceilometer:
Ceilometer publishes some messages to RabbitMQ. However, the
default driver is "messagingv2", and not '' (empty) as defined in Oslo;
these configurations are defined in ceilometer/publisher/messaging.py.
Therefore, we do not need to do anything for the
"oslo_messaging_notifications" section in Ceilometer.
* Tacker:
It was already using a similar scheme; I just modified it a little bit
to be the same as we have in all other components
* Neutron:
It was already properly configured.
* Nova
It was already properly configured. However, we found another issue
with its configuration. Kolla-Ansible does not configure Nova
notifications as it should. If 'searchlight' is not installed (enabled),
'notification_format' should be 'unversioned'. The default is
'both', so Nova will send notifications to the
versioned_notifications queue, but that queue has no consumer when
'searchlight' is disabled. In our case, the queue got 511k messages.
The huge amount of "stuck" messages made the RabbitMQ cluster
unstable.
https://bugzilla.redhat.com/show_bug.cgi?id=1478274
https://bugs.launchpad.net/ceilometer/+bug/1665449
* Nova_hyperv:
I added the same configurations as in Nova project.
* Vitrage
It was already using a similar scheme; I just modified it a little bit
to be the same as we have in all other components
* Searchlight
I created a mechanism similar to what we have in AODH, Cinder, Nova,
and others.
* Ironic
I created a mechanism similar to what we have in AODH, Cinder, Nova,
and others.
* Glance
It was already properly configured.
* Trove
It was already using a similar scheme; I just modified it a little bit
to be the same as we have in all other components
* Blazar
It was already using a similar scheme; I just modified it a little bit
to be the same as we have in all other components
* Sahara
It was already using a similar scheme; I just modified it a little bit
to be the same as we have in all other components
* Watcher
I created a mechanism similar to what we have in AODH, Cinder, Nova,
and others.
* Barbican
I created a mechanism similar to what we have in Cinder, Nova,
and others. I also added a configuration to 'keystone_notifications'
section. Barbican needs its own queue to capture events from Keystone.
Otherwise, it has an impact on Ceilometer and other systems that are
connected to the "notifications" default queue.
* Keystone
Keystone is the system that triggered this work with the discussions
that followed on https://review.opendev.org/#/c/670626/2. After a long
discussion, we agreed to apply the same approach that we have in Nova,
Cinder and other systems to Keystone. That is what we did. Moreover, we
introduced a new topic "barbican_notifications" when Barbican is
enabled (see the sketch after this list). We also removed the variable
enable_cadf_notifications, as it is obsolete, and the default in
Keystone is CADF.
* Mistral:
The driver was hardcoded to "noop". However, that does not seem like
good practice. Instead, I applied the same standard of using the driver
and pushing to the "notifications" queue if Ceilometer is enabled.
* Cyborg:
I created a mechanism similar to what we have in AODH, Cinder, Nova,
and others.
* Murano
It was already using a similar scheme; I just modified it a little bit
to be the same as we have in all other components
* Senlin
It was already using a similar scheme; I just modified it a little bit
to be the same as we have in all other components
* Manila
It was already using a similar scheme; I just modified it a little bit
to be the same as we have in all other components
* Zun
The section is declared, but it is not used. Therefore, it will
be removed in an upcoming PR.
* Designate
It was already using a similar scheme; I just modified it a little bit
to be the same as we have in all other components
* Magnum
It was already using a similar scheme; I just modified it a little bit
to be the same as we have in all other components
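As a sketch of the resulting per-service pattern, the topics can be
expressed as a list and only rendered into the
"oslo_messaging_notifications" section when at least one consumer is
enabled (attribute names illustrative):

    keystone_notification_topics:
      - name: notifications
        enabled: "{{ enable_ceilometer | bool }}"
      - name: barbican_notifications
        enabled: "{{ enable_barbican | bool }}"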
Closes-Bug: #1838985
Change-Id: I88bdb004814f37c81c9a9c4e5e491fac69f6f202
Signed-off-by: Rafael Weingärtner <rafael@apache.org>
When running deploy or reconfigure for Keystone,
ansible/roles/keystone/tasks/deploy.yml calls init_fernet.yml,
which runs /usr/bin/fernet-rotate.sh, which calls keystone-manage
fernet_rotate.
This means that a token can become invalid if the operator runs
deploy or reconfigure too often.
This change splits out fernet-push.sh from the fernet-rotate.sh
script, then calls fernet-push.sh after the fernet bootstrap
performed in deploy.
Change-Id: I824857ddfb1dd026f93994a4ac8db8f80e64072e
Closes-Bug: #1833729
Right now every controller rotates fernet keys. This is nice because
should any controller die, we know the remaining ones will rotate the
keys. However, we are currently over-rotating the keys.
When we over-rotate keys, we get logs like this:
This is not a recognized Fernet token <token> TokenNotFound
Most clients can recover and get a new token, but some clients (like
Nova passing tokens to other services) can't do that because they do
not have the password to regenerate a new token.
With three controllers, in the keystone-fernet crontab we see the
once-a-day rotation correctly staggered across the three controllers:
ssh ctrl1 sudo cat /etc/kolla/keystone-fernet/crontab
0 0 * * * /usr/bin/fernet-rotate.sh
ssh ctrl2 sudo cat /etc/kolla/keystone-fernet/crontab
0 8 * * * /usr/bin/fernet-rotate.sh
ssh ctrl3 sudo cat /etc/kolla/keystone-fernet/crontab
0 16 * * * /usr/bin/fernet-rotate.sh
Currently with three controllers we have this keystone config:
[token]
expiration = 86400 (although, keystone default is one hour)
allow_expired_window = 172800 (this is the keystone default)
[fernet_tokens]
max_active_keys = 4
Currently, kolla-ansible configures key rotation according to the following:
rotation_interval = token_expiration / num_hosts
This means we rotate keys more quickly the more hosts we have, which doesn't
make much sense.
Keystone docs state:
max_active_keys =
((token_expiration + allow_expired_window) / rotation_interval) + 2
For details see:
https://docs.openstack.org/keystone/stein/admin/fernet-token-faq.html
Rotation is based on pushing out a staging key, so should any server
start using that key, other servers will consider it valid. Then each
server in turn starts using the staging key, each in turn demoting the
existing primary key to a secondary key. Eventually you prune the
secondary keys when there is no token in the wild that would need to be
decrypted using that key. So this all makes sense.
This change adds new variables for fernet_token_allow_expired_window and
fernet_key_rotation_interval, so that we can correctly calculate the
correct number of active keys. We now set the default rotation interval
so as to minimise the number of active keys to 3 - one primary, one
secondary, one buffer.
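As a worked example with the values above (expiration = 86400,
allow_expired_window = 172800), a rotation interval of 259200 seconds
(3 days) yields:
    max_active_keys = ((86400 + 172800) / 259200) + 2 = 3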
This change also fixes the fernet cron job generator, which was broken
in the following cases:
* requesting an interval of more than 1 day resulted in no jobs
* requesting an interval of more than 60 minutes, unless an exact
multiple of 60 minutes, resulted in no jobs
It should now be possible to request any interval up to a week divided
by the number of hosts.
Change-Id: I10c82dc5f83653beb60ddb86d558c5602153341a
Closes-Bug: #1809469
1. The Keystone WSGI scripts don't have a python3- prefix, although they
are using python 3.
2. The Placement WSGI script doesn't have a python3- prefix, although it
is using python 3.
Depends-On: https://review.openstack.org/651327
Change-Id: I805df8f85634edea8322495ca73897d44967cfe6
Closes-Bug: #1823989
This allows keystone service endpoints to use custom hostnames, and adds the
following variables:
* keystone_internal_fqdn
* keystone_external_fqdn
These default to the old values of kolla_internal_fqdn or
kolla_external_fqdn.
This also adds the following variables:
* keystone_admin_listen_port
* keystone_public_listen_port
These default to keystone_admin_port and keystone_public_port,
respectively, for backward compatibility.
These options allow the user to differentiate between the port the
service listens on, and the port the service is reachable on. This is
useful for external load balancers which live on the same host as the
service itself.
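For example (hostnames and port purely illustrative):

    keystone_internal_fqdn: "keystone.internal.example.com"
    keystone_external_fqdn: "keystone.example.com"
    keystone_public_listen_port: 5000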
Change-Id: I50c46c674134f9958ee4357f0f4eed5483af2214
Implements: blueprint service-hostnames
Use <project>_install_type instead of kolla_install_type
to set python_path. For example, the global kolla_install_type
is 'binary', but the user wants to deploy Horizon from 'source'.
The Horizon templates then still use
python_path=/usr/share/openstack-dashboard, which is wrong.
Change-Id: Ide6a24e17b1f8ab6506aa5e53f70693706830418
Currently osprofiler only chooses Elasticsearch,
which is only supported on x86.
On other platforms like aarch64, osprofiler cannot
be used since there is no Elasticsearch package.
Enable osprofiler with enable_osprofiler: "yes",
which chooses Elasticsearch by default.
Choose Redis with enable_redis: "yes" and osprofiler_backend: "redis".
On platforms without Elasticsearch support, like aarch64,
set enable_elasticsearch: "no".
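Putting it together, a globals.yml sketch for such a platform (all
variable names as above):

    enable_osprofiler: "yes"
    enable_redis: "yes"
    osprofiler_backend: "redis"
    enable_elasticsearch: "no"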
Change-Id: I68fe7a33e11d28684962fc5d0b3d326e90784d78