Currently we check the age of the primary Fernet key on Keystone
startup, and fail if it is older than the rotation interval. While this
may seem sensible, there are various reasons why the key may be older
than this:
* if the rotation interval is not a factor of the number of seconds in a
  week, the rotation schedule will be lumpy, with the final gap being up
  to twice the nominal rotation interval (for example, with an interval
  just over 3.5 days, only one rotation fits into each week, leaving a
  gap of 7 days - almost twice the nominal interval)
* if a keystone host is unavailable at its scheduled rotation time,
  rotation will not happen. This may occur several times in a row
We could do several things to avoid this issue:
1. remove the check on the age of the key
2. multiply the rotation interval by some factor to determine the
allowed key age
This change takes the simpler option 1. It also cleans up some
terminology in the keystone-startup.sh script.
Closes-Bug: #1895723
Change-Id: I2c35f59ae9449cb1646e402e0a9f28ad61f918a8
Config plays do not need to check containers. This avoids skipping
tasks during the genconfig action.
Ironic and Glance rolling upgrades are handled specially.
Swift and Bifrost do not use the handlers at all.
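As a rough sketch of the idea (the task shape and the kolla_action
value used for genconfig are assumptions, not the exact
implementation), the container checks can be guarded so that they are
skipped when only generating configuration:

    # Assumed guard: skip the container check during genconfig.
    - name: Check nova-api container
      become: true
      kolla_docker:
        action: "recreate_or_restart_container"
        name: "nova_api"
      when: kolla_action != "config"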
Partially-Implements: blueprint performance-improvements
Change-Id: I140bf71d62e8f0932c96270d1f08940a5ba4542a
Add TLS support for backend Neutron API Server communication using
HAProxy to perform TLS termination. When used in conjunction with
enabling TLS for service API endpoints, network communication will be
encrypted end to end, from client through HAProxy to the Neutron
service.
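A minimal sketch of how this might be enabled (assuming a
kolla_enable_tls_backend toggle alongside the existing internal TLS
setting; exact variable names may differ):

    # globals.yml - sketch; variable names are assumptions
    kolla_enable_tls_internal: "yes"
    kolla_enable_tls_backend: "yes"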
Change-Id: Ib333a1f1bd12491df72a9e52d961161210e2d330
Partially-Implements: blueprint add-ssl-internal-network
If iptables is not installed, e.g. in the CentOS 8 cloud image, and
Docker iptables management is enabled, we get the following errors:
    Failed to find iptables: exec: "iptables": executable file not
    found in $PATH

    failed to start daemon: Error initializing network controller:
    error obtaining controller instance: failed to create NAT chain
    DOCKER: Iptables not found
This change installs the iptables package when Docker iptables
management is enabled.
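A sketch of the kind of task this adds (the condition variable
docker_manage_iptables is a hypothetical stand-in for the real flag):

    # Install iptables only when Docker manages iptables rules itself.
    - name: Install iptables package
      become: true
      package:
        name: iptables
        state: present
      when: docker_manage_iptables | bool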
Change-Id: I3ba5318debccafb28c3cbce8e4e9813c28b086fc
Closes-Bug: #1899060
This fixes the `certificates` command to not include CSRs in
the haproxy bundle.
The regular expression used to select the files was incorrect.
Change-Id: If25a6d5dd40f507fea4470be01baeeb7c8a790b4
We currently use the octavia user to upload the image, so it is better
to create an octavia openrc file for that user.
Implements: blueprint implement-automatic-deploy-of-octavia
Change-Id: Ib53d00fa4a6ee59b8a0b2245f83786a6af0cbf53
Use with_first_found on placement-api-wsgi to allow users to override
it, and to keep consistency with other roles.
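For illustration, such a task could look like this (paths are
assumptions based on the usual kolla-ansible layout):

    # Prefer a user-supplied placement-api-wsgi file, falling back to
    # the template shipped with the role.
    - name: Copying over placement-api-wsgi
      become: true
      template:
        src: "{{ item }}"
        dest: "{{ node_config_directory }}/placement-api/placement-api-wsgi"
        mode: "0660"
      with_first_found:
        - "{{ node_custom_config }}/placement/placement-api-wsgi"
        - "placement-api-wsgi.j2"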
Change-Id: I11c84db6df1bb5be61db5b6b0adf8c160a2bd931
Closes-Bug: #1898766
* ipxe_enabled was removed in Ussuri, now there is a separate ipxe boot
interface.
* iPXE now has its own set of configuration options for the bootfile
  and config template, and the values previously set when iPXE was
  enabled are now the defaults in ironic. The overrides have been
  removed, since they match the iPXE defaults.
Change-Id: I9d9f030ee4be979d0a849b59e5eb991f2d82f6a4
This change enables the use of Docker healthchecks for core OpenStack
services.
check-failures.sh has also been updated to treat containers with an
unhealthy status as failed.
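As a sketch, a healthcheck definition for a service container might
look like the following (the healthcheck_curl helper and all names and
values here are assumptions for illustration):

    # Sketch of a Docker healthcheck definition for one service.
    keystone_healthcheck:
      interval: 30
      retries: 3
      start_period: 5
      test: ["CMD-SHELL", "healthcheck_curl http://{{ api_interface_address }}:5000"]
      timeout: 30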
Implements: blueprint container-health-check
Change-Id: I79c6b11511ce8af70f77e2f6a490b59b477fefbb
Keepalived and haproxy cooperate to provide control plane HA in
kolla-ansible deployments.
Care must be taken to avoid prolonged loss of availability during
reconfiguration and upgrades.
This patch aims to provide that care.
There is nothing special about keepalived upgrade compared to
reconfig, hence it is simplified to run the same code as for
deploy.
The broken safe-upgrade logic is replaced by common handler code
whose goal is to ensure we bring down the current master only after
the backups are ready.
This change introduces an ignore_missing switch to the kolla_docker
module that allows missing containers to be ignored (as they are
logically stopped).
All tests are included.
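For example, stopping a container that may not exist could look like
this (the surrounding task context is assumed):

    # With ignore_missing, the task no longer fails when the container
    # is absent (i.e. logically stopped).
    - name: Stopping keepalived container
      become: true
      kolla_docker:
        action: "stop_container"
        name: "keepalived"
        ignore_missing: true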
Change-Id: I22ddec5f7ee4a7d3d502649a158a7e005fe29c48
This patchset implements:
- network (lb-mgmt-net)
- security groups and rules (used by amphora and health manager)
- amphora flavor (used by amphora)
- nova keypair (used by amphora at the time of debugging)
Add an octavia_amp_listen_port variable, which is used by the amphorae.
Add amp_image_owner_id to octavia.conf.
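As an illustration of registering one of these resources (the module
and all names/values are illustrative assumptions, not necessarily what
the patch uses):

    # Sketch: create the Octavia management network once per deploy.
    - name: Create lb-mgmt-net network
      os_network:
        name: "lb-mgmt-net"
        state: present
      run_once: true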
Implements: blueprint implement-automatic-deploy-of-octavia
Co-Authored-By: zhangchun <zhangchun@yovole.com>
Depends-On: https://review.opendev.org/652030
Change-Id: I67009d046925cfc02c1e0073c80085c1471975f6
keystone-startup.sh is using fernet_token_expiry instead of
fernet_key_rotation_interval, which results in a restart loop of
keystone containers when they are restarted after 2-3 days.
Closes-Bug: #1895723
Change-Id: Ifff77af3d25d9dc659fff34f2ae3c6f2670df0f4
This patch introduces an optional backend encryption for the Ironic API
service. When used in conjunction with enabling TLS for service API
endpoints, network communication will be encrypted end to end, from
client through HAProxy to the Ironic service.
Change-Id: I9edf7545c174ca8839ceaef877bb09f49ef2b451
Partially-Implements: blueprint add-ssl-internal-network
If the common role is executed against a set of hosts that are not all
in the fluentd group, the run_once tasks that find customisations may be
skipped. This causes a later failure when accessing the registered
variables for those tasks.
This issue was raised on the mailing list:
http://lists.openstack.org/pipermail/openstack-discuss/2020-September/016932.html
This issue only affects the master branch, due to addition of groups
for the common role in I6a4676bf6efeebc61383ec7a406db07c7a868b2a.
This change fixes the issue by always running the find tasks, if fluentd
is enabled.
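Roughly, the fixed lookup takes this shape (structure and names are
assumptions): the find task now runs whenever fluentd is enabled,
rather than depending on the targeted host groups:

    # Always run the customisation lookup when fluentd is enabled, so
    # the registered result exists on every host that references it.
    - name: Find custom fluentd filters
      find:
        paths: "{{ node_custom_config }}/fluentd/filter"
        patterns: "*.conf"
      delegate_to: localhost
      run_once: true
      register: find_custom_fluentd_filters
      when: enable_fluentd | bool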
Change-Id: I559c4b94d18c7f36d43e1d88629ed44668abf859
When the internal VIP is moved in the event of a failure of the active
controller, OpenStack services can become unresponsive as they try to
talk with MariaDB using connections from the SQLAlchemy pool.
It has been argued that OpenStack doesn't really need to use connection
pooling with MariaDB [1]. This commit reduces the use of connection
pooling via two configuration options:
- max_pool_size is set to 1 to allow only a single connection in the
pool (it is not possible to disable connection pooling entirely via
oslo.db, and max_pool_size = 0 means unlimited pool size)
- lower connection_recycle_time from the default of one hour to 10
seconds, which means the single connection in the pool will be
recreated regularly
These settings have been shown to improve the reactivity of the system
in the event of a failover.
[1] http://lists.openstack.org/pipermail/openstack-dev/2015-April/061808.html
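In kolla-ansible terms this could be expressed as two globals feeding
the [database] section of each service's configuration (the variable
names are assumptions):

    # Sketch: defaults applied to oslo.db settings for all services.
    database_max_pool_size: 1
    database_connection_recycle_time: 10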
Change-Id: Ib6a62d4428db9b95569314084090472870417f3d
Closes-Bug: #1896635
This allows for more config flexibility - e.g. running multiple
backends with a common frontend.
Note this is a building block for future work on the Let's Encrypt
validator (which should offer a backend and share a frontend with any
service running off ports 80/443 - only horizon in the current default
config), as well as any work towards a single port (that is, a single
frontend) with multiple services anchored at its paths (which is the
new recommended default).
Change-Id: Ie088fcf575e4b5e8775f1f89dd705a275725e26d
Partially-Implements: blueprint letsencrypt-https
This allows for more config flexibility - e.g. running multiple
backends with a common frontend.
This is not possible with the 'listen' approach (which enforces a
combined frontend and backend).
Additionally, it does not really make sense to support two ways to do
exactly the same thing when the process is automated; 'listen' is
really meant for humans who do not want to write separate sections.
Hence this deprecates the 'listen' variant.
At the moment both templates work exactly the same.
The real flexibility comes in following patches.
Note this is a building block for future work on the Let's Encrypt
validator (which should offer a backend and share a frontend with any
service running off ports 80/443 - only horizon in the current default
config), as well as any work towards a single port (that is, a single
frontend) with multiple services anchored at its paths (which is the
new recommended default).
Change-Id: I2362aaa3e8069fe146d42947b8dddf49376174b5
Partially-Implements: blueprint letsencrypt-https
Currently there is no option to set container_proxy for only one
service (e.g. magnum). This change adds that option.
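As a sketch, a per-service override might look like this (the
magnum_container_proxy variable name is an assumption):

    # Apply proxy environment variables only to magnum containers,
    # leaving the global container_proxy default untouched.
    magnum_container_proxy:
      http_proxy: "http://proxy.example.com:3128"
      https_proxy: "http://proxy.example.com:3128"
      no_proxy: "localhost,127.0.0.1"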
Change-Id: Ia938ee660ebe8ce84321f721b6292b0b58a06e20