This change fixes the Trove guest agent failing to discover the Swift
endpoint by adding a service_credentials section to guest-agent.conf.
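A minimal sketch of such a section (option names and values are
illustrative; the exact set used by the guest agent may differ):
    [service_credentials]
    # Keystone credentials the guest agent uses to discover the Swift endpoint
    auth_url = http://<keystone-host>:5000/v3
    region_name = RegionOne
    project_name = service
    username = trove
    password = <secret>
    user_domain_name = Default
    project_domain_name = Default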
Closes-Bug: #2048829
Change-Id: I185484d2a0d0a2d4016df6acf8a6b0a7f934c237
This change fixes the Trove guest instance failing to connect to
RabbitMQ by adding quorum queue support to the oslo_messaging_rabbit
section in guest-agent.conf.
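A minimal sketch of the relevant oslo.messaging option (value shown for
illustration):
    [oslo_messaging_rabbit]
    # Use RabbitMQ quorum queues instead of classic queues
    rabbit_quorum_queue = true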
Closes-Bug: #2048822
Change-Id: I94908f8e20981f20fbe4dc18e2091d3798f8b801
This change fixes the Trove guest instance failing to connect to
RabbitMQ by adding durable queue support to the oslo_messaging_rabbit
section in guest-agent.conf.
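A minimal sketch of the relevant option (value shown for illustration):
    [oslo_messaging_rabbit]
    # Declare queues as durable so they survive a RabbitMQ restart
    amqp_durable_queues = true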
Partial-Bug: #2048822
Change-Id: I8efc3c92e861816385e6cda3b231a950a06bf57d
As per the current release's tested runtimes, we test up to Python
3.11, so update the Python classifiers in setup.cfg accordingly.
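For illustration, the classifier list in setup.cfg gains an entry along
these lines (surrounding entries assumed):
    [metadata]
    classifier =
        Programming Language :: Python :: 3
        Programming Language :: Python :: 3.11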
Change-Id: I241e77dbf6bb2085a5bf5d54f9e5b0d2af96fbf3
This adds an extra resize operation to core OpenStack tests. It should
be fast, since we are only increasing the number of cores of the VM,
and it could help catch additional errors in CI tests.
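For reference, the operation being exercised is roughly the following
(flavor name is hypothetical; client syntax varies slightly between
OpenStackClient versions):
    # Resize to a larger flavor, then confirm once the server reaches VERIFY_RESIZE
    openstack server resize --flavor <larger-flavor> <server>
    openstack server resize confirm <server>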
Change-Id: Ia61b995dbffcda4f1e6494548df457231cb67bd7
The addition of an instance resize operation [1] to CI testing is
triggering a failure in kolla-ansible-debian-ovn jobs, which are using a
nodeset with multiple nodes:
oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.
Command: scp -r /var/lib/nova/instances/8ca2c7e8-acae-404c-af7d-6cac38e354b8_resize/disk 192.0.2.2:/var/lib/nova/instances/8ca2c7e8-acae-404c-af7d-6cac38e354b8/disk
Exit code: 255
Stdout: ''
Stderr: "Warning: Permanently added '[192.0.2.2]:8022' (ED25519) to the list of known hosts.\r\nsubsystem request failed on channel 0\r\nscp: Connection closed\r\n"
This is not seen on Ubuntu Jammy, which uses OpenSSH 8.9, while Debian
Bookworm uses OpenSSH 9.2. This is likely related to this change in
OpenSSH 9.0 [2]:
This release switches scp(1) from using the legacy scp/rcp protocol
to using the SFTP protocol by default.
Configure the SFTP subsystem as is done on RHEL9 derivatives. Even
though it is not yet required on Ubuntu, we also configure it there so
we are ready for the Noble release.
[1] https://review.opendev.org/c/openstack/kolla-ansible/+/904249
[2] https://www.openssh.com/txt/release-9.0
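A minimal sketch of the sshd configuration being added (using
internal-sftp avoids depending on the distribution-specific
sftp-server binary path):
    # Enable the SFTP subsystem so scp, which now speaks SFTP by default, keeps working
    Subsystem sftp internal-sftp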
Closes-Bug: #2048700
Change-Id: I9f1129136d7664d5cc3b57ae5f7e8d05c499a2a5
This patch sets the URL of the Glance worker.
If this is set, other glance workers will know how to contact this one
directly if needed. For image import, a single worker stages the image
and other workers need to be able to proxy the import request to the
right one.
With the current setup, Glance image import simply does not work.
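A sketch of the resulting glance-api.conf setting (host and port are
illustrative):
    [DEFAULT]
    # URL other glance-api workers can use to reach this worker directly
    worker_self_reference_url = http://<api-host>:9292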
Closes-Bug: #2048525
Change-Id: I4246dc8a80038358cd5b6e44e991b3e2ed72be0e
The prometheus_cadvisor container has high CPU usage. On various
production systems I checked it sits around 13-16% on controllers,
averaged over the prometheus 1m scrape interval. When viewed with top,
we can see it is a bit spiky and can jump over 100%.
There are various bugs about this, but I found
https://github.com/google/cadvisor/issues/2523 which suggests reducing
the per-container housekeeping interval. This defaults to 1s, which
provides far greater granularity than we need with the default
prometheus scrape interval of 60s.
Reducing the housekeeping interval to 60s on a production controller
reduced the CPU usage from 13% to 3.5% average. This still seems high,
but is more reasonable.
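A sketch of the corresponding cAdvisor flag (assuming it is passed on
the container command line):
    # Collect per-container stats every 60s instead of the 1s default
    --housekeeping_interval=60s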
Change-Id: I89c62a45b1f358aafadcc0317ce882f4609543e7
Closes-Bug: #2048223
Some containers exit with code 143 instead of 0, but this is still OK.
This patch simply allows exit code 143 (SIGTERM) as a fix; a minimal
sketch follows the service list below. Details are in the bug report.
Services which exited with 143 (SIGTERM):
kolla-cron-container.service
kolla-designate_producer-container.service
kolla-keystone_fernet-container.service
kolla-letsencrypt_lego-container.service
kolla-magnum_api-container.service
kolla-mariadb_clustercheck-container.service
kolla-neutron_l3_agent-container.service
kolla-openvswitch_db-container.service
kolla-openvswitch_vswitchd-container.service
kolla-proxysql-container.service
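A minimal sketch of the unit change (drop-in location is illustrative):
    # e.g. a drop-in for the affected kolla-*-container.service units
    [Service]
    # Treat SIGTERM's exit code as a successful stop
    SuccessExitStatus=143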
Partial-Bug: #2048130
Change-Id: Ia8c85d03404cfb368e4013066c67acd2a2f68deb
We previously used ElasticSearch Curator for managing log
retention. Now that we have moved to OpenSearch, we can use
the Index State Management (ISM) plugin which is bundled with
OpenSearch.
This change adds support for automating the configuration of
the ISM plugin via the OpenSearch API. By default, it has
similar behaviour to the previous ElasticSearch Curator
default policy.
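For illustration, the kind of API call involved (policy name, index
pattern and retention period are assumptions, not necessarily the
defaults applied by this change):
    PUT _plugins/_ism/policies/retention
    {
      "policy": {
        "description": "Delete indices after a retention period",
        "default_state": "retain",
        "states": [
          {"name": "retain",
           "actions": [],
           "transitions": [{"state_name": "delete",
                            "conditions": {"min_index_age": "31d"}}]},
          {"name": "delete",
           "actions": [{"delete": {}}],
           "transitions": []}
        ],
        "ism_template": [{"index_patterns": ["<log-index-prefix>-*"], "priority": 1}]
      }
    }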
Closes-Bug: #2047037
Change-Id: I5c6d938f2bc380f1575ee4f16fe17c6dca37dcba
Removed a comment suggesting we use nova-manage db sync --local_cell
when bootstrapping the nova service, since that suggestion has now been
implemented in Kolla. See [1] for more details.
[1]: https://review.opendev.org/c/openstack/kolla/+/902057
Related-Bug: #2045558
Depends-On: Ic64eb51325b3503a14ebab9b9ff2f4d9caec734a
Change-Id: I591f83c4886f5718e36011982c77c0ece6c4cbd7