neutron_tls_proxy and glance_tls_proxy use the haproxy container
image. Pin them to haproxy_tag directly.
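A minimal ``globals.yml`` sketch of pinning the image (the tag value is
illustrative only)::

    # Both TLS proxies now follow the haproxy image tag
    haproxy_tag: "zed"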
Change-Id: I73142db48ebe6641520d21b560f16de892e07c34
This change serialises the neutron L3 agent restart process and adds a
user-configurable delay between restarts. This can prevent connectivity
loss caused by all agents being restarted at the same time.
A larger number of routers increases the recovery time, making this
issue more prevalent.
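A hedged ``globals.yml`` sketch; the variable name and default below are
assumptions based on this description, not confirmed::

    # Hypothetical name: seconds to wait between restarting successive
    # neutron-l3-agent containers
    neutron_l3_agent_failover_delay: 40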
Change-Id: I3be0ebfa12965e6ae32d1b5f13f8fd23c3f52b8c
A combination of durable queues and classic queue mirroring can be used
to provide high availability of RabbitMQ. However, these options should
only be used together, otherwise the system will become unstable. Using
the flag ``om_enable_rabbitmq_high_availability`` will either enable
both options at once, or neither of them.
There are some queues that should not be mirrored:
* ``reply`` queues (these have a single consumer and TTL policy)
* ``fanout`` queues (these have a TTL policy)
* ``amq`` queues (these are auto-delete queues, with a single consumer)
An exclusionary pattern is used in the classic mirroring policy. This
pattern is ``^(?!(amq\\.)|(.*_fanout_)|(reply_)).*``
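To opt in, the flag is set in ``globals.yml``; durable queues and the
mirroring policy (with the exclusion pattern above) are then applied
together::

    om_enable_rabbitmq_high_availability: true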
Change-Id: I51c8023b260eb40b2eaa91bd276b46890c215c25
When running in check mode, some prechecks previously failed because
they used the command module, which is silently skipped in check mode.
Other prechecks were not running correctly in check mode, for example
because they looked for a string in empty command output or did not
query which containers are running.
This change fixes these issues.
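One generic Ansible pattern for keeping such prechecks working under
``--check`` is to force the command task to run; this is a sketch, not
the exact tasks changed here::

    - name: Collect listening ports even in check mode
      command: ss -lnt
      register: listening_ports
      check_mode: false
      changed_when: false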
Closes-Bug: #2002657
Change-Id: I5219cb42c48d5444943a2d48106dc338aa08fa7c
Using assert also fails when the conditions are not met, makes clear
what we are actually testing, and is not listed as a skipped task
when the condition holds.
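For example, a conditional ``fail`` task can be expressed as an
``assert`` (a generic sketch; the variable is illustrative)::

    # Instead of a fail task guarded by "when: service_status.rc != 0":
    - name: Assert that the service is running
      assert:
        that:
          - service_status.rc == 0
        fail_msg: "The service is not running"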
Change-Id: I3e396f1c605d5d2644e757bbb3d954efe537b65e
The ``[oslo_messaging_rabbit] heartbeat_in_pthread`` config option
is set to ``true`` for wsgi applications to allow the RabbitMQ
heartbeats to function. For non-wsgi applications it is set to ``false``
as it may otherwise break the service [1].
[1] https://docs.openstack.org/releasenotes/oslo.messaging/zed.html#upgrade-notes
Change-Id: Id89bd6158aff42d59040674308a8672c358ccb3c
We regularly experience issues in Kolla Ansible deployments because
wrong options end up in OpenStack configuration files, and OpenStack
services silently ignore unknown options. We also need to keep on top
of deprecated options that may be removed in the future. Integrating
oslo-config-validator into Kolla Ansible will greatly help.
This adds a shared role to run oslo-config-validator on each service.
It takes into account that services have multiple containers, which may
also use multiple config files. Service roles are extended to use this
shared role. It is executed with the new command
``kolla-ansible validate-config``.
Change-Id: Ic10b410fc115646d96d2ce39d9618e7c46cb3fbc
Move metadata_workers from neutron.conf to metadata_agent.ini.
Remove unknown option placement/os_region_name: we already have
placement/region_name which is the correct one.
This can be backported to previous releases.
Change-Id: I710b5364244d976020656e1ee68e89f337cb3086
Second part of the patchset:
https://review.opendev.org/c/openstack/kolla-ansible/+/799229/
in which it was suggested to split the patch into smaller ones.
This change adds container_engine to the module parameters,
so that when we introduce Podman, kolla_toolbox can be used
with both engines.
Signed-off-by: Ivan Halomi <i.halomi@partner.samsung.com>
Co-authored-by: Martin Hiner <m.hiner@partner.samsung.com>
Change-Id: Ic2093aa9341a0cb36df8f340cf290d62437504ad
Second part of the patchset:
https://review.opendev.org/c/openstack/kolla-ansible/+/799229/
in which it was suggested to split the patch into smaller ones.
This change adds the container_engine variable to the
kolla_container_facts module, preparing the module to be used with both
Docker and Podman without further changes in roles.
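A hedged example of how a role might call the module once the parameter
exists (the invocation is inferred from this description and is
illustrative)::

    - name: Get container facts
      kolla_container_facts:
        container_engine: "{{ kolla_container_engine }}"
        name:
          - neutron_l3_agent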
Signed-off-by: Ivan Halomi <i.halomi@partner.samsung.com>
Co-authored-by: Martin Hiner <m.hiner@partner.samsung.com>
Change-Id: I9e8fa30646844ab4a288555f3aafdda345b3a118
First part of the patchset:
https://review.opendev.org/c/openstack/kolla-ansible/+/799229/
in which it was suggested to split the patch into smaller ones.
This implements the kolla_container_engine variable in Docker command
calls, so that it can later also be used for Podman without further
changes.
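For instance, a hard-coded ``docker`` invocation becomes (sketch)::

    - name: List running containers
      command: "{{ kolla_container_engine }} ps -q"
      changed_when: false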
Signed-off-by: Ivan Halomi <i.halomi@partner.samsung.com>
Change-Id: Ic30b67daa2e215524096ad1f4385c569e3d41b95
Kolla Ansible stopped setting them as they turned out to be
unnecessary for its operations, yet may have conflicted with
security policies of the hosts. [1] [2]
[1] https://launchpad.net/bugs/1837551
[2] https://launchpad.net/bugs/1945453
Change-Id: Ie8ccd3ab6f22a6f548b1da8d3acd334068dc48f5
This patch adds the loadbalancer-config role, which is a "wrapper"
around the haproxy-config role and the proxysql-config role that will
be added in follow-up patches.
Change-Id: I64d41507317081e1860a94b9481a85c8d400797d
Rendering {{ openstack_service_workers }} for the workers of each
OpenStack service is not enough. Several services need more workers
because more requests are sent to them.
This patch adds a per-service default value for workers, with
{{ openstack_service_workers }} as the default, so the value can be
overridden in host vars per server.
Nothing changes for the normal user.
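A hedged host vars sketch; the per-service variable name is an
assumption following the usual ``<service>_workers`` pattern::

    # host_vars/controller01.yml (illustrative)
    openstack_service_workers: 5
    # hypothetical per-service override for a busier API
    nova_api_workers: 10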
Change-Id: Ifa5863f8ec865bbf8e39c9b2add42c92abe40616
The neutron-bgp-dragent container is also needed when using OVN as the
backend plugin.
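A hedged ``globals.yml`` sketch; ``enable_neutron_bgp_dragent`` is
assumed from the usual ``enable_*`` convention::

    neutron_plugin_agent: "ovn"
    enable_neutron_bgp_dragent: "yes"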
Signed-off-by: Dr. Jens Harbott <harbott@osism.tech>
Change-Id: Idec79a53fad048f45139af3b8c72e85385ac80b6
Fixes an issue where access rules failed to validate:
  Cannot validate request with restricted access rules. Set
  service_type in [keystone_authtoken] to allow access rule validation
I have used the values from the endpoint. This was mostly a
straightforward copy and paste, except:
- versioned endpoints, e.g. cinderv3, where I stripped the version
- monasca has multiple endpoints associated with a single service; for
  this, I concatenated logging and monitoring to be logging-monitoring.
Closes-Bug: #1965111
Change-Id: Ic4b3ab60abad8c3dd96cd4923a67f2a8f9d195d7
Following up on [1].
The three variables only introduce noise now that we have removed
the reliance on Keystone's admin port.
[1] I5099b08953789b280c915a6b7a22bdd4e3404076
Change-Id: I3f9dab93042799eda9174257e604fd1844684c1c
This key can be used in the networking-generic-switch scenario
instead of adding a cleartext password to ml2_conf.ini.
Change-Id: I10003e6526a55a97f22678ab81c411e4645c5157
Designate sink is an optional service that consumes notifications.
Users should have an option to disable it when they do not use it.
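A hedged ``globals.yml`` sketch; the toggle name below is an assumption
based on this description, not confirmed::

    # Hypothetical toggle to skip deploying designate-sink
    designate_enable_notifications_sink: "no"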
Change-Id: I1d5465d9845aea94cff39ff5158cd8b1dccc4834
In most real-world deployments there will be multiple backend DNS
servers; allow specifying all of them in the pool configuration.
Change-Id: Ic9737d0446a807891b429f080ae1bf048a3c8e4a
While I8bb398e299aa68147004723a18d3a1ec459011e5 stopped setting
the net.ipv4.ip_forward sysctl, this change explicitly removes the
option from the Kolla sysctl config file. In the absence of another
source for this sysctl, it should revert to the default of 0 after the
next reboot.
A deployer looking to more aggressively change the value may set
neutron_l3_agent_host_ipv4_ip_forward to 0. Any deployments still
relying on the previous value may set
neutron_l3_agent_host_ipv4_ip_forward to 1.
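For example, to keep the previous behaviour::

    # globals.yml
    neutron_l3_agent_host_ipv4_ip_forward: 1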
Related-Bug: #1945453
Change-Id: I9b39307ad8d6c51e215fe3d3bc56aab998d218ec
NSXP is the OpenStack support for the NSX Policy platform, supported by
Neutron since the Stein release. This patch adds Kolla support.
It adds a new neutron_plugin_agent type, 'vmware_nsxp'. The plugin
does not run any neutron agents.
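For example::

    # globals.yml
    neutron_plugin_agent: "vmware_nsxp"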
Change-Id: I9e9d8f07e586bdc143d293e572031368af7f3fca
The default configuration was changed to use the advanced cache pool in
keystonemiddleware 9.3.0 (Xena release) [1].
This reverts commit 5a52d8e4a0c5d4c246deb8851ef893df63ee0847 (except the
release note).
[1] https://review.opendev.org/c/openstack/keystonemiddleware/+/773939
Change-Id: I290d0a81c57c189b6eb62fc3eee3ed19f441671b
This change enables the use of Docker healthchecks for ironic-neutron-agent services.
Change-Id: I80f8319b2cf2e4ae09904a08532cde5ec0385fa3
Implements: blueprint container-health-check
Role vars have a higher precedence than role defaults. This allows
importing default vars from another role via vars_files without
overriding project_name (see the related bug for details).
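A generic sketch of the pattern this enables (role and file names are
illustrative)::

    # site.yml
    - hosts: control
      vars_files:
        - roles/common/defaults/main.yml
      roles:
        - myrole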
Change-Id: I3d919736e53d6f3e1a70d1267cf42c8d2c0ad221
Related-Bug: #1951785