Commit [1] introduced a bug into kolla-ansible causing incorrect
indentation in the haproxy configuration file. This patch fixes it.
[1] b13fa5a92c
Closes-Bug: #2080034
Change-Id: I3375e303bc358fc79d1fa2e219e6ec1dba7a38ba
Build upon changes in kolla which change the strategy of installing
projects in containers when in dev mode. This fixes problems where
changes to a package's file manifest were not reflected in the
dev-mode-enabled container.
It changes the strategy of installing projects in dev mode in containers.
Instead of bind mounting the project's git repository to the venv
of the container, the repository is bind mounted to
/dev-mode/<project_name>, from which it is installed using pip
on every startup of the container by the kolla_install_projects script.
Also updates docs to reflect the changes.
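As an illustration, dev mode is still enabled from globals.yml much as
before (the variable names below follow the existing dev-mode docs; the
paths are examples, not an exhaustive reference):

    # globals.yml (sketch)
    kolla_dev_mode: true                      # enable dev mode globally
    horizon_dev_mode: true                    # or enable it per project
    kolla_dev_repos_directory: "/opt/stack/"  # where project git repos live on the host
    # With this change the repo is bind mounted to /dev-mode/horizon inside the
    # container and pip-installed by kolla_install_projects on every startup.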
Depends-On: https://review.opendev.org/c/openstack/kolla/+/925712
Closes-Bug: #1814515
Signed-off-by: Roman Krček <roman.krcek@tietoevry.com>
Change-Id: If191cd0e3fcf362ee058549a1b6c244d109b6d9a
Harden the TLS default config according to the Mozilla
"modern" recommendation:
https://ssl-config.mozilla.org/#server=haproxy&version=2.1&config=modern&openssl=1.1.1k&guideline=5.7
If you want to revert to the old settings, set
kolla_haproxy_ssl_settings: "legacy" in globals.yml.
Alternatively, you can set it to "intermediate"
for a middle ground between security and accessibility.
This also adjusts the glance and neutron TLS proxy SSL settings
in their dedicated haproxy config templates to use the same mechanism.
It also adds some haproxy-related docs to the TLS guide and
cross-references them from the haproxy guide.
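For example, to opt out of the new defaults, globals.yml could contain
("modern" being the hardened default introduced by this change):

    # globals.yml
    kolla_haproxy_ssl_settings: "legacy"          # revert to the previous behaviour
    #kolla_haproxy_ssl_settings: "intermediate"   # middle ground
    #kolla_haproxy_ssl_settings: "modern"         # hardened default per this change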
Closes-Bug: #2060787
Signed-off-by: Sven Kieske <kieske@osism.tech>
Change-Id: I311c374b34f22c78cc5bcf91e5ce3924c62568b6
Refactor that prepares the kolla_container_facts module for the
introduction of more actions that will be moved from the
kolla_container and kolla_container_volume_facts modules.
This change is based on a discussion about adding a new action
to kolla_container module that retrieves all names of the running
containers. It was agreed that kolla-ansible should follow Ansible's
direction of splitting modules between action modules and facts
modules. Because of this, kolla_container_facts needs to be able
to handle different requests for data about containers or volumes.
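A playbook task using the refactored module might look roughly like
this (the action name and returned structure are assumptions for
illustration, not the final interface):

    - name: Gather facts about running containers
      become: true
      kolla_container_facts:
        container_engine: "{{ kolla_container_engine }}"
        action: get_containers          # hypothetical action name
      register: container_facts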
Change-Id: Ieaec8f64922e4e5a2199db2d6983518b124cb4aa
Signed-off-by: Ivan Halomi <ivan.halomi@tietoevry.com>
The Kolla project supports building images with
user-defined prefixes. However, Kolla-ansible is unable
to use those images for installation.
This patch fixes that issue.
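For example, if the images were built with a prefix, globals.yml
carries the matching setting (the variable name shown is an assumption
for illustration):

    # globals.yml
    docker_image_name_prefix: "myprefix-"   # must match the prefix the images were built with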
Closes-Bug: #2073541
Change-Id: Ia8140b289aa76fcd584e0e72686e3786215c5a99
Most roles are not leveraging the jinja filters available.
According to [1] filtering the list of services makes the execution
faster than skipping the tasks.
This patchset also includes some cosmetic changes to genconfig.
Individual services are now also using a jinja filter. This has
no impact on performance, just makes the tasks look cleaner.
Some variable names in genconfig were changed to "service" to make
the tasks more uniform, as some tasks were previously using
the service name and others were using "service".
Three metrics were taken from the deployment:
- overall deployment time [s]
- time spent on the specific role [s]
- CPU usage (measured with perf) [-]
Overall genconfig time went down on avg. from 209s to 195s
Time spent on the loadbalancer role went down on avg. from 27s to 23s
Time spent on the neutron role went down on avg from 102s to 95s
Time spent on the nova-cell role went down on avg. from 54s to 52s
Also the average CPUs utilized reported by perf went down
from 3.31 to 3.15.
For details of how this was measured see the comments in gerrit.
[1] - https://github.com/stackhpc/ansible-scaling/blob/master/doc/skip.md
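In practice the change looks roughly like this: instead of looping over
every service and skipping disabled ones with a when clause, the
service dict is filtered up front (a generic sketch using stock Jinja
filters; the real roles use dedicated filter plugins that also check
host mapping):

    - name: Copying over config.json files
      become: true
      template:
        src: "{{ item.key }}.json.j2"
        dest: "{{ node_config_directory }}/{{ item.key }}/config.json"
        mode: "0660"
      loop: "{{ neutron_services | dict2items | selectattr('value.enabled') | list }}"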
Change-Id: Ib0f00aadb6c7022de6e8b455ac4b9b8cd6be5b1b
Signed-off-by: Roman Krček <roman.krcek@tietoevry.com>
Previously Kolla Ansible hard-coded Neutron physical networks starting
at physnet1 up to physnetN, matching the number of interfaces in
neutron_external_interface and bridges in neutron_bridge_name.
Sometimes we may want to customise the physical network names used. This
may be to allow for not all hosts having access to all physical
networks, or to use more descriptive names.
For example, in an environment with a separate physical network for
Ironic provisioning, controllers might have access to two physical
networks, while compute nodes have access to one.
This change adds a neutron_physical_networks variable, making it
possible to customise the Neutron physical network names used for the
OVS, OVN, Linux bridge and OVS DPDK plugins. The default behaviour is
unchanged.
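For example (the comma-separated form below mirrors the existing
neutron_external_interface style and is illustrative):

    # globals.yml
    neutron_external_interface: "eth1,eth2"
    neutron_bridge_name: "br-provider,br-ironic"
    neutron_physical_networks: "physnet-provider,physnet-ironic"   # default would be physnet1,physnet2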
Change-Id: Ib5b8ea727014964919c6b3bd2352bac4a4ac1787
After the Neutron policy changes, Octavia jobs started
to fail on cascade LB deletion due to the Neutron user
not having the service role.
Closes-Bug: #2065337
Change-Id: I616bf3a3dbb4d963665b1621a9e5e9d417b13942
This new role will handle setting sysctl values.
It also handles the case where an IPv6 setting is changed but IPv6 is
not enabled on the system, by skipping those settings.
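A minimal sketch of the skip logic (the IPv6 detection fact name is
hypothetical):

    - name: Apply sysctl settings
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
      loop: "{{ sysctl_settings }}"
      # skip IPv6 keys when IPv6 is not enabled on the host
      # (host_ipv6_enabled is a hypothetical fact for this sketch)
      when: "'ipv6' not in item.name or host_ipv6_enabled | bool"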
This is an augmentation of previous patch:
Icccfc1c509179c3cfd59650b7917a637f9af9646
Related-bug: #1906306
Change-Id: I5d6cda3307b3d2f27c1b2995f28772523b203fe7
Signed-off-by: Roman Krček <roman.krcek@tietoevry.com>
This way the playbooks won't try to set IPv6 sysctl options
unless IPv6 is available on the system.
Closes-bug: #1906306
Change-Id: Icccfc1c509179c3cfd59650b7917a637f9af9646
Changes the name of the ansible module kolla_docker to
kolla_container.
Change-Id: I13c676ed0378aa721a21a1300f6054658ad12bc7
Signed-off-by: Martin Hiner <m.hiner@partner.samsung.com>
docker_restart_policy: no causes systemd units to not get created,
and we use it in CI to disable restarts on services.
This introduces a oneshot policy which does not create systemd units
for oneshot containers (those that run bootstrap tasks, like db
bootstrap, and don't need a systemd unit), but still creates systemd
units for long-lived containers, with Restart=No.
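A bootstrap task would then look something like this (parameters
abridged; the relevant part is the oneshot policy):

    - name: Running MariaDB bootstrap container
      become: true
      kolla_container:
        action: start_container
        name: bootstrap_mariadb
        image: "{{ mariadb_image_full }}"
        detach: false
        restart_policy: oneshot    # run once; no systemd unit is generated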
Change-Id: I9e0d656f19143ec2fcad7d6d345b2c9387551604
When Neutron QoS is enabled, the QoS extension needs to be defined
in the sriov_agent.ini file.
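In a Kolla Ansible deployment this can be carried via a config
override, for example (path follows the usual /etc/kolla/config merge
convention):

    # /etc/kolla/config/neutron/sriov_agent.ini
    [agent]
    extensions = qos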
Closes-Bug: #2041863
Change-Id: Id0de181df06a9e382a1483b32c12a8b5da1b71a9
Signed-off-by: German Espinoza <gespinoza@whitestack.com>
Adds the changes and configuration needed to use the neutron
tap-as-a-service plugin to create port mirrors using
`openstack tap` commands.
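Enabling it is expected to be a globals.yml switch; the variable name
and the example commands below are illustrative only:

    # globals.yml
    enable_neutron_taas: "yes"
    # After deployment, mirrors can then be created with e.g.
    #   openstack tap service create --port <port-id> --name mirror1
    #   openstack tap flow create --port <source-port-id> --tap-service mirror1 --direction BOTH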
Implements: configure-taas-plugin
Depends-On: https://review.opendev.org/c/openstack/kolla/+/885151
Change-Id: Ia09e1f8b423d43c0466fe2d6605ce383fd813544
Signed-off-by: Juan Pablo Suazo <jsuazo@whitestack.com>
The Neutron L3 agent stores state at state_path (/var/lib/neutron by
default) and it is expected that these files persist across restarts.
This change updates the Neutron state_path value to
/var/lib/neutron/kolla (which is where the neutron_metadata_socket
volume is mounted) so that these state files are stored there.
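The resulting neutron.conf fragment is simply:

    [DEFAULT]
    state_path = /var/lib/neutron/kolla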
Change-Id: I739aaf9e2d0b2b2e7f7b8f60ef8c2111d6873cef
Signed-off-by: Adam Oswick <adam@adamoswick.co.uk>
Closes-Bug: #2009884
This checks the setting of neutron_plugin_agent and, if it is set
to "ovn" or "openvswitch", runs container and volume checks to make
sure the other agent was not already deployed.
Change-Id: Ie00572f3ff9d3500abd5519bd472e2134c318886
It is useful when the external network's MTU is lower than that of
the internal Geneve networks.
The host kernel needs to be version >= 5.2 for this option to work.
All Kolla-supported host operating systems have a higher kernel version.
Change-Id: Id64e99b07e2bb5e6c97b784f4ffedafc7e7de188
Use case: exposing a single external https frontend and
load balancing services using FQDNs.
Supports different ports for internal and external endpoints.
Introduces a kolla_url filter to normalize urls like:
- https://magnum.external:443/v1
- http://magnum.external:80/v1
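Usage is along these lines (the exact filter signature is an
assumption; the point is that default ports are normalized away):

    # illustrative only -- the real signature may differ
    magnum_public_endpoint: "{{ magnum_external_fqdn | kolla_url(public_protocol, magnum_api_port, '/v1') }}"
    # https://magnum.external:443/v1  ->  https://magnum.external/v1
    # http://magnum.external:80/v1    ->  http://magnum.external/v1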
Change-Id: I9fb03fe1cebce5c7198d523e015280c69f139cd0
Co-Authored-By: Jakub Darmach <jakub@stackhpc.com>
neutron_tls_proxy and glance_tls_proxy use the haproxy container
image. Pin them to haproxy_tag directly.
Change-Id: I73142db48ebe6641520d21b560f16de892e07c34
This change serialises the neutron l3 agent restart process and adds a
user-configurable delay between restarts. This can prevent connectivity
loss due to all agents being restarted at the same time.
Having many routers increases the recovery time, making this issue
more prevalent.
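Configuration would look roughly like this (the variable name is
hypothetical, for illustration only):

    # globals.yml (variable name is hypothetical)
    neutron_l3_agent_restart_delay: 30   # seconds between restarting L3 agents on consecutive hosts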
Change-Id: I3be0ebfa12965e6ae32d1b5f13f8fd23c3f52b8c
A combination of durable queues and classic queue mirroring can be used
to provide high availability of RabbitMQ. However, these options should
only be used together, otherwise the system will become unstable. Using
the flag ``om_enable_rabbitmq_high_availability`` will either enable
both options at once, or neither of them.
There are some queues that should not be mirrored:
* ``reply`` queues (these have a single consumer and TTL policy)
* ``fanout`` queues (these have a TTL policy)
* ``amq`` queues (these are auto-delete queues, with a single consumer)
An exclusionary pattern is used in the classic mirroring policy. This
pattern is ``^(?!(amq\\.)|(.*_fanout_)|(reply_)).*``
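In practice this is a single switch in globals.yml; the pattern above
is what the resulting mirroring policy applies:

    # globals.yml
    om_enable_rabbitmq_high_availability: true
    # turns on both durable queues and classic queue mirroring; the mirroring
    # policy excludes reply_, *_fanout_ and amq.* queues via the pattern
    # ^(?!(amq\.)|(.*_fanout_)|(reply_)).*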
Change-Id: I51c8023b260eb40b2eaa91bd276b46890c215c25
When running in check mode, some prechecks previously failed because
they use the command module which is silently not run in check mode.
Other prechecks were not running correctly in check mode due to e.g.
looking for a string in empty command output or not querying which
containers are running.
This change fixes these issues.
Closes-Bug: #2002657
Change-Id: I5219cb42c48d5444943a2d48106dc338aa08fa7c
assert will also fail when we're not meeting the conditions, it makes
clear what we're actually testing, and it isn't listed as a skipped
task when the condition is ok.
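For example, a precheck written with assert reads (generic sketch, not
taken from the actual roles):

    - name: Checking the API interface is present
      assert:
        that:
          - api_interface in ansible_facts.interfaces
        fail_msg: "Interface {{ api_interface }} not found on this host"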
Change-Id: I3e396f1c605d5d2644e757bbb3d954efe537b65e
The ``[oslo_messaging_rabbit] heartbeat_in_pthread`` config option
is set to ``true`` for wsgi applications to allow the RabbitMQ
heartbeats to function. For non-wsgi applications it is set to ``false``
as it may otherwise break the service [1].
[1] https://docs.openstack.org/releasenotes/oslo.messaging/zed.html#upgrade-notes
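Concretely, a wsgi-based service ends up with the following in its
config, while non-wsgi services get false:

    [oslo_messaging_rabbit]
    heartbeat_in_pthread = true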
Change-Id: Id89bd6158aff42d59040674308a8672c358ccb3c