Build upon changes in kolla that alter the strategy for installing
projects in containers in dev mode. This fixes a problem where changes
to a package's file manifest were not reflected in a
dev-mode-enabled container.
Instead of bind mounting the project's git repository into the venv
of the container, the repository is now bind mounted to
/dev-mode/<project_name>, from which it is installed using pip
on every startup of the container by the kolla_install_projects script.
Also updates docs to reflect the changes.
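For reference, a minimal globals.yml sketch of how dev mode is driven
(kolla_dev_mode, per-project *_dev_mode and kolla_dev_repos_directory
are the existing dev-mode variables; the path is illustrative):

  # Enable dev mode for all supported projects, or per project
  kolla_dev_mode: true
  # horizon_dev_mode: true
  # Host directory holding the cloned project repositories
  kolla_dev_repos_directory: "/opt/stack/"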
Depends-On: https://review.opendev.org/c/openstack/kolla/+/925712
Closes-Bug: #1814515
Signed-off-by: Roman Krček <roman.krcek@tietoevry.com>
Change-Id: If191cd0e3fcf362ee058549a1b6c244d109b6d9a
Refactor that prepares the kolla_container_facts module for the
introduction of more actions, which will be moved over from the
kolla_container and kolla_container_volume_facts modules.
This change is based on a discussion about adding a new action
to kolla_container module that retrieves all names of the running
containers. It was agreed that kolla-ansible should follow Ansible's
direction of splitting modules between action modules and facts
modules. Because of this, kolla_container_facts needs to be able
to handle different requests for data about containers or volumes.
Change-Id: Ieaec8f64922e4e5a2199db2d6983518b124cb4aa
Signed-off-by: Ivan Halomi <ivan.halomi@tietoevry.com>
The Kolla project supports building images with
user-defined prefixes. However, Kolla-ansible is unable
to use those images for installation.
This patch fixes that issue.
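A hedged sketch of the intended usage, assuming the prefix is set via
a variable along the lines of docker_image_name_prefix (check
globals.yml for the exact name):

  # globals.yml -- deploy images built with a user-defined prefix
  docker_image_name_prefix: "myorg-"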
Closes-Bug: #2073541
Change-Id: Ia8140b289aa76fcd584e0e72686e3786215c5a99
Most roles are not leveraging the jinja filters available.
According to [1], filtering the list of services makes execution
faster than skipping the tasks.
This patchset also includes some cosmetic changes to genconfig.
Individual services are now also using a jinja filter. This has
no impact on performance, just makes the tasks look cleaner.
Some variable names in genconfig were changed to "service" to make
the tasks more uniform, as some tasks previously used the service
name and others used "service".
Three metrics from the deployment were taken and those were
- overall deployment time [s]
- time spent on the specific role [s]
- CPU usage (measured with perf) [-]
Overall genconfig time went down on avg. from 209s to 195s
Time spent on the loadbalancer role went down on avg. from 27s to 23s
Time spent on the neutron role went down on avg. from 102s to 95s
Time spent on the nova-cell role went down on avg. from 54s to 52s
Also the average CPUs utilized reported by perf went down
from 3.31 to 3.15.
For details of how this was measured see the comments in gerrit.
[1] - https://github.com/stackhpc/ansible-scaling/blob/master/doc/skip.md
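As an illustration, the pattern looks roughly like this (a sketch
using the in-tree select_services_enabled_and_mapped_to_host filter;
the task details are illustrative):

  # Before: iterate all services, skip per task
  - name: Copying over config.json files
    template:
      src: "{{ item.key }}.json.j2"
      dest: "{{ node_config_directory }}/{{ item.key }}/config.json"
    with_dict: "{{ neutron_services }}"
    when: item.value.enabled | bool

  # After: filter the dict once, nothing to skip
  - name: Copying over config.json files
    template:
      src: "{{ item.key }}.json.j2"
      dest: "{{ node_config_directory }}/{{ item.key }}/config.json"
    with_dict: "{{ neutron_services | select_services_enabled_and_mapped_to_host }}"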
Change-Id: Ib0f00aadb6c7022de6e8b455ac4b9b8cd6be5b1b
Signed-off-by: Roman Krček <roman.krcek@tietoevry.com>
This change automates the prometheus blackbox monitoring configuration
for common endpoints. Custom endpoints can be added to
prometheus_blackbox_exporter_endpoints_custom.
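A hypothetical globals.yml sketch (the exact entry format is defined
by the prometheus role defaults; the shape shown here is illustrative
only):

  prometheus_blackbox_exporter_endpoints_custom:
    - name: "my_service"
      url: "https://my-service.example.com:8443"
      module: "http_2xx"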
Change-Id: Id6f51a2bebee3ab63b84ca7032aad17c2933838c
Changes the name of the ansible module kolla_docker to
kolla_container.
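For role authors this is a straight rename at each call site; a
minimal before/after sketch:

  # Before
  - kolla_docker:
      action: "recreate_or_restart_container"
      name: "nova_api"

  # After
  - kolla_container:
      action: "recreate_or_restart_container"
      name: "nova_api"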
Change-Id: I13c676ed0378aa721a21a1300f6054658ad12bc7
Signed-off-by: Martin Hiner <m.hiner@partner.samsung.com>
Setting ``docker_restart_policy: no`` caused systemd units not to be
created at all; we use that policy in CI to disable restarts on
services.
This introduces a ``oneshot`` policy that skips creating systemd units
for one-shot containers (those running bootstrap tasks, like db
bootstrap, which do not need a systemd unit), while still creating
systemd units for long-lived containers, but with Restart=no.
Change-Id: I9e0d656f19143ec2fcad7d6d345b2c9387551604
Use case: exposing a single external HTTPS frontend and
load balancing services using FQDNs.
Support different ports for internal and external endpoints.
Introduces the kolla_url filter to normalize URLs such as:
- https://magnum.external:443/v1
- http://magnum.external:80/v1
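A sketch of the intended use (argument order assumed from the
examples above; check the filter plugin for the exact signature):

  {# e.g. renders https://magnum.external:443/v1 #}
  {{ magnum_external_fqdn | kolla_url(public_protocol, magnum_api_port, '/v1') }}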
Change-Id: I9fb03fe1cebce5c7198d523e015280c69f139cd0
Co-Authored-By: Jakub Darmach <jakub@stackhpc.com>
A combination of durable queues and classic queue mirroring can be used
to provide high availability of RabbitMQ. However, these options should
only be used together, otherwise the system will become unstable. Using
the flag ``om_enable_rabbitmq_high_availability`` will either enable
both options at once, or neither of them.
There are some queues that should not be mirrored:
* ``reply`` queues (these have a single consumer and TTL policy)
* ``fanout`` queues (these have a TTL policy)
* ``amq`` queues (these are auto-delete queues, with a single consumer)
An exclusionary pattern is used in the classic mirroring policy. This
pattern is ``^(?!(amq\\.)|(.*_fanout_)|(reply_)).*``
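Operators only toggle the single flag; a globals.yml sketch:

  # Enables both durable queues and classic queue mirroring; the
  # mirroring policy uses the exclusionary pattern above, so reply_,
  # *_fanout_ and amq.* queues stay unmirrored.
  om_enable_rabbitmq_high_availability: true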
Change-Id: I51c8023b260eb40b2eaa91bd276b46890c215c25
When running in check mode, some prechecks previously failed because
they use the command module which is silently not run in check mode.
Other prechecks were not running correctly in check mode due to e.g.
looking for a string in empty command output or not querying which
containers are running.
This change fixes these issues.
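The typical fix pattern forces such tasks to run under --check (a
sketch; the precheck shown is illustrative):

  - name: Checking the api_interface is present
    command: ip addr show dev {{ api_interface }}
    register: ip_addr_output
    check_mode: false     # run the command even in check mode
    changed_when: false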
Closes-Bug: #2002657
Change-Id: I5219cb42c48d5444943a2d48106dc338aa08fa7c
The ``[oslo_messaging_rabbit] heartbeat_in_pthread`` config option
is set to ``true`` for wsgi applications to allow the RabbitMQ
heartbeats to function. For non-wsgi applications it is set to ``false``
as it may otherwise break the service [1].
[1] https://docs.openstack.org/releasenotes/oslo.messaging/zed.html#upgrade-notes
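The rendered option, e.g. for a wsgi application:

  [oslo_messaging_rabbit]
  heartbeat_in_pthread = true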
Change-Id: Id89bd6158aff42d59040674308a8672c358ccb3c
Regularly, we experience issues in Kolla Ansible deployments because we
use wrong options in OpenStack configuration files. This is because
OpenStack services ignore unknown options. We also need to keep on top
of deprecated options that may be removed in the future. Integrating
oslo-config-validator into Kolla Ansible will greatly help.
Adds a shared role to run oslo-config-validator on each service. Takes
into account that services have multiple containers, and these may also
use multiple config files. Service roles are extended to use this shared
role. Executed with the new command ``kolla-ansible validate-config``.
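Service roles pull in the shared role roughly like this (a sketch;
the invocation variable name is illustrative):

  - name: Validate nova config
    import_role:
      name: service-config-validate
    vars:
      service_config_validate_services: "{{ nova_services }}"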
Change-Id: Ic10b410fc115646d96d2ce39d9618e7c46cb3fbc
Second part of the patchset:
https://review.opendev.org/c/openstack/kolla-ansible/+/799229/
in which it was suggested to split the patch into smaller ones.
This change adds container_engine to the module parameters, so that
when we introduce Podman, kolla_toolbox can be used with both
engines.
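A sketch of a call site with the new parameter:

  - name: Run a module inside the kolla_toolbox container
    kolla_toolbox:
      container_engine: "{{ kolla_container_engine }}"
      module_name: ping
    register: result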
Signed-off-by: Ivan Halomi <i.halomi@partner.samsung.com>
Co-authored-by: Martin Hiner <m.hiner@partner.samsung.com>
Change-Id: Ic2093aa9341a0cb36df8f340cf290d62437504ad
Second part of the patchset:
https://review.opendev.org/c/openstack/kolla-ansible/+/799229/
in which it was suggested to split the patch into smaller ones.
This change adds the container_engine variable to the
kolla_container_facts module, preparing it for use with both Docker
and Podman without further changes in roles.
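Analogous sketch for this module:

  - name: Gather facts about the nova_api container
    kolla_container_facts:
      container_engine: "{{ kolla_container_engine }}"
      name: nova_api
    register: container_facts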
Signed-off-by: Ivan Halomi <i.halomi@partner.samsung.com>
Co-authored-by: Martin Hiner <m.hiner@partner.samsung.com>
Change-Id: I9e8fa30646844ab4a288555f3aafdda345b3a118
This patch adds the loadbalancer-config role, which is a "wrapper"
around the haproxy-config role and the proxysql-config role that
will be added in follow-up patches.
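Service roles would consume the wrapper roughly as they consume
haproxy-config today (a sketch; the vars shown are illustrative):

  - name: Configure load balancer for magnum
    include_role:
      name: loadbalancer-config
    vars:
      project_services: "{{ magnum_services }}"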
Change-Id: I64d41507317081e1860a94b9481a85c8d400797d
Rendering {{ openstack_service_workers }} for the workers of each
OpenStack service is not enough. Several services have to have more
workers because they receive more requests.
This patch adds a default value for the workers of each service, with
{{ openstack_service_workers }} as the default, so the value can be
overridden in hostvars per server.
Nothing changes for the normal user.
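A hypothetical hostvars override (the per-service variable names are
illustrative):

  # host_vars/controller1.yml -- busier APIs get more workers here
  nova_api_workers: 8
  neutron_api_workers: 8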
Change-Id: Ifa5863f8ec865bbf8e39c9b2add42c92abe40616
Fixes an issue where access rules failed to validate:
Cannot validate request with restricted access rules. Set
service_type in [keystone_authtoken] to allow access rule validation
I've used the values from the endpoint. This was mostly a
straightforward copy and paste, except:
- versioned endpoints e.g. cinderv3, where I stripped the version
- monasca has multiple endpoints associated with a single service. For
this, I concatenated logging and monitoring to be logging-monitoring.
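The resulting option, e.g. for glance:

  [keystone_authtoken]
  service_type = image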
Closes-Bug: #1965111
Change-Id: Ic4b3ab60abad8c3dd96cd4923a67f2a8f9d195d7
Following up on [1].
The three variables were only introducing noise after we removed
the reliance on Keystone's admin port.
[1] I5099b08953789b280c915a6b7a22bdd4e3404076
Change-Id: I3f9dab93042799eda9174257e604fd1844684c1c
Role vars have a higher precedence than role defaults. This allows
importing default vars from another role via vars_files without overriding
project_name (see related bug for details).
Change-Id: I3d919736e53d6f3e1a70d1267cf42c8d2c0ad221
Related-Bug: #1951785
The admin interface for endpoints never had any real use, the
functionality was the same as for the public or internal endpoints,
except for Keystone. Even for Keystone with API v3 it would no longer
really be needed, but it is still being required by some libraries that
cannot be changed in order to stay backwards compatible.
Signed-off-by: Dr. Jens Harbott <harbott@osism.tech>
Change-Id: Icf3bf08deab2c445361f0a0124d87ad8b0e4e9d9
We get a nice optimisation by using a filtered loop instead
of task skipping per service with 'when'.
Partially-Implements: blueprint performance-improvements
Change-Id: I8f68100870ab90cb2d6b68a66a4c97df9ea4ff52
By default, Ansible injects a variable for every fact, prefixed with
ansible_. This can result in a large number of variables for each host,
which at scale can incur a performance penalty. Ansible provides a
configuration option [0] that can be set to False to prevent this
injection of facts. In this case, facts should be referenced via
ansible_facts.<fact>.
This change updates all references to Ansible facts within Kolla Ansible
from using individual fact variables to using the items in the
ansible_facts dictionary. This allows users to disable fact variable
injection in their Ansible configuration, which may provide some
performance improvement.
This change disables fact variable injection in the ansible
configuration used in CI, to catch any attempts to use the injected
variables.
[0] https://docs.ansible.com/ansible/latest/reference_appendices/config.html#inject-facts-as-vars
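To opt in, users set the option in ansible.cfg and reference facts
via the ansible_facts dictionary:

  # ansible.cfg
  [defaults]
  inject_facts_as_vars = False

  # playbooks: use {{ ansible_facts.os_family }}
  # instead of {{ ansible_os_family }}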
Change-Id: I7e9d5c9b8b9164d4aee3abb4e37c8f28d98ff5d1
Partially-Implements: blueprint performance-improvements
This change enables the use of Docker healthchecks for watcher
services.
Implements: blueprint container-health-check
Change-Id: I0774063dd970507e566637138167ed1af9a2874c
This reverts commit 9cae59be51e8d2d798830042a5fd448a4aa5e7dc.
Reason for revert: This patch was found to introduce issues with
fluentd customisation. The underlying issue is not currently fully
understood, but could be a sign of other obscure issues.
Change-Id: Ia4859c23d85699621a3b734d6cedb70225576dfc
Closes-Bug: #1906288
Main plays are now action-redirect stubs, ideal for import_tasks.
This avoids the 'include' penalty and makes logs/ARA look nicer.
Fixes haproxy and rabbitmq so that they do not also check the host
group.
Change-Id: I46136fc40b815e341befff80b54a91ef431eabc0
Partially-Implements: blueprint performance-improvements
Config plays do not need to check containers. This avoids skipping
tasks during the genconfig action.
Ironic and Glance rolling upgrades are handled specially.
Swift and Bifrost do not use the handlers at all.
Partially-Implements: blueprint performance-improvements
Change-Id: I140bf71d62e8f0932c96270d1f08940a5ba4542a
When the internal VIP is moved in the event of a failure of the active
controller, OpenStack services can become unresponsive as they try to
talk with MariaDB using connections from the SQLAlchemy pool.
It has been argued that OpenStack doesn't really need to use connection
pooling with MariaDB [1]. This commit reduces the use of connection
pooling via two configuration options:
- max_pool_size is set to 1 to allow only a single connection in the
pool (it is not possible to disable connection pooling entirely via
oslo.db, and max_pool_size = 0 means unlimited pool size)
- lower connection_recycle_time from the default of one hour to 10
seconds, which means the single connection in the pool will be
recreated regularly
These settings have shown better reactivity of the system in the event
of a failover.
[1] http://lists.openstack.org/pipermail/openstack-dev/2015-April/061808.html
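The rendered oslo.db options:

  [database]
  max_pool_size = 1
  connection_recycle_time = 10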
Change-Id: Ib6a62d4428db9b95569314084090472870417f3d
Closes-Bug: #1896635
This change adds support for encryption of communication between
OpenStack services and RabbitMQ. Server certificates are supported, but
currently client certificates are not.
The kolla-ansible certificates command has been updated to support
generating certificates for RabbitMQ for development and testing.
RabbitMQ TLS is enabled in the all-in-one source CI jobs, or when
the Zuul 'tls_enabled' variable is true.
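A minimal globals.yml sketch, assuming the rabbitmq_enable_tls
toggle introduced by this change:

  rabbitmq_enable_tls: "yes"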
Change-Id: I4f1d04150fb2b5af085b762890092f87ae6076b5
Implements: blueprint message-queue-ssl-support
Including tasks has a performance penalty when compared with importing
tasks. If the include has a condition associated with it, then the
overhead of the include may be lower than the overhead of skipping all
imported tasks. In the case of the register.yml and bootstrap.yml
includes, all of the tasks in the included file use run_once: True.
The run_once flag improves performance at scale drastically, so
importing these tasks unconditionally will have a lower overhead than a
conditional include task. It therefore makes sense to switch to use
import_tasks there.
See [1] for benchmarks of run_once.
[1] https://github.com/stackhpc/ansible-scaling/blob/master/doc/run-once.md
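The resulting pattern (a sketch; the condition shown is
illustrative):

  # Before: conditional include -- include overhead on every run
  - include_tasks: register.yml
    when: kolla_action != 'config'

  # After: unconditional import -- the imported tasks use
  # run_once: true, so the per-host skipping cost is negligible
  - import_tasks: register.yml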
Change-Id: Ic67631ca3ea3fb2081a6f8978e85b1522522d40d
Partially-Implements: blueprint performance-improvements
Including tasks has a performance penalty when compared with importing
tasks. If the include has a condition associated with it, then the
overhead of the include may be lower than the overhead of skipping all
imported tasks. For unconditionally included tasks, switching to
import_tasks provides a clear benefit.
Benchmarking of include vs. import is available at [1].
This change switches from include_tasks to import_tasks where there is
no condition applied to the include.
[1] https://github.com/stackhpc/ansible-scaling/blob/master/doc/include-and-import.md#task-include-and-import
Partially-Implements: blueprint performance-improvements
Change-Id: Ia45af4a198e422773d9f009c7f7b2e32ce9e3b97
Previously we mounted /etc/timezone if the kolla_base_distro is debian
or ubuntu. This would fail prechecks if debian or ubuntu images were
deployed on CentOS. While this is not a supported combination, for
correctness we should fix the condition to reference the host OS rather
than the container OS, since that is where the /etc/timezone file is
located.
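A sketch of the corrected condition, keying off the host's OS facts
(the variable name is illustrative):

  timezone_volumes: "{{ ['/etc/timezone:/etc/timezone:ro'] if ansible_facts.os_family == 'Debian' else [] }}"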
Change-Id: Ifc252ae793e6974356fcdca810b373f362d24ba5
Closes-Bug: #1882553
Including tasks has a performance penalty when compared with importing
tasks. If the include has a condition associated with it, then the
overhead of the include may be lower than the overhead of skipping all
imported tasks. In the case of the check-containers.yml include, the
included file only has a single task, so the overhead of skipping this
task will not be greater than the overhead of the task import. It
therefore makes sense to switch to use import_tasks there.
Partially-Implements: blueprint performance-improvements
Change-Id: I65d911670649960708b9f6a4c110d1a7df1ad8f7