If Ansible runs in dry-run mode (i.e. with the "--check" argument) it will
not create the /var/log/swift directory, and thus a subsequent task that
creates a link to it will fail.
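A minimal sketch of the pattern involved, with illustrative task names (not
the literal template content); guarding the link task is one way to keep a
dry run from failing:

  - name: Create /var/log/swift
    file:
      path: /var/log/swift
      state: directory
    # with --check this task is only simulated, so the directory may
    # not exist when the next task runs
  - name: Link /var/log/containers/swift to /var/log/swift
    file:
      src: /var/log/swift
      dest: /var/log/containers/swift
      state: link
    # skipping the link in check mode avoids failing on the missing dir
    when: not ansible_check_mode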
Closes-Bug: 1800457
Change-Id: Ia84f979b5f8d07dfc8b6cd162e1491bbea95c5d7
This has been unused for a while, and its deprecation was even scheduled
(although the patch never merged [1]). So, to stop folks getting confused
by it, it is being removed.
[1] https://review.openstack.org/#/c/543871/
Change-Id: Iada64874432146ef311682f26af5990469790ed2
This will allow proper access from the containers without any
new SELinux policy
Depends-On: Ie9f5d3b6380caa6824ca940ca48ed0fcf6308608
Change-Id: I284126db5dcf9dc31ee5ee640b2684643ef3a066
This has been unused for a while, and its deprecation was even scheduled
(although the patch never merged [1]). So, to stop folks getting confused
by it, it is being removed.
[1] https://review.openstack.org/#/c/543871/
Change-Id: Icc6b51044ccc826f5b629eb1abd3342813ed84c0
It turns out cloud-init creates an /etc/rsyslog.d folder even if rsyslog
isn't installed. So let's switch the check to whether the service has been
installed instead.
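Roughly the kind of check this means, assuming illustrative task names and
that the swift rule itself is applied in a separate, hypothetical task file:

  - name: Check whether rsyslog is installed
    command: systemctl is-enabled rsyslog.service
    register: rsyslog_installed
    failed_when: false
    changed_when: false
  - name: Configure the swift rsyslog rules
    include_tasks: swift_rsyslog.yaml   # hypothetical task file
    when: rsyslog_installed.rc == 0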
Change-Id: Id9ea7d1e0b37a523541eb0fa5a5f2495c5df9500
Closes-Bug: #1788051
Add block to step_0 for all services
Add block to step_6 for neutron-api.yaml
Add block to step_1 for nova-compute.yaml
Change-Id: Ib4c59302ad5ad64f23419cd69ee9b2a80333924e
If rsyslog is not installed, we shouldn't try to configure the swift
rules for it.
Change-Id: Ib110e3c87000241257f35fd4094b9040ff1d697b
Closes-Bug: #1788051
Problem: RHEL and CentOS 8 will deprecate the usage of Yum.
From DNF release note:
DNF is the next upcoming major version of yum, a package
manager for RPM-based Linux distributions.
It roughly maintains CLI compatibility with YUM and defines a strict API for
extensions.
Solution: Use "package" Ansible module instead of "yum".
"package" module is smarter when it comes to detect with package manager
runs on the system. The goal of this patch is to support both yum/dnf
(dnf will be the default in rhel/centos 8) from a single ansible module.
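As a sketch, with an example package name (the real tasks vary per service),
the converted tasks end up looking like:

  - name: Update the swift packages
    package:
      name: openstack-swift-proxy
      state: latest
    # the package module delegates to yum on RHEL/CentOS 7 and to dnf
    # on RHEL/CentOS 8, so the same task works on both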
Change-Id: I8e67d6f053e8790fdd0eb52a42035dca3051999e
Recent overcloud images seem to be more lightweight, and the
openstack-swift-*.rpms are no longer installed. However, these rpms
created an entry in /etc/rsyslog.d/openstack-swift.conf, which ensures
that Swift logs are actually written to /var/log/swift/swift.log.
This file is now missing, and thus the Swift logs clutter /var/log/messages.
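A hedged sketch of how the templates can restore that rule themselves; the
facility shown is Swift's default (local0), and the exact rule the rpm used
to ship may differ:

  - name: Recreate the swift rsyslog rule
    copy:
      dest: /etc/rsyslog.d/openstack-swift.conf
      content: |
        # route swift's syslog messages and keep them out of /var/log/messages
        local0.* /var/log/swift/swift.log
        & stop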
Closes-Bug: 1776180
Change-Id: Icf4310558e3cceafbdd67f72432e59730b96a588
To avoid redefining the variable multiple times in each service, we run the
check only once and set a fact. To increase readability of the generated
playbook we add a block per step in the services.
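Roughly the resulting pattern, with illustrative service and variable names:

  - name: Check if swift proxy is enabled
    command: systemctl is-enabled openstack-swift-proxy
    register: swift_proxy_enabled_result
    failed_when: false
    changed_when: false
  - name: Set swift_proxy_enabled fact
    set_fact:
      swift_proxy_enabled: "{{ swift_proxy_enabled_result.rc == 0 }}"
  - name: Step 2 tasks for swift proxy
    when: step|int == 2
    block:
      - name: Stop openstack-swift-proxy
        service:
          name: openstack-swift-proxy
          state: stopped
        when: swift_proxy_enabled|bool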
Change-Id: I2399a72709d240f84e3463c5c3b56942462d1e5c
The new master branch should now point to rocky.
So, HOT templates should specify that they might contain features
for the rocky release [1].
Also, this submission updates the yaml validation to use only the latest
heat_version alias. There are cases in which we will need to set
the version for specific templates, i.e. mixed versions, so a variable is
added to assign specific templates to specific heat_version aliases,
avoiding the introduction of errors by bulk-replacing the old version in
new releases.
[1]: https://docs.openstack.org/heat/latest/template_guide/hot_spec.html#rocky
Change-Id: Ib17526d9cc453516d99d4659ee5fa51a5aa7fb4b
Instead of using host_prep_tasks (which are part of deployment tasks),
we'll use the upgrade tasks that are now well known and tested in
previous releases, when we containerized the overcloud.
Depends-On: Id25e6280b4b4f060d5e3f78a50ff83aaca9e6b1a
Change-Id: Ic199c7d431e155e2d37996acd0d7b924d14af2b7
Resolves an issue during scale out where the Swift set_swift_secret
container that sets the required Barbican key ID wasn't executing on a
Heat stack update if the container name and configs all stayed the same.
Closes-bug: 1767395
Change-Id: I683bb9e96eef73a014d5967a5930ef519ac34430
Using host_prep_tasks interface to handle undercloud teardown before we
run the undercloud install.
The reason for not using upgrade_tasks is that the existing tasks were
created for the overcloud upgrade first, and there is too much logic in them
right now to easily re-use the bits for the undercloud. In the
future, we'll probably use upgrade_tasks for both the undercloud and
overcloud but right now this is not possible and a simple way to move
forward was to implement these tasks that work fine for the undercloud
containerization case.
Workflow will be:
- Services will be stopped and disabled (except mariadb)
- Neutron DB will be renamed, then mariadb stopped & disabled
- Remove cron jobs
- All packages will be upgraded with yum update.
Change-Id: I36be7f398dcd91e332687c6222b3ccbb9cd74ad2
We need to set a fact instead of registering values, and we shouldn't
update packages if we don't run any DB migrations.
Change-Id: I2e508b06064f66ae640ae0c8694dbe290ef42846
fast_forward_upgrade_tasks for swift covering Ocata and Pike, sketched below:
- Service status check
- Stop service when updating from Ocata to Pike
- Update swift packages
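A hedged sketch of those tasks; step numbers, release values and package
names are illustrative rather than the exact template content:

  fast_forward_upgrade_tasks:
    - name: Check swift proxy service status
      command: systemctl is-enabled --quiet openstack-swift-proxy
      register: swift_proxy_enabled_result
      failed_when: false
      when:
        - step|int == 0
        - release == 'ocata'
    - name: Stop swift proxy when updating from Ocata to Pike
      service:
        name: openstack-swift-proxy
        state: stopped
      when:
        - step|int == 1
        - release == 'ocata'
        - swift_proxy_enabled_result.rc|default(1) == 0
    - name: Update swift packages
      package:
        name: openstack-swift-proxy
        state: latest
      when: step|int == 6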
bp fast-forward-upgrades
Change-Id: I66879669cead2f7b5b2cd1398f344c063d771628
If we use variables defined in a later step in a conditional before
checking which step we are on, the playbook will fail.
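A sketch of the fix, with illustrative names; putting the step check first
lets the ANDed conditions short-circuit before the not-yet-defined variable
is evaluated:

  - name: Stop swift proxy
    service:
      name: openstack-swift-proxy
      state: stopped
    when:
      # check the step first, then the variable registered in that step
      - step|int == 2
      - swift_proxy_enabled_result.rc == 0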
Resolves: rhbz#1535457
Closes-Bug: #1743764
Change-Id: Ic21f6eb5c4101f230fa894cd0829a11e2f0ef39b
Enable data-at-rest encryption and integration
with barbican in the swift proxy.
Related-Change-Id: I78c6003f5f599a422193dc47422ee607ce05c715
Related-Change-Id: I1ceda973733acb081967ab04a5fd57eb1609c9a7
Change-Id: I26cf063fe410689530ee507cc2f79e93b5e71732
Signed-off-by: Thiago da Silva <thiago@redhat.com>
This converts "tags: stepN" to "when: step|int == N" for the direct
execution as an ansible playbook, with a loop variable 'step'.
The tasks all include the explicit cast |int.
This also adds a set_fact task for handling the package removal
with the UpgradeRemovePackages parameter (no change to the interface).
The yaml-validate script also now checks for duplicate 'when:' statements.
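As a sketch of the conversion (the service task shown is illustrative):

  # before: tasks were selected by running the playbook with --tags stepN
  - name: Stop swift proxy
    tags: step2
    service:
      name: openstack-swift-proxy
      state: stopped

  # after: tasks are selected by the 'step' loop variable at run time
  - name: Stop swift proxy
    when: step|int == 2
    service:
      name: openstack-swift-proxy
      state: stopped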
Q upgrade spec @ Ibde21e6efae3a7d311bee526d63c5692c4e27b28
Related Blueprint: major-upgrade-workflow
[0]: 394a92f761/tripleo_common/utils/config.py (L141)
Change-Id: I6adc5619a28099f4e241351b63377f1e96933810
As discussed in the bug below, the expirer service is provided
by the swift-proxy package so it needs to be stopped at the
same time as swift-proxy and not with the other swift services
Change-Id: I01518f82cef494682b4359ba7849ba7e37ac39cc
Related-Bug: 1701501
This ensures that the base class is applied, setting the required hash
values in swift.conf properly when deploying a proxy node without the
storage service at the same time.
Closes-Bug: 1732663
Depends-On: I11c044bbc8b9f56f95ace9320cc77303d9a7543e
Change-Id: Id916413c9d74071968d9988b604664fad30282b2
Step config is only required within the puppet_configs section
of docker/services/*. This patch drops the top level 'step_config'
and updates the unit tests accordingly.
Change-Id: I7dc7cfae3ef1965ec95b1d9ef23e7f162418c034
The logging directories are not used by swift, which logs to syslog
instead, so the unnecessary bind-mounts were removed.
Furthermore, as part of the swift package which is included in the
overcloud image, there is an rsyslog rule which picks up the swift logs
and persists them in /var/log/swift. Due to this, and to be consistent
with the rest of the containers, a symlink was created to have the
swift logs available from /var/log/containers/swift as well. This makes
the readme indicating the new location of the logs unnecessary, so
it was removed as well.
Closes-Bug: #1732107
Change-Id: I61610d3cb187235ac316e1c5de0a344be3ebb1e2
This should help operators find the new log files. We do have them
documented, but not everybody reads every word in the docs :)
The readme creation has ignore_errors: true so that if the directory
isn't present at all (e.g. on deployed server environments, which
don't have openstack packages installed), we don't fail the deployment
when we're not able to create the readme.
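Something like the following, as a hedged sketch only (path and wording are
illustrative):

  - name: Write a readme pointing at the new log location
    copy:
      dest: /var/log/swift/readme.txt
      content: |
        Log files from the swift containers are collected under
        /var/log/containers/swift.
    ignore_errors: true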
Change-Id: I6b36db7b7ce8b3e4da566eb7828d0c3b8646a14f
Partial-Bug: #1730957
Docker services are missing the pre-upgrade validation task
in the upgrade_tasks section which verifies that the service
is running before proceeding with the upgrade.
Change-Id: I16f38d9e1042c5d83455a28882b4a024aac27699
Partial-Bug: #1704389
Adds a UpgradeRemoveUnusedPackages param to use
in the ansible when conditional for the removal.
Adds package removal to step2, right after a service
is stopped and disabled on step2. Package updates
happen in step3, so ideally remove before that.
The package removal task has ignore_errors true
so dependencies or other issues removing packages will
not fail the upgrade workflow.
Also adds this to the upgrade environment files
for visibility, defaulting to false.
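A hedged sketch of the resulting upgrade_tasks entries; service and package
names are illustrative, and the way the Heat parameter reaches the when
condition is shown schematically:

  upgrade_tasks:
    - name: Stop and disable swift proxy service
      when: step|int == 2
      service:
        name: openstack-swift-proxy
        state: stopped
        enabled: false
    - name: Remove openstack-swift-proxy package if the operator requests it
      when:
        - step|int == 2
        - {get_param: UpgradeRemoveUnusedPackages}
      yum:
        name: openstack-swift-proxy
        state: removed
      ignore_errors: true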
Change-Id: Ie4e4a2d41f7752c5a13507a7c15c6f68e203cfca
Related-Bug: 1701501
We were setting them in the Dockerfiles previously. However this
caused the healthcheck commands to always run regardless of which
process we were running in the container. This caused 'unhealthy'
containers at times when they were never intended to be checked. This
change makes it so they are explicitly set.
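Schematically, in the docker_config section of a service template (image
parameter, step and healthcheck command are illustrative):

  docker_config:
    step_4:
      swift_proxy:
        image: {get_param: DockerSwiftProxyImage}
        healthcheck:
          test: /openstack/healthcheck
        restart: always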
Change-Id: I7bc12d236b3cc7a52d3e6aa706fd04675dad3a9a
The services that docker depends on have logging_sources and logging_groups,
but those are not set in the docker outputs, so they are not used when the
docker services are deployed.
Added logging_source & logging_groups as docker optional parameters in
tools/yaml-validate.py
Closes-Bug: #1718110
Change-Id: I8795eaf4bd06051e9b94aa50450dee0d8761e526
This removes the default container names from all the templates
and uses a single environment file to specify the full container
name and registry from which to pull. Also does away with most
of DockerNamespace.
Change-Id: Ieaedac33f0a25a352ab432cdb00b5c888be4ba27
Depends-On: Ibc108871ebc2beb1baae437105b2da1d0123ba60
Co-Authored-By: Dan Prince <dprince@redhat.com>
Co-Authored-By: Steve Baker <sbaker@redhat.com>
Makes it possible to resolve network subnets within a service
template; the data is transported into a new property ServiceData
wired into every service which hopefully is generic enough to
be extended in the future and transport more data.
Data can be consumed in service templates to set config values
which need to know the subnet where a daemon operates (for
example the Ceph Public vs Cluster network).
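A hedged sketch of consuming it in a service template; the map key and the
hiera setting shown are purely illustrative:

  parameters:
    ServiceData:
      default: {}
      description: Dictionary packing service data
      type: json
  outputs:
    role_data:
      value:
        config_settings:
          # resolve the subnet/CIDR the daemon should use
          example_service::cluster_network:
            get_param: [ServiceData, net_cidr_map, storage_mgmt]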
Change-Id: I28e21c46f1ef609517175f7e7ee19e28d1c0cba2
This solves a problem with bind-mounts when the containers are holding
file descriptors open.
At the same time this makes the template more robust to puppet changes
since new config files will be available in the containers without
needing to update the templates.
Partial-Bug: #1698323
Change-Id: Ia4ad6d77387e3dc354cd131c2f9756939fb8f736
This commit consistently defines a heat template parameter in the form
of DockerXXXConfigImage where XXX represents the name of the
config_volume that is used by docker-puppet.
The goal is to mitigate hard to debug errors where the templates would
set different defaults for the image docker-puppet.py uses to run, for
the same config_volume name.
This fixes a couple of inconsistencies along the way.
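As a sketch of the convention for a config_volume named "swift" (the wiring
into puppet_config is shown schematically in the comment):

  parameters:
    DockerSwiftConfigImage:
      description: The container image to use for the swift config_volume
      type: string

  # later, in the service's role_data output:
  #   puppet_config:
  #     config_volume: swift
  #     config_image: {get_param: DockerSwiftConfigImage}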
Change-Id: I212020a76622a03521385a6cae4ce73e51ce5b6b
Closes-Bug: #1699791
This change modifies these mounts to be more specific mounts based on
the files which puppet actually modifies.
The result is something a bit more self-documenting, and allows for
trying other techniques for populating /etc other than directly mounting
config-data directories.
Change-Id: Ied1eab99d43afcd34c00af25b7e36e7e55ff88e6
This is needed since it's what writes the service metadata to the nova
server in order to create the kerberos principals. It worked in a base
controller since the keystone template does have this. But if we would
deploy these services on a separate role, it would break. So this output
is needed.
bp tls-via-certmonger-containers
Change-Id: I3ee8c65d356dcd092a3fbf79041e5c69ef23b721
This was forgotten in I72376a803ec6b2ed93903cc0c95a6ffce718b6dc and
broke containerized deployment.
Change-Id: I599a87bf06efbfefd3067c77ed6ca866505900f9
Closes-Bug: #1690870
When a service is enabled on multiple roles, the parameters for the
service will be global. This change enables an option to provide
role specific parameter to services and other templates.
Two new parameters - RoleName and RoleParameters - are added to the
service template. RoleName provides the name of the role on which the
current instance of the service is being applied. RoleParameters
provides the list of parameters which are configured specifically for
that role in the environment file, like below:
  parameter_defaults:
    # Default value applied to all roles
    NovaReservedHostMemory: 2048
    ComputeDpdkParameters:
      # Applied only to the ComputeDpdk role
      NovaReservedHostMemory: 4096
In the above sample, the cluster contains two roles - Compute and
ComputeDpdk. The values of ComputeDpdkParameters will be passed on to the
templates as RoleParameters while creating the stack for the ComputeDpdk
role. A parameter which supports role-specific configuration should be
looked up first in the RoleParameters list; if not found there, the
default (for all roles) should be used.
Implements: blueprint tripleo-derive-parameters
Change-Id: I72376a803ec6b2ed93903cc0c95a6ffce718b6dc
This spawns an extra container that runs httpd to run the TLS proxy that
will go in front of swift.
bp tls-via-certmonger-containers
Depends-On: Ib01137cd0d98e6f5a3e49579c080ab18d8905b0d
Change-Id: I9639af8b46b8e865cc1fa7249bf1d8b1b978adfe