These were always mounted, which is an anti-pattern. In order to get the
podman deployment to work, these mounts need to be conditional.
Change-Id: I5f649eea4e6c50905a333f231b49e91b8b5bef0d
* We don't use this setup when TLS everywhere is disabled, so let's make
the mount conditional. This prevents the HAProxy container managed by
pacemaker from mounting this file.
* Also fix the docker service so the if condition uses proper syntax.
Co-Authored-By: Emilien Macchi <emilien@redhat.com>
Change-Id: Id8dff81c5af390446507bcef458a135fc2287186
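Both bullets come down to expressing the mount with a heat if condition.
A minimal sketch of the pattern, assuming a condition named
internal_tls_enabled and an illustrative certificate path (not the
actual template):

    volumes:
      list_concat:
        - - /var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro
        - if:
          - internal_tls_enabled
          - - /etc/pki/tls/certs/haproxy.pem:/etc/pki/tls/certs/haproxy.pem:ro
          - []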
When upgrading from Rocky to master in CI, we don't seem to have
xinetd present in the overcloud, and attempting to restart it fails
the upgrade. Check if it's running before trying to restart it.
Change-Id: I9f45340cf6caf7811aa03a1b2aa16eec599d4faa
Closes-Bug: #1792527
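A sketch of the guarded restart as an ansible upgrade task (the task
names and register variable are illustrative):

    - name: Check if xinetd is running
      command: systemctl is-active xinetd
      failed_when: false
      register: xinetd_running
    - name: Restart xinetd only when it was running
      service: name=xinetd state=restarted
      when: xinetd_running.rc == 0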
Fix an issue that arises during upgrade:
error while evaluating conditional (redis_pcs_res|bool):
'redis_pcs_res' is undefined
Change-Id: I4298eb1ec2fc0e0c44aa63189cff3962fb06c6bd
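The usual fix is to guard the conditional so evaluation short-circuits
before touching the undefined variable; a sketch (the guarded task
itself is illustrative):

    - name: Task that previously failed when redis_pcs_res was undefined
      command: pcs resource disable redis-bundle
      when:
        - redis_pcs_res is defined
        - redis_pcs_res|bool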
This patch enables container health check script execution for the
ironic_inspector and ironic_inspector_dnsmasq containers.
Change-Id: I62a50021605e1017e387f76595bd0f5680979900
Depends-On: Ie724b155fa071da9f1baee193cf79e2ecdc2ff30
This patch enables health check execution in the sahara_api container.
Change-Id: Ic14d9c5b9a4ad014181e8505fae3d2f656b7b0bd
Depends-On: Ie724b155fa071da9f1baee193cf79e2ecdc2ff30
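In both cases the change amounts to adding a healthcheck entry under the
container's docker_config definition, roughly this fragment (the step
number varies by service):

    healthcheck:
      test: /openstack/healthcheck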
Create a new parameter in TripleO: ContainerCli.
The default is set to 'docker' for backward compatibility, but it can
also be set to 'podman'.
When podman is selected, the right commands will be run so that
docker-puppet can configure the containers with Podman as the container
backend.
It removes the tripleo_logs:/var/log/tripleo/ mount that was used by
tripleo-ui; we shouldn't do that here. We'll create a bind mount in the
tripleo-ui container later.
It runs puppet with FACTER_hostname only if NET_HOST is disabled.
Change-Id: I240b15663b720d6bd994d5114d43d51fa26d76cc
Co-Authored-by: Martin André <m.andre@redhat.com>
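Operators opt in to podman through an environment file, e.g.:

    parameter_defaults:
      ContainerCli: podman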
We currently hit this bug: https://github.com/containers/libpod/issues/1412
In order to move forward, let's bind-mount /dev/null into the container
until the bug is fixed. Note that this doesn't hurt docker deployments,
as we already mount /dev.
Related-Bug: #1791167
Change-Id: I0e885c248bb08c04fb9b7efa9e075e692879b450
We always run DB sync in deploy_tasks, ensuring that the database is
up to date. We should follow up with online data migrations
too.
Doing this via docker_config has 2 purposes:
* We can easily ensure this happens in a container with the right
config files mounted.
* We can even apply this via a minor update. This is important because
we'll have to backport this all the way to Pike and apply it there
using Pike containers, before upgrading to Queens containers.
There's an additional issue to consider: in the Puppet service variant
we ran the online migrations for release X before upgrading to X+1, but
the proposed Docker variant runs the migrations for X with the upgrade
to X. This means that when switching from non-containerized to
containerized, we'll need to run migrations twice, to correctly switch
between the aforementioned approaches.
Change-Id: I2eb6c7c42d7e7aea4a78a892790e42bc5371f792
Closes-Bug: #1790474
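A sketch of what the docker_config entry could look like, with the
container name, image parameter, step and mounts all illustrative:

    docker_config:
      step_5:
        nova_online_data_migrations:
          image: {get_param: DockerNovaApiImage}
          command: "nova-manage db online_data_migrations"
          volumes:
            - /var/lib/config-data/nova/etc/nova:/etc/nova:ro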
Rerunning the overcloud deploy command with no changes restarts a
truckload of containers (first seen via
https://bugzilla.redhat.com/show_bug.cgi?id=1612960). So we really have
three separate issues here. Below is the list of all the containers that
may restart needlessly (at least what I have observed in my tests):
A) cron category:
ceilometer_agent_notification cinder_api cinder_api_cron cinder_scheduler
heat_api heat_api_cfn heat_api_cron heat_engine keystone keystone_cron
logrotate_crond nova_api nova_api_cron nova_conductor nova_consoleauth
nova_metadata nova_scheduler nova_vnc_proxy openstack-cinder-volume-docker-0
panko_api
These end up being restarted because the config volume for the container
contains a cron file, and cron files are generated with a timestamp inside:
$ cat /var/lib/config-data/puppet-generated/keystone/var/spool/cron/keystone
...
# HEADER: This file was autogenerated at 2018-08-07 11:44:57 +0000 by puppet.
...
The timestamp is unfortunately hard coded into puppet in both the cron provider and the parsedfile
provider:
https://github.com/puppetlabs/puppet/blob/master/lib/puppet/provider/cron/crontab.rb#L127
https://github.com/puppetlabs/puppet/blob/master/lib/puppet/provider/parsedfile.rb#L104
We fix this by piping the tar output into 'tar xO' and grepping away any
line that starts with '# HEADER'.
B) swift category:
swift_account_auditor swift_account_reaper swift_account_replicator
swift_account_server swift_container_auditor swift_container_replicator
swift_container_server swift_container_updater swift_object_auditor
swift_object_expirer swift_object_replicator swift_object_server
swift_object_updater swift_proxy swift_rsync
The swift containers restart because, when recalculating the md5 over the
/var/lib/config-data/puppet-generated/swift folder, we also include:
B.1) /etc/swift/backups/..., a folder that collects backups of the ring files over time
B.2) /etc/swift/*.gz, since the *.gz files change over time
We just add exclude parameters to the tar command so that changes to
these files do not trigger a restart:
--exclude='*/etc/swift/backups/*' --exclude='*/etc/swift/*.gz'
C) libvirt category:
nova_compute nova_libvirt nova_migration_target nova_virtlogd
This one seems to be due to the fact that the /etc/libvirt/passwd.db
file contains a timestamp; even when we disable a user and passwd.db
does not exist, it gets created:
[root@compute-1 nova_libvirt]# git diff cb2441bb1caf7572ccfd870561dcc29d7819ba04..0c7441f30926b111603ce4d4b60c6000fe49d290 .
passwd.db changes do not need to trigger a restart of the container, so
we can safely exclude this file from any md5 calculation.
Part C) was: Co-Authored-By: Martin Schupper <mschuppe@redhat.com>
We only mark this as Partial-Bug because we want a cleaner fix where
exceptions to the files being checksummed can be specified in the tht
service files.
Partial-Bug: #1786065
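Putting parts A), B) and C) together, the checksum ends up looking
roughly like the pipeline below, expressed here as an illustrative
ansible task (the real logic lives in the config checksum code, and the
directory name is just an example):

    - name: Compute a config checksum that ignores volatile content
      shell: |
        tar -C /var/lib/config-data/puppet-generated/swift \
            --exclude='*/etc/swift/backups/*' \
            --exclude='*/etc/swift/*.gz' \
            --exclude='*/etc/libvirt/passwd.db' \
            -cf - . \
          | tar xO -f - | grep -v '^# HEADER' | md5sum
      register: config_checksum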
Tested as follows:
./overcloud_deploy.sh
tripleo-ansible-inventory --static-yaml-inventory inv.yaml
ansible -f1 -i inv.yaml -m shell --become -a "docker ps --format=\"{{ '{{' }}.Names{{ '}}' }}: {{ '{{' }}.CreatedAt{{ '}}' }}\" | sort" overcloud > before
./overcloud_deploy.sh
ansible -f1 -i inv.yaml -m shell --become -a "docker ps --format=\"{{ '{{' }}.Names{{ '}}' }}: {{ '{{' }}.CreatedAt{{ '}}' }}\" | sort" overcloud > after
diff -u before after | wc -l
0
Change-Id: I10f5cacd9fee94d804ebcdffd0125676f5a209c4
The neutron dhcp-agent log path is not set properly.
The service is logging to /var/log/containers/neutron/dhcp-agent.log,
while the configured path is /var/log/neutron/dhcp-agent.log.
Change-Id: Ia22eff1093c25395bc98cacd2f2106a2ac374eb9
When deploying with podman, we need to create directories if they don't
exist before trying to mount them later when containers are starting.
Otherwise, podman fails with this kind of error:
error checking path \"/etc/iscsi\": stat /etc/iscsi: no such file or directory"
Change-Id: I7dbdc7f3646dda99c8014b4c8ca2edd48778b392
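The fix is to create the directories from host_prep_tasks before any
container starts; a sketch for the /etc/iscsi case:

    host_prep_tasks:
      - name: create /etc/iscsi so podman can bind-mount it
        file:
          path: /etc/iscsi
          state: directory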
This patch passes the RpcPort parameter value to the container health
check scripts, which verify whether the service is connected to RabbitMQ.
Change-Id: If63f136b5173bb9a94572ea5062a188469c2c782
Closes-Bug: #1782369
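One possible shape for this in the template (the real template may
compose the command differently):

    healthcheck:
      test:
        str_replace:
          template: '/openstack/healthcheck PORT'
          params:
            PORT: {get_param: RpcPort}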
This has been unused for a while, and deprecation was even scheduled
(although the patch never merged [1]). So, to stop folks getting
confused by it, it's being removed.
[1] https://review.openstack.org/#/c/543871/
Change-Id: Icc6b51044ccc826f5b629eb1abd3342813ed84c0
ml2_conf.ini shouldn't be used by neutron-ovs-agent.
Some parameters can conflict and overwrite each other,
e.g. firewall_driver. Using openvswitch_agent.ini is
enough to configure the agent correctly.
Change-Id: I815cb67fd4ea9ad98347d6d6bbcc9bcf01113649
Closes-Bug: 1789549
We don't need to mount /usr/share/neutron: the directory is provided by
the openstack-neutron rpm, so we don't need to manage it. It is already
present in all neutron containers, including neutron_db_sync.
Change-Id: I6f71ce62b1c5f3de175d7a50ee7229d3047a379a
Apparently not doing this breaks the bootstrap container in some cases
(for example when TLS everywhere is enabled). This case is not really
supported by Sahara right now, but it's better to fix it in advance.
More details about this change are available in the similar patches
that landed for other components:
- Cinder: https://review.openstack.org/539498
- Manila: https://review.openstack.org/594801
Change-Id: Iab8ad50f4397ee9809f50d1474026d5ff8a6972c
When deploying with tls-everywhere, additional connection options are
necessary for the overcloud manila database bootstrap container to
connect to mysql. These options are present in the configuration file
/var/lib/config-data/manila/etc/my.cnf.d/tripleo.cnf.
Fix the bind mounts on the manila_api_db_sync container so it doesn't
fail to find this configuration.
Closes-Bug: #1788337
Change-Id: I44133b0b0c4367214649777680c94dcfa7bddc76
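The fix adds the generated mysql client configuration to the db-sync
container's bind mounts; a sketch:

    manila_api_db_sync:
      volumes:
        - /var/lib/config-data/manila/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro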
It turns out cloud-init creates a /etc/rsyslog.d folder even if rsyslog
isn't installed. So let's switch the check to look at whether the
service is installed instead.
Change-Id: Id9ea7d1e0b37a523541eb0fa5a5f2495c5df9500
Closes-Bug: #1788051
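A sketch of the service-based check as ansible tasks (the exact task
shape is illustrative):

    - name: Check whether rsyslog is actually installed
      command: rpm -q rsyslog
      failed_when: false
      register: rsyslog_check
    - name: Only configure log forwarding when rsyslog is present
      debug:
        msg: rsyslog is installed, safe to configure forwarding
      when: rsyslog_check.rc == 0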
Add block to step_0 for all services
Add block to step_6 for neutron-api.yaml
Add block to step_1 for nova-compute.yaml
Change-Id: Ib4c59302ad5ad64f23419cd69ee9b2a80333924e
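The resulting layout groups each step's tasks under a single conditional
block, roughly (service and task names illustrative):

    upgrade_tasks:
      - when: step|int == 0
        block:
          - name: Stop the service before the upgrade
            service: name=example-service state=stopped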