Until now it has been loaded from within the container, which doesn't work
with SELinux separation.
Change-Id: Ia2cd08b9b7950ebca4d75938ae4329641c2d6f7c
Depends-on: Ic9076a0a1a8e1360495dcf0eb766118ec63dc362
Related-Bug: 1794550
Java options like the heap size configuration need
tweaking for large scale deployments. Allow
customizing those values from TripleO.
puppet-opendaylight will configure these values
in ODL. The corresponding puppet-opendaylight patch is
https://git.opendaylight.org/gerrit/#/c/68491
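For example, an operator could then tune the JVM from an environment
file roughly like the following (the OpenDaylightJavaOpts parameter name
and values are purely illustrative; the actual interface is defined by
the TripleO and puppet-opendaylight patches):

  parameter_defaults:
    # Illustrative only: raise the ODL JVM heap for a large scale deployment
    OpenDaylightJavaOpts: '-Xms8g -Xmx8g'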
Change-Id: I99e08314dedfcc71a776423ac3c6c282237cc0c2
Closes-Bug: #1794073
Since we moved to a containerized undercloud, TLS Everywhere deployments
are broken. Namely, we are missing two things:
A. The NAT iptables rule for the nova metadata service to be reachable
B. The setting 'service_metadata_proxy=false' needs to be set for nova
metadata, otherwise the curl calls to set up IPA will fail with the
following:
[root@overcloud-controller-0 log]# curl http://169.254.169.254/openstack/2016-10-06
<html>
<head>
<title>400 Bad Request</title>
</head>
<body>
<h1>400 Bad Request</h1>
X-Instance-ID header is missing from request.<br /><br />
</body>
</html>
A. Is fixed by adding a conditional iptables rule that is only triggered
when deploying an undercloud (where we set MetadataNATRule to true).
B. Is fixed by setting NeutronMetadataProxySharedSecret to '' on the
undercloud and then setting the corresponding hiera keys only when the
parameter != '' (see the environment snippet below). We tried simpler
alternative approaches, like setting NeutronMetadataProxySharedSecret to
null, but that breaks heat: the parameter is required, and setting it to
null fails heat validation (we also tried making the parameter optional
with a default of '', but that broke as well).
While we're at it, we also remove the neutron metadata service from the
undercloud, as it is not needed.
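On the undercloud this roughly translates to parameter_defaults like the
following (an illustrative sketch of the two settings described above,
not the exact generated environment):

  parameter_defaults:
    # Triggers the conditional NAT iptables rule for the metadata service
    MetadataNATRule: true
    # Empty string: the metadata proxy hiera keys are not set at all
    NeutronMetadataProxySharedSecret: ''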
Tested by deploying an undercloud with this change and observing:
A.
Chain PREROUTING (policy ACCEPT 106 packets, 6698 bytes)
pkts bytes target prot opt in out source destination
0 0 REDIRECT tcp -- br-ctlplane * 0.0.0.0/0 169.254.169.254 multiport dports 80 state NEW /* 999 undercloud nat ipv4 */ redir ports 8775
B.
grep -ir ^service_metadata_proxy /var/lib/config-data/puppet-generated/nova/etc/nova/nova.conf
service_metadata_proxy=False
Also a deployment of a TLS overcloud was successful.
Change-Id: Id48df6db012fb433f9a0e618d0269196f4cfc2c6
Co-Authored-By: Martin Schuppert <mschuppe@redhat.com>
Closes-Bug: #1795722
Docker doesn't complain when a directory in a bind mount doesn't exist,
while Podman does. Ensure the directory is present in the
mistral-executor container host prep tasks.
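A minimal host prep task for this looks roughly as follows (the exact
path is illustrative):

  host_prep_tasks:
    - name: Ensure the mistral-executor bind mount source directory exists
      file:
        # Illustrative path; use the directory the container bind mounts
        path: /var/lib/mistral
        state: directory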
Change-Id: I32993c6dfbd561c16ef1fdce508bf899aff1d940
Fixes-Bug: #1796188
We were using a deprecated interface to set this value. This uses the
correct one.
Closes-Bug: #1793665
Change-Id: Ib7717911aba3267f855ac6682b0144bfe92034fb
So far the tasks for external update/upgrade have not been using the step
mechanism like other tasks do; we had a single step. As external
deploy/update/upgrade tasks are being used for more things nowadays,
it's likely that we'll need to move towards a model similar to the one
we have for deploy/update/upgrade tasks -- proper usage of steps.
For now we have just two:
* Step 0 for setting global facts and performing validations.
* Step 1 for actual update/upgrade tasks. (There's an upcoming change
to run online data migrations in step 1.)
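A sketch of the intended shape of such tasks (task names and the fact
below are illustrative):

  external_update_tasks:
    - when: step|int == 0
      block:
        - name: Set a global fact and run validations
          set_fact:
            example_global_fact: true
    - when: step|int == 1
      block:
        - name: Run the actual update tasks
          debug:
            msg: update work happens here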
Change-Id: I1933bd0eedab71caab56c0e5d93ba7927fb7c20f
Partial-Bug: #1793332
Merge the openshift/global_defaults.yml file into openshift/global_vars.yml,
since they serve the exact same purpose.
Also remove duplicated variables that were set in the inventory file for
the glusterfs nodes.
Change-Id: Ic0fb84fb7c711d4706b75885e69cbd052cd56f42
GlusterFS is the recommended storage solution for OpenShift. Mark the
deployed GlusterFS cluster as the default storage class when deploying
with CNS.
It is possible to change this back so that it is not the default storage
class by setting 'openshift_storage_glusterfs_storageclass_default:
false' in the OpenShiftGlusterNodeVars heat parameter.
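For example (an illustrative sketch of the override described above):

  parameter_defaults:
    OpenShiftGlusterNodeVars:
      # Keep the CNS-deployed GlusterFS cluster from being the default class
      openshift_storage_glusterfs_storageclass_default: false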
Change-Id: I6810709263e3cda56fa3aa70797bcc5ca1b28671
Removes the conflict on the OpenShiftGlobalVariables param, which was
overwritten by the openshift-cns.yaml environment file. The default
options for CNS are now moved into the
extraconfig/services/openshift-cns.yaml template and can be overwritten
by setting the OpenShiftGlusterNodeVars heat parameter.
Change-Id: I43052662e913a02945f22e9f541a45ce2d9d828c
The openshift_hostname variable added to the nodes was causing
openshift-ansible to check for pods with names that didn't match the
real names of the pods, i.e. names composed of IP addresses vs.
hostnames.
Change-Id: I794558bd6048e68e03540c10191f44aaa9fdb707
The updates/upgrades workflow must not run container image prepare
during `upgrade prepare` or `upgrade run`, but during `upgrade run` we
need to have the images available. So the intention is to run
`external-upgrade run --tags container_image_prepare` between
`upgrade prepare` and `upgrade run`. The situation is analogous for the
`update` and `external-update` commands.
Change-Id: I49de9a41c62204ab7cd835fec6dab8d59b054948
Closes-Bug: #1795881
These tasks had an empty name field, which breaks ansible's
--start-at-task functionality with a traceback, as it's not valid to
have unnamed tasks.
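For illustration, the change amounts to giving every task a name (the
task content here is illustrative):

  # Before: an unnamed task, which --start-at-task cannot reference
  - debug:
      msg: example
  # After: the task carries a descriptive name
  - name: Show an example message
    debug:
      msg: example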
Change-Id: I2386da62a87bfc290070fce13c2d35290565478a
Adds initial check mode support for the paunch container startup
configuration and kolla config files. This cleans up the formatting of
the generated files so that the diff shown during check mode with --diff
is useful.
We can't actually run paunch during check mode as it doesn't yet have
any support for a dry run mode.
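The formatting cleanup amounts to writing the generated JSON in a
stable, pretty-printed form so that --diff produces a readable diff;
roughly (the variable name and destination path are illustrative):

  - name: Write the paunch container startup config
    copy:
      content: "{{ container_startup_config | to_nice_json }}"
      dest: /var/lib/tripleo-config/container-startup-config-step_1.json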
Change-Id: I9add7b9fda50847c111e91735bd55a1ddf32f696
Adds check mode support for docker_puppet_tasks.
Since it's not possible to reliably determine what these tasks do, we
can't actually run them to see what might be changed. We can, however,
show the diff of the json file to get an idea of what would be run.
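A rough sketch of that diff step, comparing a check-mode copy of the
file against the current one (paths and variable name are illustrative):

  - name: Diff the docker-puppet-tasks json against the existing file
    command: >-
      diff -uN /var/lib/docker-puppet/docker-puppet-tasks2.json
      /var/lib/docker-puppet/check-mode/docker-puppet-tasks2.json
    register: docker_puppet_tasks_diff
    check_mode: false
    failed_when: false
    changed_when: docker_puppet_tasks_diff.rc == 1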
Change-Id: I19e8bc9eb93d8acc8ee7d737770f9cc7e63f7a27
Adds check mode support for docker_puppet. The updated json file is
written to /var/lib/docker-puppet/check-mode/docker-puppet.json
during check mode and then diffed with the existing version at
/var/lib/docker-puppet/docker-puppet.json.
When docker-puppet.py is run during check mode, the updated json file
under the check-mode directory is passed to the command. All generated
config files are then written under /var/lib/config-data/check-mode,
which is then recursively diffed with the existing config under just
/var/lib/config-data to report on all changed config files.
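The recursive diff can be sketched roughly as follows (the exact options
and registered variable are illustrative; a real task likely needs more
excludes and output handling):

  - name: Recursively diff the check mode config against the live config
    command: diff -ruN --exclude=check-mode /var/lib/config-data /var/lib/config-data/check-mode
    register: config_data_diff
    check_mode: false
    failed_when: false
    changed_when: config_data_diff.rc == 1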
Change-Id: I5c831e9546f8b6edaf3b0fda6c9fbef86c825a4c
Not needed any more as we fall back to copytruncate.
This reverts commit 2b1afc0483f849046b404099944653015fc11a04.
Change-Id: I8c2ad6329b5c4226edae9ea80baf3bca8dd06e65
The CLI commands running Ansible can crash if we send too much
single-line log output their way. This was happening on upgrades, when
we run Ansible with verbosity level 1.
The fix is twofold:
* If ceph-ansible finishes successfully, we don't print the
ceph-ansible output into the main log.
* If ceph-ansible fails, we do print the output, but we print it
line-by-line, which should give us much better readability than
before, and we shouldn't break the limits of the Mistral-Zaqar-CLI
message passing.
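The failure path can be sketched roughly like this (the variable and
command names are illustrative):

  - name: run ceph-ansible
    shell: "{{ ceph_ansible_command }}"
    register: ceph_ansible_outputs
    failed_when: false
    no_log: true
  - name: print ceph-ansible output in case of failure
    debug:
      var: ceph_ansible_outputs.stdout_lines
    when: ceph_ansible_outputs.rc != 0
    failed_when: ceph_ansible_outputs.rc != 0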
Change-Id: I6e0fc36749e74fce25f414c2547e49e2a20437ab
Closes-Bug: #1795689