Openshift-ansible already sets the right firewall rules on the
provisioned nodes, so there is no need to set up (some of) the rules
ourselves.
Add the 'OS::TripleO::Services::TripleoFirewall' service to all the
OpenShift roles so that the operator can still set additional rules if
desired.
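For illustration, this is roughly what the change enables: the service
is listed in the role's ServicesDefault, and the operator can then
inject extra rules through the tripleo::firewall::firewall_rules hiera
hook documented for TripleO (role name, service list and rule below are
examples only, not the actual templates):
# roles_data fragment (example):
- name: OpenShiftMaster
  ServicesDefault:
    - OS::TripleO::Services::TripleoFirewall
    # ... the rest of the role's services ...
# environment fragment (example):
parameter_defaults:
  ExtraConfig:
    tripleo::firewall::firewall_rules:
      '300 allow example app':
        dport: 8443
        proto: tcp
        action: accept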
Change-Id: I1e8ca10069c3f1017207abfebb803cb7aa3835a8
At the moment the 'OS::TripleO::Services::Timesync' service is
synonymous with 'OS::TripleO::Services::Ntp'. Let's use the more
generic Timesync service so that the roles pick up the new default if
the mapping for 'OS::TripleO::Services::Timesync' ever changes.
This better aligns with the rest of the roles.
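As a sketch of why this matters: the default resource registry (at the
time of this change, and as stated above) aliases Timesync to the Ntp
implementation, so roles that reference Timesync automatically follow
whatever that one alias later points to:
resource_registry:
  # Default alias; swapping this single mapping (e.g. to a chrony-based
  # template) updates every role that lists Timesync in ServicesDefault.
  OS::TripleO::Services::Timesync: OS::TripleO::Services::Ntp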
Change-Id: I44f706ce7dd1909ffd3805337fc6d9a5ce6de80f
The OpenShift roles should include the OS::TripleO::Services::Rhsm
service for Red Hat Subscription Management so that the provisioned
nodes can register with a Satellite or CDN.
Add the Podman service to OpenShiftAllInOne to be more consistent with
the other roles.
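With Rhsm in the roles, registration is driven by the RhsmVars
parameter; a hedged example environment (the values are placeholders
and the exact set of supported keys comes from the
ansible-role-redhat-subscription role):
parameter_defaults:
  RhsmVars:
    rhsm_method: satellite
    rhsm_org_id: "Default_Organization"
    rhsm_activation_key: "openshift-nodes-key"
    rhsm_satellite_url: "https://satellite.example.com"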
Change-Id: I08862635c68eddbb0940863c43867ece1b289ee5
With the default setting, the keepalived instance that we deploy on the
master node collides with the one that is set up on the undercloud. We
simply need to use a different virtual_router_id_base to prevent a
virtual_router_id collision.
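For context, keepalived identifies each VRRP instance by its
virtual_router_id, and two instances with the same id on the same
segment conflict; a minimal illustrative keepalived.conf fragment
(values are made up, not the actual generated config):
vrrp_instance openshift_vip {
    interface eth0
    state MASTER
    # Must not clash with the id used by the undercloud's keepalived on
    # the same L2 segment; derived from virtual_router_id_base.
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.168.24.60
    }
}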
Change-Id: I92ef081a111f93ddce4ec42400bcb648b7f7def0
While introducing the openshift-node service in 7373adc72e, some code
was moved around, which broke the OpenShift external_deploy_task
playbook on stack updates due to an undefined ansible variable.
Rename the new_masters var to new_master_nodes and introduce the
has_new_nodes boolean var, which indicates that there is at least one
new node in the deployment.
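A rough sketch of how such vars can be derived in the playbook (the
task layout and the input variable names other than new_master_nodes
and has_new_nodes are hypothetical):
- name: Compute the list of newly added master nodes
  set_fact:
    # master_nodes / existing_master_nodes are hypothetical inputs here
    new_master_nodes: "{{ master_nodes | difference(existing_master_nodes | default([])) }}"
- name: Flag whether the deployment contains at least one new node
  set_fact:
    # always defined, so later tasks can reference it on stack updates
    has_new_nodes: "{{ (new_master_nodes | length) > 0 }}"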
Related-Bug: 1794824
Change-Id: I2f386b5507836deda0816616dd7add8a0b53dfd3
This allows us to deploy OpenShift without the need to install
openshift-ansible in the mistral container image or on the undercloud.
Co-Authored-By: Martin André <m.andre@redhat.com>
Depends-On: Ied75bfbeed71aca83962e60bfc801a2527f5dfba
Change-Id: I1e28e63c8a3a30dfe1e95924f9b4086fcf9513fb
Previously we were only deploying a master node. This commit adds the
worker and infra services to the deployed node and configures it as an
all-in-one node. In order to do so, we need to disable HAProxy when
deploying all-in-one, as the HAProxy instance OpenShift deploys on the
infra node conflicts with the one we normally set up: they both bind
ports 80 and 443.
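Disabling a TripleO-managed service is done by mapping it to
OS::Heat::None in the resource_registry; a sketch for the all-in-one
environment (the file name is illustrative):
# environments/openshift-all-in-one.yaml (illustrative name)
resource_registry:
  # OpenShift's own HAProxy on the infra node binds 80/443, so do not
  # deploy the TripleO-managed HAProxy alongside it.
  OS::TripleO::Services::HAproxy: OS::Heat::None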
Also removes the useless ComputeServices parameter that only makes
sense in a multinode environment.
Change-Id: I6c7d1b3f2fa5c7b1d9cf695c9e021a4192e5d23a
Depends-On: Ibc98e699d34dc6ab9ff6dce0d41f275b6403d983
Depends-On: I0aa878db62e28340d019cd92769f477189886571
Previously the path to openshift-ansible's prerequisites playbook was
hardcoded to
/usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml. This
commit introduces the `OpenShiftPrerequisitesPlaybook` heat parameter
to make it configurable.
Also add more explicit descriptions for the other playbook path
parameters and update the default path for OpenShiftUpgradePlaybook,
which had been broken since the move to openshift-ansible 3.10.
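A sketch of what the new parameter looks like in the template (the
description wording is illustrative):
parameters:
  OpenShiftPrerequisitesPlaybook:
    type: string
    default: /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
    description: >
      Path to the openshift-ansible prerequisites playbook that is run
      before the main deployment playbook.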
Change-Id: I2260cb8b0cef9650c707d4db917a3281a697912d
Until now, it was loaded from within the container; this doesn't work
with SELinux separation.
Change-Id: I3d63d1df7496d3b8a5883b07e9d40aa21153c086
Related-Bug: 1794550
Currently the ip_vs module is loaded from the keepalived container;
while that works in a non-SELinux-separated env, it doesn't work with
podman.
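A minimal sketch of loading the module from the host side instead, via
the service template's host_prep_tasks and Ansible's modprobe module
(the actual change may differ in detail):
host_prep_tasks:
  - name: Ensure the ip_vs kernel module is loaded on the host
    modprobe:
      name: ip_vs
      state: present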
Change-Id: I71e638bedde3836e05cffab53ad80bfd35313a31
Related-Bug: 1794550
Until now, it was loaded from within the container; this doesn't work
with SELinux separation.
Change-Id: Ia2cd08b9b7950ebca4d75938ae4329641c2d6f7c
Depends-on: Ic9076a0a1a8e1360495dcf0eb766118ec63dc362
Related-Bug: 1794550
Since we moved to a containerized undercloud, TLS Everywhere
deployments are broken. Namely, we are missing two things:
A. The NAT iptables rule for the nova metadata service to be reachable
B. The setting 'service_metadata_proxy=false' needs to be set for nova
metadata, otherwise the curl calls to set up IPA will fail with the
following:
[root@overcloud-controller-0 log]# curl http://169.254.169.254/openstack/2016-10-06
<html>
<head>
<title>400 Bad Request</title>
</head>
<body>
<h1>400 Bad Request</h1>
X-Instance-ID header is missing from request.<br /><br />
</body>
</html>
A. Is fixed by adding a conditional iptables rule that is only created
when deploying an undercloud (where we set MetadataNATRule to true);
the equivalent plain iptables command is sketched after this list.
B. Is fixed by setting NeutronMetadataProxySharedSecret to '' on the
undercloud and then setting the corresponding hiera keys only when the
parameter != ''. We tried simpler alternatives, like setting
NeutronMetadataProxySharedSecret to null, but that breaks heat
validation since the parameter is required (we also tried making the
parameter optional with a default of '', but that broke as well).
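For reference, the rule verified in the test output below corresponds
roughly to the following plain iptables command (reconstructed from
that output; treat it as a sketch rather than the exact template
change):
iptables -t nat -A PREROUTING -i br-ctlplane -d 169.254.169.254/32 \
  -p tcp -m multiport --dports 80 -m state --state NEW \
  -m comment --comment "999 undercloud nat ipv4" \
  -j REDIRECT --to-ports 8775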
While we're at it, we also remove the neutron metadata service from the
undercloud as it is not needed.
Tested by deploying an undercloud with this change and observing:
A.
Chain PREROUTING (policy ACCEPT 106 packets, 6698 bytes)
pkts bytes target prot opt in out source destination
0 0 REDIRECT tcp -- br-ctlplane * 0.0.0.0/0 169.254.169.254 multiport dports 80 state NEW /* 999 undercloud nat ipv4 */ redir ports 8775
B.
grep -ir ^service_metadata_proxy /var/lib/config-data/puppet-generated/nova/etc/nova/nova.conf
service_metadata_proxy=False
Also a deployment of a TLS overcloud was successful.
Change-Id: Id48df6db012fb433f9a0e618d0269196f4cfc2c6
Co-Authored-By: Martin Schuppert <mschuppe@redhat.com>
Closes-Bug: #1795722
Docker doesn't complain when a directory in a bind mount doesn't
exist, while Podman does. Ensure the directory is present via the
mistral-executor container's host prep tasks.
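A sketch of such a host prep task (the path is hypothetical; the real
one is whichever host directory the mistral-executor container bind
mounts):
host_prep_tasks:
  - name: Ensure the bind-mounted directory exists before starting the container
    file:
      # hypothetical path; podman refuses to start if it is missing
      path: /var/lib/mistral
      state: directory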
Change-Id: I32993c6dfbd561c16ef1fdce508bf899aff1d940
Fixes-Bug: #1796188
We were using a deprecated interface to set this value. This uses the
correct one.
Closes-Bug: #1793665
Change-Id: Ib7717911aba3267f855ac6682b0144bfe92034fb