When enabling network isolation, openshift-ansible picks the wrong IP
address as the default IP for the services. Set the IP to the ctlplane
network by default, which works both with and without network isolation.
Change-Id: I0deef6c2a71c1f2a34e6efed9586bbaa052b49c9
This is potentially confusing now that we have added
RHELRegistrationActions, since it is unused but still mentions
DeploymentActions.
Change-Id: Ifb335cb8055528fd9b64081b30e987524169dc95
This can be used in the case where, e.g., a satellite has been added
after the initial deployment, to re-register the nodes with the
satellite, including those nodes that already exist.
Change-Id: I944bc4c65b08de1ca08dd91f55764ebfe141dd9c
The default control plane subnet name is "ctlplane-subnet", so let's
create the right subnet for the containerized undercloud.
Note: the subnet can't be overridden (yet), so for now we rely on the
default.
Change-Id: I15954bced81ef6c3e1a1f4a73bc989f33d08d6f7
We want to configure a TLS URL for the undercloud's stackrc
when a user-specified or generated TLS certificate is used.
This patch updates the existing check so that
the PublicSSLCertificateAutogenerated parameter is also taken into
account when deciding if the SSL URL should be enabled.
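For illustration, the kind of condition this implies in the templates
(a rough sketch; the actual template wording may differ):

    conditions:
      public_tls_enabled:
        or:
        - not:
            equals:
            - {get_param: SSLCertificate}
            - ''
        - {get_param: PublicSSLCertificateAutogenerated}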
Change-Id: I7561b5de7749ca57f8ac8056b470228e1026eb31
This allows deploying OpenShift from the packaged openshift-ansible or
from a git checkout more easily, by setting the
OpenShiftAnsiblePlaybook heat parameter.
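For example, in a heat environment file (the playbook path below is
purely illustrative):

    parameter_defaults:
      OpenShiftAnsiblePlaybook: /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml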
Change-Id: I60594faa10dfd817d94038b3938d7de269330e2e
In the PreNetworkConfig, the order of resources sent to
os-collect-config changed after introducing the vhost user resource.
The current order is:
1. HostParametersDeployment
2. DpdkVhostGroupDeployment
3. RebootDeployment and EnableDpdkDeployment
Here the expectation is that RebootDeployment should complete
before DPDK is enabled, but since both are provided at the same time to
os-collect-config, DPDK is enabled first. The reason is that
RebootDeployment has its signal transport set to NONE, while
EnableDpdkDeployment was moved after the reboot because of the OVS 2.7
change that restarts vswitchd when DPDK is enabled. This causes a
failure.
This patch modifies the order as below (see the sketch after the list):
1. HostParametersDeployment and DpdkVhostGroupDeployment
2. RebootDeployment and RebootEnsureDeployment
3. EnableDpdkDeployment
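A rough sketch of the new ordering expressed via depends_on (resource
types and properties are illustrative, not the actual template content):

    RebootDeployment:
      type: OS::Heat::SoftwareDeployment
      depends_on: [HostParametersDeployment, DpdkVhostGroupDeployment]
    RebootEnsureDeployment:
      type: OS::Heat::SoftwareDeployment
      depends_on: RebootDeployment
    EnableDpdkDeployment:
      type: OS::Heat::SoftwareDeployment
      depends_on: RebootEnsureDeployment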
Change-Id: I5db52d5dd833833c989532931baea8fac03f9cb7
We're deploying containerized OpenShift, which means openshift-ansible
also deploys a containerized OVS. When not disabled explicitly, the bare
metal OVS service seemed to persist at least partially, and it likely
caused issues with the containerized OVS, where nodes in `kubectl get
nodes` would go from Ready status to NotReady shortly after the
deployment finished.
Change-Id: I8952198be7f78a699cf363af2e10f26714e94850
Closes-Bug: #1741224
Some of the options that had been hard-coded in the openshift-master
template should be configurable on a per-deployment basis. This patch
moves them out into an environment file instead.
Change-Id: I4b6f6180b11f36b1212b9e887365a99b6ae12017
This will allow arbitrary config of global variables for
openshift-ansible, e.g. customizing SDN params according to:
https://docs.openshift.org/3.6/install_config/configuring_sdn.html
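For instance, to customize the SDN plugin (assuming the exposed
parameter is named OpenShiftGlobalVariables; the variable below comes
from the openshift-ansible docs):

    parameter_defaults:
      OpenShiftGlobalVariables:
        os_sdn_network_plugin_name: 'redhat/openshift-ovs-multitenant'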
Also remove the setting which was meant to disable OVS service
handlers in openshift-ansible -- that wouldn't solve the problem
fully.
Change-Id: Ib87e5d38797da166826af90659e3d05da3352dcf
Related-Bug: #1741224
This exposes the IpsecVars heat parameter which in turn can set any
variable from the tripleo-ipsec ansible role.
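For example (the variable name below is an assumption based on the
tripleo-ipsec role defaults):

    parameter_defaults:
      IpsecVars:
        ipsec_algorithm: 'aes_gcm256-null'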
Change-Id: Ie6ef4aa05567c739884c1d402fc59eea80b31506
Until now, the OVS service file and the ovs-ctl command files
have been patched to allow OVS to run with the qemu group. In order
to remove these workarounds, a new group, hugetlbfs, is created
and shared between OVS and qemu. This patch contains
the changes required to apply this approach.
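Schematically (ansible syntax used purely for illustration; the user
names are assumptions, and the actual change is applied through the
deployment tooling):

    - name: create the shared hugetlbfs group
      group:
        name: hugetlbfs
        system: true
    - name: add the ovs and qemu users to it
      user:
        name: "{{ item }}"
        groups: hugetlbfs
        append: true
      loop:
        - openvswitch
        - qemu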
Depends-On: I674cbd45e17906448dd54acfdf7a7059880b7278
Change-Id: Iec6be0b99e84b0c89f791c3c9694fe10f3a1e7db
Packages and repositories for OpenShift 3.7 have already been created.
I've updated the version we are installing and tested this manually.
Change-Id: Id09242b637ca2a060f068887e10981eecaa59e4a
Make sure nodes have, at least, the region and zone labels to allow
deployments to schedule infra pods on them.
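For example, via openshift-ansible's node labels variable (the exact
wiring in the templates may differ; openshift_node_labels is the
upstream inventory variable):

    openshift_node_labels:
      region: infra
      zone: default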
Change-Id: If3849a46391cfac7eb5dd556d5b65c831026a95c
The output comes from ansible and is already fully readable as it is.
Also, because the previous task didn't have the 'failed_when: false'
directive, it would never reach the 'print xxx outputs' task in case of
failure, while showing the output twice on success.
It is safe to just delete the task.
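The pattern being removed looked roughly like this (schematic; the task
names and playbook path are placeholders):

    - name: run config
      command: ansible-playbook playbook.yml
      register: outputs
    # without 'failed_when: false' above, the task below is never
    # reached on failure, and on success it duplicates the output:
    - name: print outputs
      debug:
        var: outputs.stdout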
Change-Id: I56b44aec0a549e184f46344ea362f655ab80b3b0
The first phase sets up the node-to-node tunnels at step 1; this
ensures that the corosync cluster setup is done over the tunnels
and prevents any timeouts that were happening when the setup was
done after the cluster was up. This has the added value that all
the pacemaker communication is encrypted from the beginning.
The second phase is the VIP tunnel setup, which happens at step 3. This
is because we need the VIPs to be set up by pacemaker, and we also
need pacemaker itself to be up.
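Schematically, the two phases map to step-gated tasks (purely
illustrative; the role's phase-specific variables are omitted):

    - name: phase 1, node-to-node tunnels
      when: step|int == 1
      include_role:
        name: tripleo-ipsec
    - name: phase 2, VIP tunnels
      when: step|int == 3
      include_role:
        name: tripleo-ipsec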
Depends-On: Ib9a134648c74e5dfcbd7a8ebd2d67bda87992497
Change-Id: Ic402dc73044e2426b097ed0eaf57a77c5e6eef24
The *-variables.conf file for tuned is hardcoded for the
"cpu-partitioning" profile, which makes other profiles that also need
the isolated_cores variable fail.
Change-Id: Iaeedfe5d7c501453fd2039b81c1603eff6125ebf
This change updates the memory channels parameter default
value in the service yaml instead of the environment yaml file.
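Illustration of the move (parameter name assumed to be
OvsDpdkMemoryChannels; the default now lives in the service template):

    # in the service yaml:
    parameters:
      OvsDpdkMemoryChannels:
        description: Number of memory channels per socket used by DPDK
        type: string
        default: "4"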
Change-Id: Ia0a79b5dc3aa060b91d68e0d23cb1fb5b33eb020
Closes-Bug: #1741234
This converts "tags: stepN" to "when: step|int == N" for direct
execution as an ansible playbook, with a loop variable 'step'.
All tasks include the explicit |int cast.
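For example (the task body is a placeholder):

    # before:
    - name: example upgrade task
      tags: step2
      debug:
        msg: "upgrade step 2"

    # after:
    - name: example upgrade task
      when: step|int == 2
      debug:
        msg: "upgrade step 2"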
This also adds a set_fact task for handling the package removal
with the UpgradeRemovePackages parameter (no change to the interface).
yaml-validate now also checks for duplicate 'when:' statements.
Q upgrade spec @ Ibde21e6efae3a7d311bee526d63c5692c4e27b28
Related Blueprint: major-upgrade-workflow
[0]: 394a92f761/tripleo_common/utils/config.py (L141)
Change-Id: I6adc5619a28099f4e241351b63377f1e96933810
In preparation for the new Contrail microservices, the current
templates are removed.
Change-Id: Iea61fefe9a147b96cf00a008bbb61a482eb95a75
Closes-Bug: #1741452
By default OpenShift won't allow scheduling on masters. We'll want to
deploy OpenStack pods on the controllers, so we need this enabled, and
we'll need it for CI too.
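One way to express this in inventory terms (hypothetical placement;
openshift_schedulable is the upstream openshift-ansible variable that
controls scheduling on masters):

    openshift_schedulable: true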
Change-Id: Ia4190a23c04bda52b17eac50e57da891af615ff4
Since the ansible-tripleo-ipsec package is now available and
tripleo-heat-templates relies on it, we no longer need to clone
the tripleo-ipsec repo as part of the ansible tasks.
Change-Id: I513f748abeaee6589829e1d45483db9a7e7791ea
... so we can know how long resource configuration takes in Puppet
catalogs, and more easily debug why we have timeouts.
Change-Id: If3fae8837140caae91120e46b4880146ffe22afc
Background:
extraconfig/pre_deploy/rhel-registration interface has been maintained
for some time now but it's missing some features and the code overlaps
with ongoing efforts to convert everything to Ansible.
Plan:
Consume ansible-role-redhat-subscription from TripleO, so all the logic
goes into the Ansible role, and not into TripleO anymore.
The single parameter exposed to TripleO is RhsmVars, and any Ansible
parameter can be given to make the role work.
The parameter can be overridden per role, so we can address specific
cases where some Director roles would have specific RHSM configs.
Once we have feature parity between what is done here and what existed
before, we'll deprecate the old interface.
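An illustrative environment snippet (the RhsmVars keys follow
ansible-role-redhat-subscription naming and are assumptions here):

    parameter_defaults:
      RhsmVars:
        rhsm_activation_key: "my-activation-key"
        rhsm_org_id: "1234567"
        rhsm_server_hostname: "satellite.example.com"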
Testing:
Because RHSM can't be tested on CentOS, this code was manually tested on
RHEL against the public subscription portal. We also verified that the
generated Ansible playbooks were correct and called the role with the
right parameters.
Documentation:
We'll work on documentation during the following weeks to explain
how to switch from the previous interface to the new one, and also
document new use cases requested by our users.
Change-Id: I8610e4f1f8478f2dcbe3afc319981df914ce1780
There are still some templates with the wrong
alias name. This patch updates them with the
correct version.
Change-Id: I43549ac98f3736029d4aaad1ead745caf40f9299
We weren't creating the default flavors for the undercloud. Do it here!
Co-Authored-By: Martin André <m.andre@redhat.com>
Change-Id: Ic0b00ab42422e8d7f1ddd750d993c7919af0823e
A previous (failed/hanging?) yum process blocks 'yum makecache'
and 'yum check-update' operations, which leads to a timeout during the
minor update.
Change-Id: I461c1c722944813493f53f339054f420d6ddbe15
Related-Bug: #1704131
These services only work with the new Ansible deploy workflow, which
is currently considered experimental because it has yet to be
integrated with the UI.
Change-Id: Ia3f6b62118696792c6581f08f1beb5c75742c66f