Since Rocky, Neutron has supported enabling routed networks on an
existing network and subnet if certain conditions are met. The TripleO
undercloud meets these conditions.
This change updates the extraconfig post script that creates the
Neutron ctlplane networks. Any non-routed network is updated to a
routed network if 'enable_routed_networks' = True in the
configuration.
Closes-Bug: #1790877
Change-Id: Idf2dd4c158d29b147d48153d3626cf403059d660
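A minimal sketch of the conversion, assuming an openstacksdk
connection; the 'undercloud' cloud entry is hypothetical and this is
not the actual post script:

    import openstack

    conn = openstack.connect(cloud='undercloud')  # hypothetical clouds.yaml entry

    network = conn.network.find_network('ctlplane')
    segments = list(conn.network.segments(network_id=network.id))
    # A subnet can only be associated with a segment when the network
    # has exactly one segment -- one of the conditions mentioned above.
    if len(segments) == 1:
        for subnet in conn.network.subnets(network_id=network.id):
            if subnet.segment_id is None:
                conn.network.update_subnet(subnet, segment_id=segments[0].id)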
* https://review.openstack.org/#/c/614540/ converted the undercloud
post-deploy step to Python, but set the resources:MEMORY_MB property
to 1 where it was previously zero. This fixes that typo.
Change-Id: If26e17cb2079ef994cc0e0506cbf40bf9023808e
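For context, a hedged sketch of what the corrected step does:
baremetal flavors zero out the standard resource classes so scheduling
is driven by custom resource classes alone. The 'undercloud' cloud
entry and 'baremetal' flavor name are assumptions:

    import openstack

    conn = openstack.connect(cloud='undercloud')  # assumed clouds.yaml entry
    flavor = conn.get_flavor('baremetal')         # assumed flavor name
    conn.set_flavor_specs(flavor.id, {
        'resources:VCPU': '0',
        'resources:MEMORY_MB': '0',  # the typo set this to '1'
        'resources:DISK_GB': '0',
    })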
With the upgrade to puppet 5, we can no longer use dots in the hieradata
key lookups. This change updates the THT for firewall_rules,
haproxy_endpoints and haproxy_userlists to use the colon notation.
Change-Id: I6f67153e04aed191acb715fe8cfa976ee2e75878
Related-Bug: #1803024
The openshift-master service will fail any time it is used with a
stack update. This is because the openshift_upgrade var is not
defined, but is checked whenever tripleo_stack_action == 'UPDATE'.
This patch adds a check for openshift_upgrade being defined before
checking if it is True.
Closes-Bug: 1794824
Change-Id: I3a598724154a3242b777eefed9304300c45d8c29
Openshift-ansible gives us a friendly error when we pick the wrong
scaleup playbook:
"Please run playbooks/openshift-master/scaleup.yml if you need to scale
up both masters and nodes. This playbook is only needed if you are only
adding new nodes and not new masters"
Change-Id: Ibf52b9dbabc9a4f86c11b7de345c3b73e157435c
Closes-Bug: #1802324
For the openshift-node service, the new_node detection was checking
stdout instead of rc, causing it to always tag nodes as new. This
commit fixes that.
Change-Id: I518f386395b515f59e98877274f4a0fce52ec4d5
Closes-Bug: #1802323
In Python 3, dict.has_key() is no longer supported; it can be
replaced with the `in` operator.
Change-Id: I86344ac971b1e0c75abe5fdd2cddab884daedd9e
Closes-Bug: #1797770
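A two-line illustration of the replacement (the dictionary contents
are made up):

    settings = {'debug': True}

    # Python 2 only -- dict.has_key() was removed in Python 3:
    #   if settings.has_key('debug'): ...
    # Portable replacement using the in operator:
    if 'debug' in settings:
        print('debug enabled')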
For python3 packaging we look for /usr/bin/env python shebangs to swap
out with the python3 binary. Rather than looking for /usr/bin/python,
it's easier to use /usr/bin/env python since the other files already
use it. Update this script for consistency.
Change-Id: I3606c356a2103ce1e26e78e1192f0713b51e1ca4
Related-Blueprint: python3-support
Configuring Nova (quota, flavors) and Mistral (workbooks,
workflows, etc.) is a lot faster if we do it in python.
Initial undercloud install - 3.5x faster
----------------------------------------
Run deployment UndercloudPostDeployment ---- 130.50s < Shell
Run deployment UndercloudPostDeployment ---- 37.39s < Python
Re-Running undercloud install - 10x faster
------------------------------------------
Run deployment UndercloudPostDeployment ---- 405.01s < Shell
Run deployment UndercloudPostDeployment ---- 39.95s < Python
Change-Id: If7b3ad701e434ed0d606356b9bbab2716d53c5bb
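One reason for the speedup, sketched below: a single authenticated SDK
connection is reused for every call, instead of paying CLI startup and
re-authentication once per resource. The quota values and flavor names
are illustrative, not the actual configuration:

    import openstack

    conn = openstack.connect(cloud='undercloud')  # assumed clouds.yaml entry

    # Every call below reuses the same token and the same process,
    # instead of forking `openstack ...` for each resource:
    conn.set_compute_quotas('admin', cores=-1, instances=-1, ram=-1)
    for name in ('baremetal', 'control', 'compute'):
        if conn.get_flavor(name) is None:
            conn.create_flavor(name, ram=4096, vcpus=1, disk=40)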
_run_command() returns the output of the command executed.
If the Neutron API is disabled it would return the string
'false', which is truthy as far as Python is concerned.
We also need a depends_on to ensure the link to hiera.yaml
created in extraconfig/post_deploy/undercloud_post.sh is
already in place.
Change-Id: Iec958a92433d3f671862422ac85bc78d7babc01d
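The pitfall in miniature; the string comparison shown is illustrative
of the fix, not the exact patch:

    out = 'false'   # what _run_command() returns when the API is disabled
    if out:         # any non-empty string is truthy, even 'false'
        print('API wrongly treated as enabled')
    if out.strip() == 'true':   # compare the text instead
        print('API actually enabled')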
This updates the variables to reference v3.11 instead of 3.10 and
reworks how the oreg_url openshift-ansible variable is computed, since
DockerOpenShiftBaseImage is deprecated.
Co-Authored-By: Martin André <m.andre@redhat.com>
Depends-On: Ibbd5ff9d3597f5add440b92a27a2f2f669f7bdbe
Depends-On: I764944bda6534f6b799fa0f4fb2e7980c22b1d67
Change-Id: I569f9da7ba9312a726360a3543b920413f445cbe
Currently no default plan is created during a container-based undercloud
deployment.
This patch creates a default deployment plan if both of the following
conditions apply:
- The /usr/share/openstack-tripleo-heat-templates directory exists
(We need the templates to create the plan)
- There is no container named `overcloud` in swift yet
(If a deployment is run more than once we don't want to recreate
the default plan)
Closes-Bug: #1798590
Change-Id: Id03631432b1fedd75ee3ddba67bfe7d5d6049a07
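A rough sketch of the two guards; the 'undercloud' cloud entry is an
assumption and plan creation itself is elided:

    import os
    import openstack

    conn = openstack.connect(cloud='undercloud')  # assumed cloud name
    templates = '/usr/share/openstack-tripleo-heat-templates'

    if os.path.isdir(templates) and conn.get_container('overcloud') is None:
        # Templates are installed and no plan container exists yet, so
        # it is safe to create the default plan here (creation elided).
        pass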
According to docs[1] the minimum requirements when deploying Gluster are:
- at least one raw block device to be used for Gluster storage
- minimum of 3 nodes
This change adjusts the defaults that we set in THT to use one disk for
Gluster storage instead of 3.
[1] https://docs.openshift.com/container-platform/3.10/install/prerequisites.html#hardware-glusterfs
Change-Id: Iab068a18ac9bce79824f5b7edb4a2931f5b66638
In change f2e72352b1376ce719614e9cad4e4c71a3f9c3d8 we did the following
in nova-base.yaml:
- nova::placement::os_region_name: {get_param: KeystoneRegion}
+ nova::placement::region_name: {get_param: KeystoneRegion}
But in the IHA script we looked for the os_region_name config key in
the placement ini section of nova.conf, which is now missing.
We need to adapt that script as well. Without this fix we'll error
out like this:
Oct 16 09:08:12 overcloud-novacomputeiha-0 dockerd-current[14673]: File "/var/lib/nova/instanceha/check-run-nova-compute", line 147, in create_nova_connection
Oct 16 09:08:12 overcloud-novacomputeiha-0 dockerd-current[14673]: region_name=options["os_region_name"][0],
Oct 16 09:08:12 overcloud-novacomputeiha-0 dockerd-current[14673]: KeyError: 'os_region_name'
Closes-Bug: #1798560
Change-Id: I8906145955ab6c444efdfa73beca073a62c26e26
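The failing lookup and its fix, reduced to the essentials; `options`
here stands in for the parsed [placement] section of nova.conf:

    options = {'region_name': ['regionOne']}  # made-up parsed config

    # Old lookup -- raises KeyError since the rename:
    #   region_name = options['os_region_name'][0]
    region_name = options['region_name'][0]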
- use static import of role since dynamic is not required
- apply variables to the task, not the block
- remove block since it does not appear to be necessary
Change-Id: I9cb0dd38c972ed0e81cbd7228490018d600eaa5c
This prevents hardcoding it in the template and allows checking for
the right service name during new node detection, as the origin and
openshift-enterprise deployment types use different prefixes for the
services.
Change-Id: Id5cf7d6f7888b759eec7c969275fe15779b7b775
This operation is dangerous and should be done by the operator prior to
deploying openshift. We assume the disks are ready for glusterfs
installation.
Change-Id: I4bd05ee8db9ed944edd7942a1a7aef0de0ab07f0
This variable is used in the docker_image_availability check to
determine how to query the registries for image availability. Setting
this variable allows us to enable the docker_image_availability check
in the gate.
Change-Id: Ia1da542d342228bb28ad487371fad8d3ffc62d0b
The `openshift_examples_modify_imagestreams` ansible variable controls
whether openshift-ansible changes the imagestream registry hosts to be
the same as where the openshift images are pulled. In our case this
points to the container image registry on the undercloud by default.
However, due to how this feature was implemented in openshift-ansible
[1], the imagestreams are only modified when the original value is
registry.redhat.com, i.e. when deploying openshift-enterprise,
explaining why this issue remained unnoticed until now.
[1] 95bc2d2e61/roles/openshift_examples/tasks/main.yml (L52-L55)
Change-Id: I4949f53e966872f775833b8d36d96ef72cf13845
Openshift-ansible already sets the right firewall rules on the
provisioned nodes; there is no need to set up (some of) the rules
ourselves.
Add the 'OS::TripleO::Services::TripleoFirewall' to all the OpenShift
roles so that the operator can still set additional rules if desired.
Change-Id: I1e8ca10069c3f1017207abfebb803cb7aa3835a8
With the default setting, the keepalived that we deploy on the master
node collides with the one that is set up on the undercloud. We simply
need to use a different virtual_router_id_base to prevent
virtual_router_id collision.
Change-Id: I92ef081a111f93ddce4ec42400bcb648b7f7def0
While introducing the openshift-node service in 7373adc72e, some code
was moved around, which broke the OpenShift external_deploy_task
playbook in the case of a stack update due to an undefined ansible
variable.
Rename the new_masters var to new_master_nodes and introduce the
has_new_nodes boolean var that indicates there is at least one new
node in the deployment.
Related-Bug: 1794824
Change-Id: I2f386b5507836deda0816616dd7add8a0b53dfd3
This allows us to deploy openshift without the need to install
openshift-ansible in the mistral container image or in the undercloud.
Co-Authored-By: Martin André <m.andre@redhat.com>
Depends-On: Ied75bfbeed71aca83962e60bfc801a2527f5dfba
Change-Id: I1e28e63c8a3a30dfe1e95924f9b4086fcf9513fb
Previously we were only deploying a master node. This commit adds the
worker and infra services to the deployed node and configures it as an
all-in-one node. In order to do so, we need to disable HAproxy when
deploying in all-in-one as the HAproxy instance Openshift deploys on
the infra node conflicts with the one we normally set up. They both
bind ports 80 and 443.
Also removes the useless ComputeServices parameter that only makes
sense in a multinode environment.
Change-Id: I6c7d1b3f2fa5c7b1d9cf695c9e021a4192e5d23a
Depends-On: Ibc98e699d34dc6ab9ff6dce0d41f275b6403d983
Depends-On: I0aa878db62e28340d019cd92769f477189886571
Remove scripts and templates which dealt with Pacemaker and its
resource restarts before we moved to containerized deployments. These
should all now be unused.
Many environments had this mapping:
OS::TripleO::Tasks::ControllerPreConfig: OS::Heat::None
OS::TripleO::Tasks::ControllerPostConfig: OS::Heat::None
OS::TripleO::Tasks::ControllerPostPuppetRestart: ../../extraconfig/tasks/post_puppet_pacemaker_restart.yaml
The ControllerPostPuppetRestart is only ever referenced from
ControllerPostConfig, so if ControllerPostConfig is OS::Heat::None, it
doesn't matter what ControllerPostPuppetRestart is mapped to.
Change-Id: Ibca72affb3d55cf62e5dfb52fe56b3b1c8b12ee0
Closes-Bug: #1794720
Previously the path to the openshift-ansible's prerequisites playbook
was hardcoded to
/usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml. This
commit introduces the `OpenShiftPrerequisitesPlaybook` heat parameter
to make it configurable.
Also add more explicit descriptions for the other playbook path
parameters and update the default path for OpenShiftUpgradePlaybook,
which had been broken since the move to openshift-ansible 3.10.
Change-Id: I2260cb8b0cef9650c707d4db917a3281a697912d