At the moment the 'OS::TripleO::Services::Timesync' service is
synonymous with 'OS::TripleO::Services::Ntp'. Let's use the more generic
Timesync service to pick up the new default in the event the value for
'OS::TripleO::Services::Timesync' changes.
This better aligns with the rest of the roles.
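A minimal sketch of the role change described above; the role name and list contents are assumptions for illustration:

```yaml
# roles_data.yaml fragment (illustrative; role name assumed)
- name: Standalone
  ServicesDefault:
    # Reference the generic alias rather than Ntp directly, so the
    # role picks up whatever Timesync is mapped to.
    - OS::TripleO::Services::Timesync
```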
Change-Id: I44f706ce7dd1909ffd3805337fc6d9a5ce6de80f
The OpenShift roles should include the OS::TripleO::Services::Rhsm
service for Red Hat Subscription Management so that the provisioned
nodes can register with a Satellite or CDN.
Add the Podman service to OpenShifAllInOne to be more consistent with
other roles.
Change-Id: I08862635c68eddbb0940863c43867ece1b289ee5
We expect the Keepalived and HAproxy services to be deployed on the
OpenShift master nodes, let's require them in the openshift heat
environment file. This prevents an issue when the docker-ha environment
is loaded because it would redefine these resources.
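An illustrative environment fragment for what requiring these services looks like; the template paths are assumptions, not taken from the actual patch:

```yaml
# environments/openshift.yaml (illustrative; paths assumed)
resource_registry:
  # Pin Keepalived and HAproxy here so a later environment such as
  # docker-ha.yaml does not redefine these resources.
  OS::TripleO::Services::Keepalived: ../docker/services/keepalived.yaml
  OS::TripleO::Services::HAproxy: ../docker/services/haproxy.yaml
```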
Change-Id: I57a7ea854bd8db4e20af1a608a6937604c0e3bd2
The current approach has several disadvantages:
- Requires shelling out to the hiera CLI, and is coupled to the puppet hieradata
- The bootstrap_nodeid is only unique per Role, not per service, so if you
deploy a service spanning more than one role it will evaluate true for
every role, not only once.
Instead let's use the per-service short_bootstrap_node_name, which is
now available directly via the ansible inventory, ref
https://review.openstack.org/#/c/605046/
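As a hedged sketch, a bootstrap-only task using the inventory variable might look like this; the task, command, and comparison variable are hypothetical, only the per-service <service>_short_bootstrap_node_name pattern comes from the commit:

```yaml
# Illustrative ansible task (names assumed)
- name: Run the DB sync only once, on the mysql bootstrap node
  command: example-db-sync   # hypothetical command
  # Evaluates true on exactly one node per service, not per role.
  when: ansible_hostname == mysql_short_bootstrap_node_name
```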
This is the first part of a cleanup for inconsistent handling of
bootstrap node evaluation, triggered by bug #1792613
Change-Id: Iefe4a37e8ced6f4e9018ae0da00e2349390d4927
Partial-Bug: #1792613
Depends-On: Idcee177b21e85cff9e0bf10f4c43c71eff9364ec
This is no longer handled since the TLS handling tasks were converted
to ansible, and in the context of this series we need to remove it
because it references bootstrap_nodeid.
Partial-Bug: #1792613
Change-Id: Ib32177b116f148f007574847320566e32240cf96
It was using a wrong name, introduced by accident when it was added to
the sample environment generator.
Change-Id: I154af6d0b7ebf5cd339d5d06eaaf9b1ab66814b0
Related-Bug: #1796022
The standalone deployer adds "ansible_connection: local" to facilitate
all-in-one deployments. This patch passes on this setting when generating
the inventory used by ceph-ansible.
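An illustrative fragment of the generated ceph-ansible inventory under this change; the host and group names are assumptions:

```yaml
# ceph-ansible inventory (illustrative; names assumed)
mons:
  hosts:
    standalone:
      ansible_connection: local   # passed through from the deployer
```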
Change-Id: I694c4b3c7fb98e11d7a52eed4072a37471c0e405
The pool configuration for an ha deployment of designate looks quite
a bit different from the non-HA one, so it's useful to provide a
separate example environment for it.
Change-Id: I69b3c44b368bab3fff885e67fa6523fbb1c80347
Add a systemd service that creates the loopback device required by
cinder's iSCSI backend on system startup.
This patch also consolidates the host_prep_tasks for the HA and non-HA
versions of the cinder-volume service. The list of tasks is identical,
and rather than repeating it in each template, the tasks are defined
once in cinder-common.yaml.
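A hedged sketch of what such a host_prep_tasks entry could look like; the file path, size, and device handling are assumptions, not the literal content of cinder-common.yaml:

```yaml
# Illustrative host_prep_tasks entry (paths and sizes assumed)
- name: ensure the cinder-volumes loopback device exists
  shell: |
    if ! losetup -a | grep -q cinder-volumes; then
      truncate -s 10G /var/lib/cinder/cinder-volumes
      losetup -f /var/lib/cinder/cinder-volumes
    fi
```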
Closes-Bug: #1581092
Change-Id: Icc04003a9e90b66720d968c6c8f1c687156b677e
With the default setting, the keepalived that we deploy on the master
node collides with the one that is setup on the undercloud. We simply
need to use a different virtual_router_id_base to prevent
virtual_router_id collision.
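For illustration, with a hypothetical parameter name, the override could look like:

```yaml
# Illustrative environment snippet (parameter name hypothetical)
parameter_defaults:
  # Any base that differs from the undercloud's avoids the collision.
  KeepalivedVirtualRouterIdBase: 100
```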
Change-Id: I92ef081a111f93ddce4ec42400bcb648b7f7def0
While introducing the openshift-node service in 7373adc72e, some code
was moved around and that broke the OpenShift external_deploy_task
playbook in the case of a stack update, due to an undefined ansible
variable.
Rename the new_masters var into new_master_nodes and introduce the
has_new_nodes boolean var that indicates there is at least one new node
in the deployment.
Related-Bug: 1794824
Change-Id: I2f386b5507836deda0816616dd7add8a0b53dfd3
This allows us to deploy openshift without the need to install
openshift-ansible in the mistral container image or in the undercloud.
Co-Authored-By: Martin André <m.andre@redhat.com>
Depends-On: Ied75bfbeed71aca83962e60bfc801a2527f5dfba
Change-Id: I1e28e63c8a3a30dfe1e95924f9b4086fcf9513fb
Previously we were only deploying a master node. This commit adds the
worker and infra services to the deployed node and configures it as an
all-in-one node. In order to do so, we need to disable HAproxy when
deploying all-in-one, as the HAproxy instance OpenShift deploys on
the infra node conflicts with the one we normally set up. They both
bind ports 80 and 443.
Also removes the useless ComputeServices parameter that only makes
sense in a multinode environment.
Change-Id: I6c7d1b3f2fa5c7b1d9cf695c9e021a4192e5d23a
Depends-On: Ibc98e699d34dc6ab9ff6dce0d41f275b6403d983
Depends-On: I0aa878db62e28340d019cd92769f477189886571
Remove scripts and templates which dealt with Pacemaker and its
resource restarts before we moved to containerized deployments. These
should all now be unused.
Many environments had this mapping:
OS::TripleO::Tasks::ControllerPreConfig: OS::Heat::None
OS::TripleO::Tasks::ControllerPostConfig: OS::Heat::None
OS::TripleO::Tasks::ControllerPostPuppetRestart: ../../extraconfig/tasks/post_puppet_pacemaker_restart.yaml
The ControllerPostPuppetRestart is only ever referenced from
ControllerPostConfig, so if ControllerPostConfig is OS::Heat::None, it
doesn't matter what ControllerPostPuppetRestart is mapped to.
Change-Id: Ibca72affb3d55cf62e5dfb52fe56b3b1c8b12ee0
Closes-Bug: #1794720
With OOO we have configured a separate DB for placement for the
undercloud and overcloud since the beginning.
But the placement_database config options were reverted with
https://review.openstack.org/#/c/442762/1 , which means that so far, even
if the config option was set, it was not used. With Rocky the options were
introduced again, which is not a problem on a freshly installed env, but
is on upgrades from Queens to Rocky.
We should use the same DB for both fresh deployments on, and upgrades to,
Rocky before we switch to the new DB as part of the extraction of placement.
Closes-Bug: #1797119
Change-Id: I6eb8cb62d337fa4f6e6542391de251519e246923
With this patch, we're able to deploy a "standalone" stack using
podman on a fully-enabled SELinux system.
Change-Id: I4bfa2e1d3fe6c968c4d4a2ee1c2d4fb00a1667a1
Rocky added nova-scheduler worker support, so we need to be able to
configure (and tune) it as necessary.
Change-Id: Idd702e01b67a2f25eb621d1251e8457ea376f51b
Closes-Bug: #1796933
The octavia services need to set the owner of their log directories and
files to the octavia user.
Closes-Bug: #1796934
Change-Id: I6d7ac0630cc586794469ab5c572933825de0dc20
To match the previous functionality when not using config-download, the
common deploy step tasks should be skipped for already deployed nodes
when using --skip-deploy-identifier.
This patch adds a task to check if one of the json configuration files
created by the common tasks already exists. If it does, and
--skip-deploy-identifier has caused an empty DeployIdentifier parameter
value, the tasks will be skipped for that node.
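A hedged sketch of that check; the file path, task names, and variable names are assumptions:

```yaml
# Illustrative tasks (paths and names assumed)
- name: check if a config file from a previous deploy step exists
  stat:
    path: /var/lib/tripleo-config/example-step-config.json
  register: existing_config
- name: common deploy step tasks
  include_tasks: common_deploy_steps.yaml   # assumed file
  # Skip only when the node was already deployed AND
  # --skip-deploy-identifier left DeployIdentifier empty.
  when: not (existing_config.stat.exists and
             (deploy_identifier | default('')) == '')
```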
Change-Id: I711dbb00a9c34dbd96ef179ef41bff281b0001d1
Closes-Bug: #1796924
Modified heat templates to add support for containerization of the
Liquidio compute service. Fixed an issue in the ProviderMappings
in Liquidio heat templates.
Depends-On: Ice2baafae2fb1011e16d83c83b5c85f721f6d679
Change-Id: Id4c754f402091e17a974972408919332aa06cd11
This has been unused for a while, and even its deprecation was scheduled
(although the patch never merged [1]). So, in order to stop folks
getting confused by this, it's being removed.
[1] https://review.openstack.org/#/c/543871/
Change-Id: Iada64874432146ef311682f26af5990469790ed2
This will pull the online data migrations out of the upgrade
maintenance window and let them be performed after the main upgrade
phase while the cloud is already operational.
The online part of the service upgrades can be run using:
openstack overcloud external-upgrade run --tags online_upgrade
or per-service like:
openstack overcloud external-upgrade run --tags online_upgrade_nova
openstack overcloud external-upgrade run --tags online_upgrade_cinder
openstack overcloud external-upgrade run --tags online_upgrade_ironic
Change-Id: I35c8d9985df21b3084fba558687e1f408e5a0878
Closes-Bug: #1793332
Previously the path to the openshift-ansible's prerequisites playbook
was hardcoded to
/usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml. This
commit introduces the `OpenShiftPrerequisitesPlaybook` heat parameter
to make it configurable.
Also add more explicit description for the other playbook path
parameters and update the default path for OpenShiftUpgradePlaybook
that was broken since the move to openshift-ansible 3.10.
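Illustrative usage of the new parameter; the path shown is the previously hardcoded default mentioned above:

```yaml
parameter_defaults:
  OpenShiftPrerequisitesPlaybook: /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
```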
Change-Id: I2260cb8b0cef9650c707d4db917a3281a697912d
Until now, it was loaded from within the container, which doesn't
work with SELinux separation.
Change-Id: I3d63d1df7496d3b8a5883b07e9d40aa21153c086
Related-Bug: 1794550
Currently the ip_vs module is loaded from the keepalived container;
while this works in a non-SELinux-separated env, it doesn't work with
podman.
Change-Id: I71e638bedde3836e05cffab53ad80bfd35313a31
Related-Bug: 1794550