When using Rook to manage Ceph we can use
Rook CRDs to create S3 buckets and users.
This PR adds a bucket claim template to the
elasticsearch chart. Rook creates a bucket for
a bucket claim and also creates a secret
containing the credentials needed to access this
bucket, so we also add a snippet that exposes
these credentials via environment variables to
the containers that need them.
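For illustration, the rendered bucket claim and the credentials
snippet look roughly like this (resource names are illustrative,
not the actual template output):

  apiVersion: objectbucket.io/v1alpha1
  kind: ObjectBucketClaim
  metadata:
    name: elasticsearch-bucket
  spec:
    generateBucketName: elasticsearch
    storageClassName: rook-ceph-bucket
  ---
  # expose the Rook-generated credentials to a container
  env:
    - name: AWS_ACCESS_KEY_ID
      valueFrom:
        secretKeyRef:
          name: elasticsearch-bucket
          key: AWS_ACCESS_KEY_ID
    - name: AWS_SECRET_ACCESS_KEY
      valueFrom:
        secretKeyRef:
          name: elasticsearch-bucket
          key: AWS_SECRET_ACCESS_KEY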
Change-Id: Ic5cd35a5c64a914af97d2b3cfec21dbe399c0f14
- When we deploy Ceph on a multi-node environment we have
  to prepare the loop devices on all nodes, so the loop
  device setup is moved to the deploy-env Ansible role
  (see the sketch after this list).
  For simplicity we need the same device name on all nodes,
  so we create a loop device with a high
  minor number (/dev/loop100 by default), assuming
  that only low minor numbers are likely to be busy.
- For test jobs we don't need to use different devices
  for OSD data and metadata. There is no benefit to this
  in the test environment, so let's keep it simple and
  put both OSD data and metadata on the same device.
- On a multi-node environment the Ceph cluster members need
  to see each other, so let's use the pod network CIDR.
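A rough sketch of the kind of tasks now handled by the deploy-env
role (the backing file path and size are illustrative, not the
actual role content):

  - name: Create a backing file for the Ceph loop device
    command: truncate -s 10G /opt/ceph-loop.img
    args:
      creates: /opt/ceph-loop.img
  - name: Create the /dev/loop100 node if it does not exist
    command: mknod /dev/loop100 b 7 100
    args:
      creates: /dev/loop100
  - name: Attach the loop device to the backing file
    command: losetup /dev/loop100 /opt/ceph-loop.img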
Change-Id: I493b6c31d97ff2fc4992c6bb1994d0c73320cd7b
The ClusterRole and ClusterRoleBinding definitions for the
ceph-rgw-pool job don't take the namespace into account. This isn't
an issue for deployments that include a single Ceph cluster, but
this change adds the namespace to the names of those resources to
allow the job to be deployed correctly in multiple namespaces.
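With this change the resource names are rendered roughly as follows
(the exact template wording may differ):

  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: {{ .Release.Namespace }}-ceph-rgw-pool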
Change-Id: I98a82331a52702c623941f839d1258088813f70e
The Reef release disallows internal pools from being created by
clients, which means the ceph-client chart is no longer able to
create and configure the .rgw.root pool. The new ceph-rgw-pool
job deletes and re-creates the ceph-rbd-pool job after ceph-rgw
has been deployed so that the re-created job can configure the
.rgw.root pool correctly.
Change-Id: Ic3b9d26de566fe379227a2fe14dc061248e84a4c
The role used to configure /etc/hosts in two different places.
The buildset registry record was added while configuring
Containerd and then removed again while configuring Kubernetes.
This PR adds the buildset registry record to the /etc/hosts
template and moves the task to tasks/main.yaml.
Change-Id: I7d1ae6c7d33a33d8ca80b63ef9d69decb283e0a6
The role tried to include a non-existent file
that was forgotten when we moved the role to this repo.
This inclusion is only relevant when we
consume images from a buildset registry.
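Conceptually, such an inclusion is only needed when a buildset
registry is in play, e.g. (sketch only; the file name is
illustrative):

  - name: Configure access to the buildset registry
    include_tasks: buildset-registry.yaml
    when: buildset_registry is defined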
Change-Id: I1510edf7bdc78f9c61f7722e2c7848e152edf892
The motivation is to reduce the code base and get rid
of unnecessary duplication. This PR moves the bandit
tasks from the osh-infra-bandit.yaml playbook
to the osh-bandit role so we can use this role for the
same job in OSH.
Change-Id: I9489a8c414e6679186e6c399243a7c0838df812a
Roll back Rook in the openstack-support-rook Zuul job to the 1.12.4
release to work around a problem with ceph-rook-exporter resource
conflicts while the issue is investigated further.
Change-Id: Idabc1814e9b8665c0ce63e2efd5ad94bf193f97a
This PS mounts an extra 80 GB volume, if available, at
/opt/ext_vol. It also alters the Docker and containerd configs to
move their root directories to that extra volume. This helps Zuul
gates succeed when a node with only a 40 GB volume is assigned to
a gate job.
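A sketch of the kind of config changes involved (paths are
illustrative):

  - name: Move the Docker data root to the extra volume
    copy:
      dest: /etc/docker/daemon.json
      content: |
        { "data-root": "/opt/ext_vol/docker" }
  - name: Move the containerd root to the extra volume
    lineinfile:
      path: /etc/containerd/config.toml
      regexp: '^root ='
      line: 'root = "/opt/ext_vol/containerd"'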
Change-Id: I1c91b13c233bac5ebfe6e3cb16d4288df2c2fe80
This change adds an openstack-support-rook zuul job to test
deploying Ceph using the upstream Rook helm charts found in the
https://charts.rook.io/release repository. Minor changes to the
storage keyring manager job and the mon discovery service in the
ceph-mon chart are also included to allow the ceph-mon chart to be
used to generate auth keys and deploy the mon discovery service
necessary for OpenStack.
Change-Id: Iee4174dc54b6a7aac6520c448a54adb1325cccab
To make the jobs easier to maintain, all experimental
jobs (those which are not run in the check and gate pipelines)
are moved to a separate file. They will be revised later
to use the same deploy-env role.
Also, since many charts use OpenStack images for testing, this
PR adds 2023.1 Ubuntu Focal overrides for all these charts.
Change-Id: I4a6fb998c7eb1026b3c05ddd69f62531137b6e51
Update the liveness probe script to accept pods that are either
sending or receiving an SST, and avoid killing them.
Change-Id: I4ad95c45a7ab7e5e1cec2b4696671b6055cc10e7
Add an option to define an extra command (or several commands via a
multiline YAML value) that will run at the end of the poststart
script. Specific deployments can benefit from extra cleanup/checks.
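For example, a values override could look roughly like this (the
key path shown here is hypothetical; the actual name is defined by
the chart):

  conf:
    # hypothetical key; check the chart values for the actual name
    poststart_extra_command: |
      echo "running deployment-specific post-start checks"
      rm -f /tmp/poststart.lock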
Change-Id: I7c26292dc65dc0bfd4374b1f5577696fca89140f
This role works for both single-node and multi-node
inventories. The role installs all necessary prerequisites
and deploys K8s with Containerd as the container runtime.
The idea is to use this role to deploy
all single-node/multi-node environments for all test jobs.
This PR wraps the playbooks that we currently use for the
multinode compute-kit tests into a role.
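A minimal multi-node inventory consumed by the role might look
like this (group and host names are illustrative):

  all:
    children:
      primary:
        hosts:
          node-1:
      nodes:
        hosts:
          node-2:
          node-3: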
Change-Id: I41bbe80d806e614a155e6775c4505a4d81a086e8
There exists a case for bluestore OSDs where the OSD init process
detects that an OSD has already been initialized in the deployed
Ceph cluster, but the cluster osdmap does not have an entry for it.
This change corrects this case to zap and reinitialize the disk
when OSD_FORCE_REPAIR is set to 1. It also clarifies a log message
in this case when OSD_FORCE_REPAIR is 0 to state that a manual
repair is necessary.
Change-Id: I2f00fa655bf5359dcc80c36d6c2ce33e3ce33166
Make selenium v4 syntax optional using the same pattern as
https://review.opendev.org/c/openstack/openstack-helm-infra/+/892708
See:
https://review.opendev.org/c/openstack/openstack-helm-infra/+/883894/5/grafana/templates/bin/_selenium-tests.py.tpl
Change-Id: I744b721750c474db9fecbd46280d30cfb8347a6f
This patchset allows enabling vencrypt for VNC, based on a
downstream patchset. [1]
Primary differences:
- script to generate pod-specific certs has been moved under
values.conf.vencrypt.cert_init_sh to allow for it to be
overridden if necessary
- leaves the creation of a (sub)issuer for vencrypt
  outside the scope of this (and the nova) chart
- issuer to use to sign these certs configurable under:
values.conf.vencrypt.issuer.kind
values.conf.vencrypt.issuer.name
- added manifests.role_cert_manager to control creation of
  the roles needed to create/update certs (see the values
  sketch below)
1. https://github.com/vexxhost/atmosphere/pull/483
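A values sketch of the knobs introduced by this change (the issuer
name is only an example; the issuer itself is created outside this
chart):

  manifests:
    role_cert_manager: true
  conf:
    vencrypt:
      issuer:
        kind: Issuer
        name: vencrypt-issuer
      cert_init_sh: |
        # override the pod-specific cert generation script here
        # if needed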
Change-Id: I955015874fed2b24570251c4cad01412bbab6045
This PS replaces the deprecated kubernetes.io/ingress.class
annotation with the spec.ingressClassName field, which is a
reference to an IngressClass resource that contains additional
Ingress configuration, including the name of the Ingress controller.
https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#deprecating-the-ingress-class-annotation
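For example, an Ingress previously annotated with the deprecated
class annotation now carries the class in its spec:

  # Before (deprecated):
  metadata:
    annotations:
      kubernetes.io/ingress.class: "nginx"
  # After:
  spec:
    ingressClassName: nginx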
Change-Id: I9953d966b4f9f7b1692b39f36f434f5055317025
Co-authored-by: Sergiy Markin <smarkin@mirantis.com>
Co-authored-by: Leointii Istomin <listomin@mirantis.com>
Signed-off-by: Anselme, Schubert (sa246v) <sa246v@att.com>
For selenium v3 the proper syntax is
link = browser.find_element_by_link_text(link_name)
not
link = browser.find_element_by_text_link(link_name)
Change-Id: I9f6062bae5caaa840208e90e8f29b63bf52d113b
This change converts the readiness and liveness probes in the Ceph
RGW chart to use the functions from the Helm toolkit rather than
having hard-coded probe definitions. This allows probe configs to
be overridden in values.yaml without rebuilding charts.
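Probe parameters can then be overridden roughly as follows
(assuming the usual helm-toolkit probe layout; the component and
container keys here are illustrative):

  pod:
    probes:
      rgw:
        ceph-rgw:
          readiness:
            enabled: true
            params:
              initialDelaySeconds: 30
              periodSeconds: 60
          liveness:
            enabled: true
            params:
              initialDelaySeconds: 120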
Change-Id: Ia09d06746ee06f96f61a479b57a110c94e77c615