This PS fixes some incompatibilities of the inherited mariadb config
with the docker-entrypoint.sh script that is now used to perform the
initial mariadb node setup and mariadb-upgrade at startup.
It also adds an x509 requirement for root and audit user
connections.
Change-Id: Ic5ad2e692b64927fc73962fe0cc250a9d682114c
The names of a few configuration variables have changed in version 1.9.
EnableRealIp to EnableRealIP
HttpAccessLogPath to HTTPAccessLogPath
whitelist to allowlist
Whitelist to Allowlist
Additionally, ajp_temp_path is no longer valid.
Change-Id: I2ebb658bd237216c43306dab6cd7f7a1ca6388ac
This PS enables the auto-upgrade feature of the official mariadb docker
entrypoint script.
It also switches the mariadb image to the official one from the
docker.io/mariadb repo and adds a temp volume mount to the
mariadb-server pods created by mariadb-operator.
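For reference, a temp volume mount of this kind usually looks like the
following pod spec fragment (names and mount path here are illustrative,
not copied from the chart):
  volumes:
    - name: tmp
      emptyDir: {}
  containers:
    - name: mariadb
      volumeMounts:
        - name: tmp
          mountPath: /tmp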
Change-Id: Ie3a02e546fd2a56948177b97c009eab35b42776a
This PS adds the ability to limit (throttle) the number of
simultaneously uploaded backups while keeping the logic on the client
side, using flag files on the remote side. The main idea is to be able
to limit the number of simultaneous remote backup upload sessions.
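A sketch of how such a throttle could be exposed through the chart
values; the key names below are hypothetical and only illustrate the
idea (a limit on concurrent upload sessions plus an expiry for stale
remote flag files):
  conf:
    backup:
      remote_backup:
        throttle_backups_enabled: true
        throttle_limit: 2
        throttle_lock_expire_after_sec: 7200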
Change-Id: I5464004d4febfbe20df9cd41ca62ceb9fd6f0c0d
The default rabbitmq image disables metrics collection via the management
api. This is implemented by adding a file named:
/etc/rabbitmq/conf.d/management_agent.disable_metrics_collector.conf
with the contents:
management_agent.disable_metrics_collector = true
The prometheus exporter currently used by osh requires this value to be
false.
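So the chart has to make sure the opposite value ends up in that
conf.d file, i.e. (assuming the file is overridden rather than removed):
  management_agent.disable_metrics_collector = false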
This default was introduced when RabbitMQ added the integrated
prometheus exporter:
https://github.com/docker-library/rabbitmq/issues/419
Change-Id: I9a94f49a7827bb4725ed3fd98404e637bfefa086
This PS removes the mariadb-verify-server sidecar container from the
mariadb-backup cronjob in order to make the backup process more
resilient.
Change-Id: I2517c2de435ead34397ca0483610f511c8035bdf
This PS updates es curator for elasticsearch v8, since curator 5.x
is not compatible with es v8.
Changes are needed in config.yml:
https://github.com/elastic/curator#new-client-configuration
No changes are required for the actions file.
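For reference, the new client configuration is roughly of the
following shape (a sketch only; the hosts and credentials are
placeholders and the linked document is authoritative):
  elasticsearch:
    client:
      hosts:
        - https://elasticsearch-logging:9200
      request_timeout: 60
    other_settings:
      username: curator
      password: changeme
  logging:
    loglevel: INFO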
Change-Id: I6968e22c7ae5f630e1342f47feee0c2c494b767f
For TLS test jobs on Ubuntu Jammy, when we run
dnsmasq on the master node (needed for testing),
we get the error:
"failed to create inotify: Too many open files"
By default the number of inotify instances on Jammy
is 128. We increase this limit to 256.
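A minimal sketch of the corresponding change as an Ansible task (the
actual implementation in the deploy-env playbooks may differ):
  - name: Increase the inotify instance limit
    ansible.posix.sysctl:
      name: fs.inotify.max_user_instances
      value: "256"
      state: present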
Change-Id: I07c8a0f909608b6e44040ffeefc6ab576236c93f
In some cases the deploy-env playbook can fail with an error stating
that registry_namespaces is not defined. This change moves
the initialization of registry_namespaces so that buildset_registry
is not required for it to be set when other conditions are not met.
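A sketch of the intent, assuming a plain set_fact with a default (the
actual task in the role may differ):
  - name: Ensure registry_namespaces is always defined
    ansible.builtin.set_fact:
      registry_namespaces: "{{ registry_namespaces | default([]) }}"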
Change-Id: I160e7d479008fd3afd460382691673b92bd042c9
Some es curator images do not use /usr/bin/curator for the executable. This PS
makes the path configurable via values.yaml.
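An override could then point at the image's actual binary location,
e.g. (the values key shown here is illustrative, not necessarily the
exact one added by this PS):
  conf:
    curator:
      executable: /usr/local/bin/curator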
Change-Id: I640e0f4928683810ef0b4a6d4dbac9bdf865aa2a
When using Rook for managing Ceph clusters we have
to provision a minimal set of assets (keys, endpoints, etc.)
to make Openstack-Helm charts work with these Ceph clusters.
Rook provides CRDs that can be used for managing Ceph assets
like pools/keyrings/buckets etc., but Openstack-Helm cannot
utilize these CRDs. Supporting these CRDs in OSH would
require lots of conditionals in OSH templates, since
we still want OSH to work with the OSH ceph-* charts.
Change-Id: If7fe29052640e48c37b653e13a74d95e360a6d16
This PS makes staggered backups possible by adding anti-affinity rules
to the backup cronjobs. The rules can be applied across several
namespaces to decrease the load on the remote backup destination
server, ensuring that at every moment in time only one backup upload is
in progress.
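A sketch of what such a rule could look like in the cronjob's pod
template. Everything below is illustrative: the label selector and the
topology key are hypothetical, the topology key must be a node label
shared by all nodes for only one backup pod to run cluster-wide, and
the empty namespaceSelector makes the rule apply across namespaces:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - topologyKey: openstack-helm/backup-zone
          namespaceSelector: {}
          labelSelector:
            matchLabels:
              application: backup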
Change-Id: If49791f866a73a08fb98fa0e0b4854042d079c66
This PS adds the mariadb-cluster chart based on mariadb-operator. For
backward compatibility it also adds the mariadb-backup chart and the
prometheus-mysql-exporter chart as separate charts.
Change-Id: I3f652375cce2e3b45e095e08d2e6f4ae73b8d8f0
The PR synchronizes this script with the one
used in the openstack-helm repo.
Let's use the same script in both repos.
The related PR for the openstack-helm repo
is coming.
Change-Id: I5cfaad8ebfd08790ecabb3e8fa480a7bf2bb7e1e
We don't need this for tests and it is better to
keep the test env minimal since the test hardware
is limited.
Change-Id: I0b3f663408c1ef57ad25a4d031b706cb6abc87a9
When using Rook for managing Ceph we can use
Rook CRDs to create S3 buckets and users.
This PR adds a bucket claim template to the
elasticsearch chart. Rook creates a bucket for
a bucket claim and also creates a secret
containing the credentials needed to access this
bucket. So we also add a snippet to expose
these credentials via environment variables to
containers where they are needed.
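A sketch of the two pieces (resource names are illustrative; Rook
stores the generated credentials in a secret named after the claim):
  # 1) the bucket claim added to the chart
  apiVersion: objectbucket.io/v1alpha1
  kind: ObjectBucketClaim
  metadata:
    name: elasticsearch-bucket
  spec:
    generateBucketName: elasticsearch
    storageClassName: rook-ceph-bucket
  # 2) env entries for a container that needs access to the bucket
  - name: AWS_ACCESS_KEY_ID
    valueFrom:
      secretKeyRef:
        name: elasticsearch-bucket
        key: AWS_ACCESS_KEY_ID
  - name: AWS_SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: elasticsearch-bucket
        key: AWS_SECRET_ACCESS_KEY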
Change-Id: Ic5cd35a5c64a914af97d2b3cfec21dbe399c0f14
- In case we deploy Ceph on a multi-node env we have
to prepare the loop devices on all nodes. For this
we moved the loop device setup to the deploy-env
Ansible role (see the sketch after this list).
For simplicity we need the same device on all nodes,
so we create a loop device with a big
minor number (/dev/loop100 by default), assuming
that only the low minor numbers are likely to be busy.
- For test jobs we don't need to use different devices
for OSD data and metadata. There is no
benefit from this for the test environment.
So let's keep it simple and put both OSD data and metadata
on the same device.
- On a multi-node env the Ceph cluster members need
to see each other, so let's use the pod network CIDR.
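A minimal sketch of the loop device preparation (paths and the size
are illustrative; the real role may implement this differently, e.g.
via a systemd unit so the device survives reboots):
  - name: Create a backing file for the Ceph loop device
    ansible.builtin.command: truncate -s 10G /var/lib/ceph-loop.img
    args:
      creates: /var/lib/ceph-loop.img
  - name: Create the loop device node with a high minor number
    ansible.builtin.command: mknod /dev/loop100 b 7 100
    args:
      creates: /dev/loop100
  - name: Attach the backing file to /dev/loop100
    ansible.builtin.shell: losetup /dev/loop100 /var/lib/ceph-loop.img || true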
Change-Id: I493b6c31d97ff2fc4992c6bb1994d0c73320cd7b
The ClusterRole and ClusterRoleBinding definitions for the
ceph-rgw-pool job don't take the namespace into account. This isn't
an issue for deployments that include a single Ceph cluster, but
this change adds the namespace to the names of those resources to
allow the job to be deployed correctly in multiple namespaces.
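For illustration, the resource names now carry the namespace, roughly
like this (the exact template text may differ):
  kind: ClusterRole
  metadata:
    name: {{ .Release.Namespace }}-ceph-rgw-pool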
Change-Id: I98a82331a52702c623941f839d1258088813f70e
The Reef release disallows internal pools from being created by
clients, which means the ceph-client chart is no longer able to
create the .rgw.root pool and configure it. The new ceph-rgw-pool
job deletes and re-creates the ceph-rbd-pool job after ceph-rgw has
been deployed so that job can configure the .rgw.root pool
correctly.
Change-Id: Ic3b9d26de566fe379227a2fe14dc061248e84a4c
It used to configure /etc/hosts in two different places.
The buildset registry record was added while configuring
Containerd and then this record was removed while
configuring Kubernetes.
The PR adds the buildset registry record to the /etc/hosts
template and moves the task to tasks/main.yaml.
Change-Id: I7d1ae6c7d33a33d8ca80b63ef9d69decb283e0a6
The role tried to include a non-existent file
which was forgotten when we moved the role to this repo.
This inclusion is only relevant for cases when we
consume images from a buildset registry.
Change-Id: I1510edf7bdc78f9c61f7722e2c7848e152edf892