42d6c2321d
There is currently an issue with deploying single-pod MySQL clusters: restarting or killing the pod results in a CrashLoopBackOff. The MySQL data is lost, and the start script (believing the cluster was previously alive because of the grastate configmap) tries to restore the cluster instead of bootstrapping it. As a result, if the MySQL pod is killed or restarted in CI, all MySQL data is lost, the cluster cannot recover, and the environment is left broken.

When volume.use_local_path_for_single_pod.enabled is set to true, which we will apply to single-node/single-pod testing, this patch deploys a local volume for MySQL at the location specified under volume.use_local_path_for_single_pod.host_path. The data stays intact across a pod restart, so the pod can read it again and recover on its own.

When it is false, which is the default for non-CI deployments, nothing changes and an emptyDir is used. That data WILL be lost on restart, so for production purposes it is advised to use volumes instead, by setting Values.volume.enabled to true.

task: 28729

Change-Id: I6ec0bd1087eb06b92ced7dc56ff5b6a156aad433
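As a rough sketch of how this could be enabled for single-node/single-pod CI runs, the values override below uses the keys named in the commit message; the host path shown is only an illustrative example, and the exact layout of values.yaml is assumed rather than taken from the chart:

    # Values override for single-node / single-pod CI testing (sketch only).
    # Key names come from the commit message; the host path is an example,
    # not the chart's default.
    volume:
      enabled: false                          # assumption: no PVC in single-pod CI
      use_local_path_for_single_pod:
        enabled: true                         # keep data on the node across pod restarts
        host_path: /var/lib/mysql-single-pod  # example location on the node

For production, the commit instead recommends leaving use_local_path_for_single_pod.enabled at false and setting volume.enabled to true so a proper volume backs the data.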
bin
etc
monitoring/prometheus
secrets
configmap-bin.yaml
configmap-etc.yaml
configmap-services-tcp.yaml
cron-job-backup-mariadb.yaml
deployment-error.yaml
deployment-ingress.yaml
job-image-repo-sync.yaml
mariadb-backup-pvc.yaml
network_policy.yaml
pdb-mariadb.yaml
pod-test.yaml
secret-dbadmin-password.yaml
secret-sst-password.yaml
secrets-etc.yaml
service-discovery.yaml
service-error.yaml
service-ingress.yaml
service.yaml
statefulset.yaml
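To illustrate the behaviour described in the commit message, the fragment below sketches how statefulset.yaml could switch the MySQL data volume between a hostPath and an emptyDir. This is an assumed shape, not the chart's actual template, and the PVC case (Values.volume.enabled set to true) is omitted:

    {{- /* Sketch only: illustrates the conditional described in the commit
           message, not the chart's real statefulset.yaml. */ -}}
    volumes:
      - name: mysql-data
    {{- if .Values.volume.use_local_path_for_single_pod.enabled }}
        # Local path on the node: data survives pod restarts in single-pod CI.
        hostPath:
          path: {{ .Values.volume.use_local_path_for_single_pod.host_path }}
          type: DirectoryOrCreate
    {{- else }}
        # Default for non-CI when no volume is configured: data is lost on restart.
        emptyDir: {}
    {{- end }}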