If labels are not specified on a Job, Kubernetes defaults them
to the labels of the underlying Pod template. Helm 3 injects
metadata into all resources [0], including an
`app.kubernetes.io/managed-by: Helm` label. As a result,
Kubernetes no longer sees a Job's labels as empty, so they do
not get defaulted to the underlying Pod template's labels. This
is a problem, since Job labels are depended on by
- Armada pre-upgrade delete hooks
- Armada wait logic configurations
- kubernetes-entrypoint dependencies
This change therefore adds labels to each Job template matching
the underlying Pod template, retaining the same labels that were
present with Helm 2 (see the sketch below).
[0]: https://github.com/helm/helm/pull/7649
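A minimal sketch of the resulting Job template, assuming the
chart uses helm-toolkit's kubernetes_metadata_labels snippet
(the application/component names here are illustrative):

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: placement-ks-user
    labels:
  {{ tuple $envAll "placement" "ks-user" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 4 }}
  spec:
    template:
      metadata:
        # Same labels as on the Job itself, so the Job's labels
        # match what Kubernetes defaulted them to under Helm 2.
        labels:
  {{ tuple $envAll "placement" "ks-user" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 8 }}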
Change-Id: I3b6b25fcc6a1af4d56f3e2b335615074e2f04b6d
Deploying with Helm 3 fails because Helm 3 no longer supports
rbac.authorization.k8s.io/v1beta1. Move RBAC resources to
rbac.authorization.k8s.io/v1, which works with both Helm 2 and
Helm 3 (liujinyuan@inspur.com).
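The change amounts to bumping the apiVersion on RBAC resources;
a generic Role is shown as an illustration (its name and rules
are placeholders, not the chart's actual contents):

  # was: apiVersion: rbac.authorization.k8s.io/v1beta1
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: example-role
  rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["get", "list", "watch"]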
Change-Id: I8e0ceb0c0991fd48b5b6a1b688a5c1b91f58c02e
Some updates to the rgw config, such as zone or zonegroup
changes made during the bootstrap process, require an rgw
restart. Add a restart job which, when enabled, uses
'kubectl rollout restart deployment' to restart rgw.
This is most useful in greenfield scenarios where zones and
zonegroups must be set up right after the rgw service comes up,
which in turn requires restarting the rgw service.
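A minimal sketch of such a restart job (the job name, deployment
name, image, and service account are illustrative, not the
chart's actual values):

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: ceph-rgw-restart
  spec:
    template:
      spec:
        # Illustrative account; it must be allowed to patch the
        # rgw deployment for 'rollout restart' to succeed.
        serviceAccountName: ceph-rgw-restart
        restartPolicy: OnFailure
        containers:
          - name: restart-rgw
            image: bitnami/kubectl:latest
            command:
              - kubectl
              - rollout
              - restart
              - deployment/ceph-rgw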
Change-Id: I6667237e92a8b87a06d2a59c65210c482f3b7302