This change primarily converts the api_objects YAML structure
to a map, which allows additional objects to be added via values
overrides (arrays/lists cannot be merged this way).
Also, in the previous change, some scripts in HTK were modified, while
others were copied over to the Elasticsearch chart. To simplify the chart's
structure, this change also moves the create_s3_bucket script to Elasticsearch
and reverts the changes in HTK.
Those HTK scripts are no longer referenced by osh charts and could be candidates
for removal if that chart needed to be pruned.
Change-Id: I7d8d7ef28223948437450dcb64bd03f2975ad54d
This change updates how the Elasticsearch chart handles
S3 configuration and snapshot repository registration.
This allows for
- Multiple snapshot destinations to be configured
- Repositories to use a specific placement target
- Management of multiple account credentials
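A rough sketch of what such a values layout could look like (all key names below are assumptions for illustration, not the chart's actual schema):

```yaml
conf:
  elasticsearch:
    snapshots:
      backup_primary:
        bucket: es-backup-one
        placement_target: fast-placement
      backup_archive:
        bucket: es-backup-archive
        placement_target: cold-placement
```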
Change-Id: I12de918adc5964a4ded46f6f6cd3fa94c7235112
This is an update to address a behavior change introduced with
0ae8f4d21ac2a091f1612e50f4786da5065d4398.
Job labels, if empty/unspecified, are taken from the template. If (any)
labels are specified on the job, we do not get this behavior.
Specifically, if we *apply*:
  apiVersion: batch/v1
  kind: Job
  metadata:
    # no "labels:" here
    name: placement-db-init
    namespace: openstack
  spec:
    template:
      metadata:
        labels:
          application: placement
          component: db-init
          release_group: placement
      spec:
        containers:
        # do stuffs
and then *query*, we see:
  apiVersion: batch/v1
  kind: Job
  metadata:
    # k8s did this for us!
    labels:
      application: placement
      component: db-init
      job-name: placement-db-init
      release_group: placement
    name: placement-db-init
    namespace: openstack
  spec:
    template:
      metadata:
        labels:
          application: placement
          component: db-init
          release_group: placement
      spec:
        containers:
        # do stuffs
The aforementioned change causes objects we apply and query to look
like:
  apiVersion: batch/v1
  kind: Job
  metadata:
    # k8s did this for us!
    labels:
      application: placement
      # nothing else!
    name: placement-db-init
    namespace: openstack
  spec:
    template:
      metadata:
        labels:
          application: placement
          component: db-init
          release_group: placement
      spec:
        containers:
        # do stuffs
Current users rely on this behavior: deployment systems use job
labels for synchronization, with those labels specified only in the
template and propagated to the job.
This change preserves functionality added recently and restores the
previous behavior.
The explicit "application" label is no longer needed as the
helm-toolkit.snippets.kubernetes_metadata_labels macro provides it.
Change-Id: I1582d008217b8848103579b826fae065c538aaf0
This patchset will add the capability to configure the
Ceph RBD pool job to leave failed pods behind for debugging
purposes, if it is desired. Default is to not leave them
behind, which is the current behavior.
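As a sketch, the toggle might be exposed in values like this (the key name here is hypothetical; check the chart's values.yaml for the actual name):

```yaml
jobs:
  rbd_pool:
    # hypothetical toggle: keep failed pods around for debugging
    leave_failed_pods: false
```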
Change-Id: Ife63b73f89996d59b75ec617129818068b060d1c
New Ceph clients expect the ceph_mon config in the format shown below,
so this PS updates the configmap accordingly.
mon_host = [v1:172.29.1.139:6789/0,v2:172.29.1.139:3300/0],
[v1:172.29.1.140:6789/0,v2:172.29.1.140:3300/0],
[v1:172.29.1.145:6789/0,v2:172.29.1.145:3300/0]
Change-Id: I6b96bf5bd4fb29bf1e004fc2ce8514979da706ed
Directory-based OSDs are failing to deploy because 'python' has
been replaced with 'python3' in the image. This change updates the
python commands to use python3 instead.
There is also a dependency on forego, which has been removed from
the image. This change also modifies the deployment so that it
doesn't depend on forego.
Ownership of the OSD keyring file has also been changed so that it
is owned by the 'ceph' user, and the ceph-osd process now uses
--setuser and --setgroup to run as the same user.
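The ownership and privilege changes can be sketched as follows (keyring path and OSD id are illustrative):

```shell
# make the OSD keyring owned by the ceph user (path illustrative)
chown ceph:ceph /var/lib/ceph/osd/ceph-0/keyring

# start the OSD dropping privileges to the same user/group
ceph-osd -i 0 --setuser ceph --setgroup ceph
```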
Change-Id: If825df283bca0b9f54406084ac4b8f958a69eab7
This ps adds a lookback duration of 5m to the systemd-monitor to avoid
searching the journal log indefinitely and causing the alert to stick around.
Change-Id: Ia32f043c0c7484d0bb92cfc4b68b506eae8e9d72
v1.2.0 of cert-manager now supports overriding the default value
of ingress certificate expiry via annotations. This PS adds the
required annotation.
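The annotation in question looks roughly like this (duration values illustrative):

```yaml
metadata:
  annotations:
    # cert-manager ingress-shim annotations controlling certificate lifetime
    cert-manager.io/duration: "2160h"      # 90 days
    cert-manager.io/renew-before: "360h"   # 15 days
```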
Change-Id: Ic81e47f24d4e488eb4fc09688c36a6cea324e9e2
For security reasons, strict access permissions are applied to
the mariadb data directory /var/lib/mysql.
Change-Id: I9e55a7e564d66874a35a54a72817fa1237a162e9
This patch resolves a helm test problem where the test was failing
if it found a PG state of "activating". It could also potentially
find a number of other states, like premerge or unknown, that
could also fail the test. Note that if these transient PG states are
found for more than 3 minutes, the helm test fails.
Change-Id: I071bcfedf7e4079e085c2f72d2fbab3adc0b027c
SLM policies have been supported since chart v0.1.3, but we still
run Curator in the gate, and its manifest toggles still default to
true.
Change-Id: I5d8a29ae78fa4f93cb71bdf6c7d1ab3254c31325
This removes the functionality to perform envsubst in the feature
gate script to prevent users with specific env variables set from
running into unexpected errors. This feature will be revisited in
the future to be made more robust.
Signed-off-by: Tin Lam <tin@irrational.io>
Change-Id: I6dcfd4dad138573294a9222e4e7af80c9bff4ac0
This patchset enables the TLS path between Prometheus and Grafana.
Grafana pulls data from Prometheus. As such, Prometheus is the
server and Grafana is the client for the TLS handshake.
Change-Id: I50cb6f59472155415cff16a81ebaebd192064d65
This patchset enables the TLS path for Prometheus when it acts as
a server. Note that TLS is not terminated directly at Prometheus;
it is terminated at an Apache proxy, which in turn routes requests
to Prometheus.
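A minimal sketch of such a terminating proxy (ports, paths, and certificate locations illustrative):

```apache
<VirtualHost *:443>
    SSLEngine on
    SSLCertificateFile    /etc/pki/tls/certs/prometheus.crt
    SSLCertificateKeyFile /etc/pki/tls/private/prometheus.key
    # forward decrypted traffic to the local Prometheus listener
    ProxyPass        / http://localhost:9090/
    ProxyPassReverse / http://localhost:9090/
</VirtualHost>
```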
Change-Id: I0db366b6237a34da2e9a31345d96ae8f63815fa2
The flag storage.tsdb.retention is deprecated and generates warnings
on startup; storage.tsdb.retention.time is the new flag.
storage.tsdb.wal-compression is set as the default in v2.20
and above and is no longer needed.
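The flag change amounts to the following (retention value illustrative):

```shell
# before: deprecated flag, warns at startup
prometheus --storage.tsdb.retention=7d --storage.tsdb.wal-compression

# after: v2.20+ compresses the WAL by default, so only the new flag is needed
prometheus --storage.tsdb.retention.time=7d
```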
Change-Id: I66f861a354a3cdde69a712ca5fd8a1d1a1eca60a
Environment variable MYSQL_HISTFILE is added to mariadb container
to disable storing client mysql history to ~/.mysql_history file.
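A common way to do this is to point the history file at /dev/null in the container spec (a sketch; the chart's exact wiring may differ):

```yaml
env:
  - name: MYSQL_HISTFILE
    value: /dev/null
```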
Change-Id: Ie95bc1f830fbf34d30c73de07513299115d8e8c5
When autoscaling is disabled after pools are created, there is an
opportunity for some autoscaling to take place before autoscaling
is disabled. This change checks to see if autoscaling needs to be
disabled before creating pools, then checks to see if it needs to
be enabled after creating pools. This ensures that autoscaling
won't happen when autoscaler is disabled and autoscaling won't
start prematurely as pools are being created when it is enabled.
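The resulting ordering can be sketched with plain Ceph commands (pool name and pg_num illustrative):

```shell
# 1. disable autoscaling up front so new pools are never autoscaled
ceph config set global osd_pool_default_pg_autoscale_mode off

# 2. create the pools
ceph osd pool create mypool 8

# 3. only if autoscaling is desired, turn it on after the pools exist
ceph config set global osd_pool_default_pg_autoscale_mode on
```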
Change-Id: I8803b799b51735ecd3a4878d62be45ec50bbbe19
fsGroup is not supported inside the container securityContext,
only inside the pod. This drops a configuration that is not
valid and makes things deployable.
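For reference, fsGroup is only accepted in the pod-level securityContext (values illustrative):

```yaml
spec:
  # pod-level securityContext: fsGroup is valid here
  securityContext:
    fsGroup: 1000
  containers:
    - name: example
      # container-level securityContext: fsGroup is NOT a valid field here
      securityContext:
        runAsUser: 1000
```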
Change-Id: I956a1de107768c3fadc704722db83eb661cd25d2
The autoscaler was introduced in the Nautilus release. This
change only sets the pg_num value for a pool if the autoscaler
is disabled or the Ceph release is earlier than Nautilus.
When pools are created with the autoscaler enabled, a pg_num_min
value specifies the minimum value of pg_num that the autoscaler
will target. That default was recently changed from 8 to 32
which severely limits the number of pools in a small cluster per
https://github.com/rook/rook/issues/5091. This change overrides
the default pg_num_min value of 32 with a value of 8 (matching
the default pg_num value of 8) using the optional --pg-num-min
<value> argument at pool creation and pg_num_min value for
existing pools.
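The two cases can be sketched as follows (pool name and pg_num illustrative):

```shell
# at creation time: pass the optional minimum alongside pg_num
ceph osd pool create mypool 8 --pg-num-min 8

# for pools that already exist
ceph osd pool set mypool pg_num_min 8
```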
Change-Id: Ie08fb367ec8b1803fcc6e8cd22dc8da43c90e5c4
Challenge:
Currently, remote_ks_admin and remote_rgw_user are used as the user
labels for the backup target OpenStack cloud.
When the backup user doesn't exist, we can enable the job_ks_user
manifest.
But job_ks_user uses .Values.secrets.identity.admin and mariadb,
while secret-rgw and cron-job-backup-mariadb use .Values.secrets.
identity.remote_ks_admin and remote_rgw_user.
This requires the same values to be used for admin and remote_ks_admin,
and for mariadb and remote_rgw_user, which breaks values consistency.
Suggestion:
Two kinds of backup are currently provided: PVC and Swift.
The "remote_" prefix refers to the Swift backup.
In fact, the mariadb chart has no reason to access Keystone except
for the Swift backup, so we can remove the remote_xx_* prefix without
causing confusion.
Change-Id: Ib82120611659bd36bae35f2e90054642fb8ee31f