This updates the kubernetes-entrypoint image reference to consume
the publicly available kubernetes-entrypoint image that is built
and maintained under the airshipit namespace, as the stackanetes
image is no longer actively maintained.
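For reference, the image can be overridden per chart at deploy time;
a hypothetical override (the tag key and version are examples, not
necessarily the charts' exact values):

    helm upgrade --install ceph-client ./ceph-client \
      --set images.tags.dep_check=quay.io/airshipit/kubernetes-entrypoint:v1.0.0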
Change-Id: I5bfdc156ae228ab16da57569ac6b05a9a125cb6a
Signed-off-by: Steve Wilkerson <sw5822@att.com>
- Move the cron manifests to the ceph-client chart
- Keep the script that actually does the work in the ceph-osd chart
- With this PS, the ceph-defragosds cronjob is started after the
  ceph-client chart gets deployed. The cronjob execs into a running
  OSD pod and executes the script, as sketched below.
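A minimal sketch of what the cronjob does (the namespace, label
selector, and script path are illustrative assumptions, not the
chart's exact values):

    # Pick one running OSD pod and run the defrag script inside it.
    POD=$(kubectl get pods -n ceph -l application=ceph,component=osd \
      -o jsonpath='{.items[0].metadata.name}')
    kubectl exec -n ceph "${POD}" -- sh /tmp/utils-defragOSDs.sh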
Change-Id: I6e7f7b32572308345963728f2f884c1514ca122d
This updates the logic to check for incomplete PGs in the Ceph
cluster and proceed if there are no incomplete/inactive PGs, rather
than waiting for a fully healthy Ceph cluster.
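A rough sketch of the check (the real script's parsing may differ):

    # Proceed only when no PGs are reported incomplete or inactive.
    if ! ceph health detail | grep -E 'incomplete|inactive'; then
      echo "No incomplete/inactive PGs; not waiting for HEALTH_OK"
    fi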
Change-Id: I026d6cc378053e805680c31d75fdfb40bbb636f5
This adjusts the helm test logic to let the deployment proceed if
at least 80% of the OSDs in the cluster are up and running.
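Roughly, the check can be expressed as follows (a sketch; JSON field
names can vary slightly between Ceph releases):

    # Pass the test when at least 80% of OSDs are up.
    NUM_OSDS=$(ceph osd stat -f json | jq '.num_osds')
    NUM_UP=$(ceph osd stat -f json | jq '.num_up_osds')
    if [ $((NUM_UP * 100)) -ge $((NUM_OSDS * 80)) ]; then
      echo "OK: ${NUM_UP}/${NUM_OSDS} OSDs up (>= 80%)"
    fi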
Change-Id: I128266fd374426f75928332690e275b7f0175318
Occasionally the default config can result in attempts
to bind to IPv6 which fail - so we explicitly set the
host to IPv4.
Change-Id: I3c01ed0ef7c84cf779d88386c14f7c7bd2003310
Signed-off-by: Pete Birley <pete@port.direct>
This PS updates the start script to use `config set`, rather
than `config-key set`, which has been deprecated in Mimic.
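For example (the option shown is only illustrative):

    # Deprecated in Mimic:
    ceph config-key set <key> <value>
    # Replacement:
    ceph config set <who> <option> <value>
    # e.g.
    ceph config set global osd_pool_default_size 3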
Change-Id: I97d0c4385b016d73aa362c0fc293d235b532810c
Signed-off-by: Pete Birley <pete@port.direct>
This patch set implements pool quotas on each pool in the Ceph
cluster by obtaining the total capacity of the cluster in bytes,
multiplying that by the defined percentage of total data expected
to reside in each pool and by the cluster quota, and setting a
byte quota on each pool that is equal to its expected percentage
of the total cluster quota.
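A sketch of the calculation (the pool name and percentages are
examples only):

    # quota_bytes = total_capacity * percent_total_data * cluster_quota
    TOTAL_BYTES=$(ceph df -f json | jq '.stats.total_bytes')
    # A pool expected to hold 30% of the data, with an 85% cluster quota:
    QUOTA=$(awk -v t="${TOTAL_BYTES}" 'BEGIN {printf "%d", t * 0.30 * 0.85}')
    ceph osd pool set-quota rbd max_bytes ${QUOTA}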
Change-Id: I1686822a74c984e99e9347f55b98219c47decec1
This PS uses a HelmToolKit function to add upgrade strategy
parameters to the Ceph components.
Change-Id: I54e71d2a52bd639b3e93fc899c1bf2cd075b5396
This updates the test pod dependencies, since the test pod was
getting started as soon as the mgr service was available, while the
mgr pods were still in their init state waiting for the rbd-pool job.
Change-Id: Iaf9af3ffcf1f4940c1b661a853df0ec4edd99d39
This updates the logic for the pool min_size parameter, as it was
not being changed when replication changed from its initial value.
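A minimal sketch of the idea (the formula and pool name are
illustrative; the chart's exact logic may differ):

    # Recompute min_size from the current replica count, e.g. size - 1 (>= 1).
    SIZE=$(ceph osd pool get rbd size -f json | jq '.size')
    MIN_SIZE=$(( SIZE > 1 ? SIZE - 1 : 1 ))
    ceph osd pool set rbd min_size ${MIN_SIZE}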
Change-Id: I30f99aaf92c3dc83afce10534b1d2ac9402b7fa7
- Update the ceph-client chart to:
  1) Enable the ceph-client helm test by default. Update the enabler
     key in values.yaml to follow the pattern used in other charts
  2) Add the needed dependencies for the ceph-client helm tests
  3) Update the helm test script to reduce output and improve
     error messages
  4) Remove the unwanted ENV variables SPECS and EXPECTED_POOLMINSIZE
- Update the gate scripts to run the helm test command (see the
  example below)
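A hypothetical gate invocation (release name and timeout are
examples; Helm v2 syntax):

    helm test ceph-client --timeout 600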
Change-Id: I6a0e4f5107e49dac081ac2037bcc0f9c0864793f
This PS exposes the anti-affinity weight value, including a
default, that will be consumed by the updated HTK function.
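A hypothetical override of the exposed value (the key path and
default shown are assumptions):

    helm upgrade --install ceph-client ./ceph-client \
      --set pod.affinity.anti.weight.default=10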
Change-Id: Id8eb303674764ef8b0664f62040723aaf77e0a54
This PS adds the security context macros to the ceph-client chart,
and moves the default to read-only-rootfs for all containers.
Change-Id: I2fe03f31cc59e1cda2bf0396ae6e3aca5c440a16
Signed-off-by: Pete Birley <pete@port.direct>
This PS updates the ceph charts to make /etc/ceph an emptyDir
uniformly across all charts, both ensuring no default config is loaded,
and also permitting read-only filesystems to back the containers.
Additionally /run is uniformly applied across all long-running pods
as a memory-backed emptyDir.
Change-Id: I00d1b15758b7eb4476fb950ddcb38db9a5149ad0
Signed-off-by: Pete Birley <pete@port.direct>
This PS adds emptyDirs backing the /tmp directory in pods, which
is required in most cases for full operation when using a read only
filesystem backing the container.
Additionally some yaml indent issues are resolved.
Change-Id: I8b7f1614da059783254aa6efc09facf23fca3cad
Signed-off-by: Pete Birley <pete@port.direct>
This adds the release-annotation to the pod spec for the charts in
openstack-helm-infra. This also adds missing configmap annotations
to charts in openstack-helm-infra.
Change-Id: Ie23f0c16a7a21d3929e98928db2bbcef69ae6490
- Move the cronjob from ceph-mon to ceph-client
- Add the ceph-rbd-pool job as a dependency for the cronjob
- Set the checkPGs manifest to true so it will always run
  in the gate (see the override sketched below)
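A hypothetical override enabling the manifest (the key name is an
assumption, not confirmed against the chart):

    helm upgrade --install ceph-client ./ceph-client \
      --set manifests.cronjob_checkPGs=true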
Co-Authored-By: Chinasubbareddy Mallavarapu <cr3938@att.com>
Co-Authored-By: Renis Makadia <renis.makadia@att.com>
Change-Id: I9855d8d22265e78c7e2f5fa7ece69c9ff532ecb2
Enable the iostat mgr module for Ceph. This module shows the
current throughput and IOPS on a Ceph cluster.
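Once enabled, the stats can be watched directly:

    ceph mgr module enable iostat
    ceph iostat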
Change-Id: I2fe5b47401c15e349a49f345bacd99da39889373
This PS updates the default image in the chart to the latest OSH image.
Change-Id: Ib8d2a72ad48049fe02560dc4405f0088890b6f64
Signed-off-by: Pete Birley <pete@port.direct>
Change the release of Ceph from 12.2.3 (Luminous) to the latest 13.2.2
(Mimic). Additionally, use supported RHEL/CentOS images rather than
Ubuntu images, which are now considered deprecated by Red Hat.
- Uplift all Ceph images to the latest 13.2.2 ceph-container images.
- RadosGW by default will now use the Beast backend.
- RadosGW has relaxed settings enabled for S3 naming conventions.
- Increased RadosGW resource limits due to backend change.
- All Luminous specific tests now test for both Luminous/Mimic.
- Gate scripts will remove all non-required ceph packages. This is
  required to avoid conflicting with the uid/gid that the Red Hat
  container uses.
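For reference, the Beast frontend is selected via rgw_frontends;
e.g. (the daemon name and port are examples only):

    ceph config set client.rgw.default rgw_frontends "beast port=8080"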
Change-Id: I9c00f3baa6c427e6223596ade95c65c331e763fb
This PS allows multiple Ceph test pods to be present in a cluster
with more than one Ceph deployment.
Change-Id: Ib8be8fc58e3a374dfcf6845988668433cf43655a
Signed-off-by: Pete Birley <pete@port.direct>
Add helper scripts that are called by a pod to switch
Ceph from DNS to IPs. This pod will loop every 5 minutes
to catch cases where the DNS might be unavailable.
On a pod's service start, switch ceph.conf to using IPs rather
than DNS.
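A minimal sketch of the switch (the service name and file path are
illustrative):

    # Resolve the monitor service and pin ceph.conf to its IP.
    MON_IP=$(getent hosts ceph-mon.ceph.svc.cluster.local | awk '{print $1; exit}')
    if [ -n "${MON_IP}" ]; then
      sed -i "s/^mon_host.*/mon_host = ${MON_IP}/" /etc/ceph/ceph.conf
    fi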
Change-Id: I402199f55792ca9f5f28e436ff44d4a6ac9b7cf9
Largely inspired and taken from Kranthi's PS.
- Add support for creating custom CRUSH rules based off of failure
domains and device classes (ssd & hdd)
- Add basic logic around the PG calculator to autodetect the number of
  OSDs globally and per device class (required when using custom CRUSH
  rules that specify device classes)
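The equivalent Ceph commands, for reference (rule names are
examples):

    # Device-class-aware replicated rules:
    ceph osd crush rule create-replicated rule_ssd default host ssd
    ceph osd crush rule create-replicated rule_hdd default host hdd
    # Count the OSDs in a device class for the PG calculation:
    ceph osd crush class ls-osd ssd | wc -l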
Change-Id: I13a6f5eb21494746c2b77e340e8d0dcb0d81a591
Add support for a rack level CRUSH map. Rack level CRUSH support is
enabled by using the "rack_replicated_rule" CRUSH rule.
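For reference, the equivalent manual setup looks roughly like this
(bucket, host, and pool names are examples):

    ceph osd crush add-bucket rack1 rack
    ceph osd crush move rack1 root=default
    ceph osd crush move host1 rack=rack1
    ceph osd crush rule create-replicated rack_replicated_rule default rack
    ceph osd pool set rbd crush_rule rack_replicated_rule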
Change-Id: I4df224f2821872faa2eddec2120832e9a22f4a7c
The balancer module will distribute PGs more evenly across OSDs.
While CRUSH does a good job at this, it is not perfect, and hot spots
(where an OSD has more PGs than its peers) can occur.
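For reference, enabling the balancer by hand (upmap mode requires
Luminous or later clients):

    ceph osd set-require-min-compat-client luminous
    ceph balancer mode upmap
    ceph balancer on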
Change-Id: Ic45a6bf745bdd09a3f5782e9e8bda89c3d3da2aa
- Throttle down snap trimming so as to lessen its performance impact
  (setting just osd_snap_trim_priority is not effective enough to
  throttle down the impact)
osd_snap_trim_sleep: 0.1 (default 0)
osd_pg_max_concurrent_snap_trims: 1 (default 2)
- Align filestore_merge_threshold with upstream Ceph values
(A negative number disables this function, no change in behavior)
filestore_merge_threshold: -10 (formerly -50, default 10)
- Increase RGW pool thread size for more concurrent connections
rgw_thread_pool_size: 512 (default 100)
- Disable in-memory logs for the ms subsystem.
  debug_ms: 0/0 (default 0/5)
- Formatting cleanups
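These can also be injected into running OSDs, e.g. (whether a given
option takes effect without a restart depends on the option):

    ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.1 --osd_pg_max_concurrent_snap_trims 1'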
Change-Id: I4aefcb6e774cb3e1252e52ca6003cec495556467
This PS updates the mgr check to allow use on hosts with FQDNs
defined.
Change-Id: If1cb740e8093fbcafce846234c96db931409b436
Signed-off-by: Pete Birley <pete@port.direct>