Long hostnames can cause the 63 character name limit to be exceeded.
Truncate the hostname if it is longer than 20 characters.
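A minimal sketch of how such a truncation might look in a Helm
template (the helper name and value path are assumptions, not the
chart's actual code):

```yaml
{{/* Hypothetical helper: clamp a hostname to 20 characters so names
     derived from it stay under the 63 character Kubernetes limit. */}}
{{- define "shortHostname" -}}
{{- .Values.hostname | trunc 20 | trimSuffix "-" -}}
{{- end -}}
```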
Change-Id: Ieb7e4dafb41d1fe3ab3d663d2614f75c814afee6
This adds session affinity to Prometheus's ingress, allowing cookies
to be used to maintain session affinity across requests.
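A minimal sketch of such an override, assuming the chart passes
ingress annotations through its values (the key layout here is
illustrative; the annotations are the upstream nginx-ingress ones):

```yaml
network:
  prometheus:
    ingress:
      annotations:
        # Cookie-based session affinity handled by the nginx ingress.
        nginx.ingress.kubernetes.io/affinity: "cookie"
        nginx.ingress.kubernetes.io/session-cookie-name: "prometheus-session"
        nginx.ingress.kubernetes.io/session-cookie-expires: "600"
```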
Change-Id: I2e7e1d1b5120c1fb3ddecb5883845e46d61273de
This updates the Nagios image tag to include the updated plugin
for querying Elasticsearch for alerting on logged events.
Change-Id: Idd61d82463b79baab0e94c20b32da1dc6a8b3634
This PS updates the version of the ingress controller image used.
This brings in the ability to update the ingress configuration without
reloading nginx. There may also need to be some changes for
Prometheus-based monitoring:
* https://github.com/kubernetes/ingress-nginx/blob/master/Changelog.md#0100
Change-Id: Ia0bf3dbb9b726f3a5cfb1f95d7ede456af13374a
Signed-off-by: Pete Birley <pete@port.direct>
This PS updates the ingress chart to allow the status port to be
changed.
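For illustration, an override along these lines could feed the
controller's --status-port flag (the values key is hypothetical;
--status-port is the upstream nginx-ingress-controller flag, which
defaults to 18080):

```yaml
network:
  ingress:
    # Hypothetical key: the port the controller serves its status on.
    status_port: 18080
```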
Change-Id: Ia38223c56806f6113622a809e792b4fedd010d87
Signed-off-by: Pete Birley <pete@port.direct>
Add support for a rack-level CRUSH map. Rack-level CRUSH support is
enabled by using the "rack_replicated_rule" CRUSH rule.
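A minimal sketch of selecting the rule through values, assuming the
usual conf.pool override layout (the chart's actual keys may differ):

```yaml
conf:
  pool:
    default:
      # Replicate across racks instead of hosts.
      crush_rule: rack_replicated_rule
```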
Change-Id: I4df224f2821872faa2eddec2120832e9a22f4a7c
This updates the host used for the Ceph health checks, as we should
be checking the ceph-mgr service directly for Ceph metrics instead
of trying to curl the host directly.
This also changes the ceph_health_check to use the base-os hostgroup
instead of the placeholder ceph-mgr host group, as we're just
executing a simple check against the ceph-mgr service.
This also adds default configuration values for max_concurrent_checks
(60) and check_workers (4) instead of leaving them at the defaults
Nagios uses (0 and the number of cores, respectively).
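A minimal sketch of these two defaults as they might appear in
values (the surrounding key layout is chart-specific and assumed
here; the directive names are standard Nagios ones):

```yaml
conf:
  nagios:
    # 0 means "unlimited"; cap check concurrency explicitly instead.
    max_concurrent_checks: 60
    # Fixed worker count rather than Nagios' core-count default.
    check_workers: 4
```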
Change-Id: Ib4072fcd545d8c05d5e9e4a93085a8330be6dfe0
This updates the Nagios image to use a tag that includes a fix for
the service discovery mechanism used for updating host checks.
After moving the Nagios chart to run in either shared or host PID
namespaces, the service discovery mechanism no longer worked: the
plugin attempted to restart PID 1 instead of determining the
appropriate PID to restart.
For reference, see:
https://review.gerrithub.io/#/c/att-comdev/nagios/+/432205/
Change-Id: Ie01c3a93dd109a9dc99cfac5d27991583546605a
This adds session affinity to Nagios's ingress, allowing cookies to
be used to maintain session affinity across requests.
Change-Id: I6054a92f644dc533dd06d35a2541fb44d46cba88
Change the deployment script for rgw to not use the docker bridge
for the public and cluster network overrides. Instead, calculate the
network values in the same way as the other Ceph multinode
deployment steps.
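A hedged sketch of the idea, deriving the networks from the node's
default-route interface rather than docker0 (the exact calculation
used by the other multinode steps may differ):

```bash
# Pick the interface carrying the default route, then use its subnet
# for both Ceph network overrides instead of the docker bridge's.
dev=$(ip route show | awk '/^default/ {print $5; exit}')
subnet=$(ip -4 addr show "${dev}" | awk '/inet / {print $2; exit}')
CEPH_PUBLIC_NETWORK="${subnet}"
CEPH_CLUSTER_NETWORK="${subnet}"
```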
Change-Id: I2bacd1af1cc331d76a5d61f3b589ca6ef80b1b2e
Request from downstream to use 10GB journal sizes. Journals are
currently created manually, but there is upcoming work to have the
journals created by the Helm charts themselves. This value needs to
be put in as a default to ensure journals are sized appropriately.
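For illustration, the default could land in values as Ceph's
osd_journal_size option, which is expressed in MB (the key placement
below is an assumption):

```yaml
conf:
  ceph:
    osd:
      # 10 GB journal, expressed in MB as Ceph expects.
      osd_journal_size: 10240
```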
Change-Id: Idaf46fac159ffc49063cee1628c63d5bd42b4bc6
This reverts commit 5c2859c3e9026e464bf0c35b591aaae810ff2a1c.
This commit breaks the ability to declare users for use with rally/helm test, and needs to be refactored to match the commit message's intent.
Change-Id: I2bc66ef40694c277058b4324b8a3528f4f25d1d1
Currently the cronjob is broken due to syntax and
permission issues.
Additionally, move the cronjob schedule from once a month to
every 15 minutes, and automatically disable the job
unless it is explicitly enabled.
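A minimal sketch of the resulting knobs (the key names are
illustrative, not the chart's actual layout):

```yaml
jobs:
  # Hypothetical job stanza: run every 15 minutes once enabled.
  cron:
    enabled: false
    schedule: "*/15 * * * *"
```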
Change-Id: Id72bdb286c805ccb0ea4e9fcf65fabca94a180dd
The ceph_health check in Nagios incorrectly sets the warning and
error level to 0. The ceph_health_status metric's value of 0
indicates the cluster is healthy, while 1 indicates a warning and
2 indicates an error state. The Nagios check for ceph_health is
updated to reflect these values.
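Illustratively, the thresholds map onto the metric like this (the
plugin name and flags below are placeholders, not the chart's actual
check command):

```yaml
# WARNING when ceph_health_status >= 1, CRITICAL when >= 2; a value
# of 0 means healthy and must not be used as a threshold.
command: query_prometheus --metric ceph_health_status --warning 1 --critical 2
```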
Change-Id: Iffe80f1c34f6edee6370dd7e707e5f55f83f1ec1
This updates the Prometheus scrape configuration to use the
service-based discovery mechanism instead of endpoints. This removes
the issues associated with deploying multiple ceph-mgr replicas.
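A minimal sketch of service-based discovery for the ceph-mgr metrics
endpoint (the job name and service name are assumptions):

```yaml
- job_name: ceph-mgr
  kubernetes_sd_configs:
    - role: service
  relabel_configs:
    # Keep only the ceph-mgr service; discovery yields one target per
    # service, so multiple replicas no longer cause duplicate scrapes.
    - source_labels: [__meta_kubernetes_service_name]
      regex: ceph-mgr
      action: keep
```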
Change-Id: I2c557af0c7200d0c4aea646c5f9ecd1a070db33e
OSH_INFRA_PATH is never set in the openstack-helm-infra repository,
as all the references use relative paths.
The keystone script, however, does not use a relative path, and
relies on OSH_INFRA_PATH being defined to work.
This is a problem because, when it is not defined, the expected path
for the ldap chart is /ldap, which is an incorrect path.
This fixes the problem by ensuring the path is relative.
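A hedged sketch of the failure and the fix (the actual script lines
differ):

```bash
# Broken: with OSH_INFRA_PATH unset, this expands to "/ldap".
helm upgrade --install ldap ${OSH_INFRA_PATH}/ldap

# Fixed: default to the repository root so the path stays relative.
: "${OSH_INFRA_PATH:=.}"
helm upgrade --install ldap ${OSH_INFRA_PATH}/ldap
```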
Change-Id: I04a8d5c074b7c1e6fa66617bbb907f2ad4dcb3af
This moves Nagios to run as child processes of either
the pause container or the host's init system (for k8s <1.10)
to prevent defunct process sprawl.
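A minimal sketch of the two pod-spec options (only one applies per
cluster version; the field names are the standard Kubernetes ones):

```yaml
spec:
  # k8s >= 1.10: orphaned children reparent to the pause container.
  shareProcessNamespace: true
  # k8s < 1.10: fall back to the host's init system instead.
  # hostPID: true
```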
Change-Id: I6a93d446577674b0b012f9567d5e6a5794ebc44b
The balancer module will distribute PGs more evenly across OSDs.
While CRUSH does a good job at this, it is not perfect, and hot spots
(where an OSD has more PGs than its peers) can occur.
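For reference, enabling the module by hand looks roughly like this
(the chart automates the equivalent; upmap mode requires
Luminous-or-newer clients):

```bash
ceph mgr module enable balancer   # load the mgr module
ceph balancer mode upmap          # or crush-compat for older clients
ceph balancer on                  # start background optimization
```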
Change-Id: Ic45a6bf745bdd09a3f5782e9e8bda89c3d3da2aa
With the new Ceph Luminous release the existing ceph.rules are
obsolete:
- Added a new rule for the ceph-mgr count
- Changed ceph_monitor_quorum_count to ceph_mon_quorum_count
- Updated ceph_cluster_usage_high, as ceph_cluster_used_bytes and
  ceph_cluster_capacity_bytes aren't valid
- Updated ceph_placement_group_degrade_pct_high, as ceph_degraded_pgs
  and ceph_total_pgs aren't valid
- Updated ceph_osd_down_pct_high, as ceph_osds_down and ceph_osds_up
  aren't available; ceph_osd_up is available but ceph_osd_down isn't,
  so the down count is calculated as count(ceph_osd_up==0) and the
  total OSD count as count(ceph_osd_metadata) (see the sketch after
  this list)
- Removed ceph_monitor_clock_skew_high, as the metric
  ceph_monitor_clock_skew_seconds isn't valid anymore
- Added new alarms ceph_osd_down and ceph_osd_out
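A minimal sketch of the recalculated OSD-down alert (the threshold
and alert name here are illustrative):

```yaml
- alert: ceph_osd_down_pct_high
  # Percentage of down OSDs, built from the metrics that do exist.
  expr: count(ceph_osd_up == 0) / count(ceph_osd_metadata) * 100 > 10
  labels:
    severity: warning
```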
Implements: prometheus ceph.rules changes with new valid metrics
Closes-Bug: #1800548
Change-Id: Id68e64472af12e8dadffa61373c18bbb82df96a3
Signed-off-by: Kranthi Guttikonda <kranthi.guttikonda@b-yond.com>
This patch set cleans up the script to be consistent with other OSH
installation scripts.
Change-Id: I212cd0cf0e818f1fc924b9b690d18f5d107b850b
Signed-off-by: Tin Lam <tin@irrational.io>
This updates the ceph-mon and ceph-osd charts to use the release
name in the hostpath used for mounting the /var/log/ceph
directories. This gives us a mechanism for creating unique log
directories for multiple releases of the same chart without the
need for specifying an override for each deployment of that chart.
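A minimal sketch of the per-release mount (the volume name and base
path are illustrative):

```yaml
volumes:
  - name: pod-var-log
    hostPath:
      # Unique per release, so concurrent releases don't share logs.
      path: /var/log/ceph/{{ .Release.Name }}
```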
Change-Id: Ie6e05b99c32f24440fbade02d59c7bb14d8aa4c8
- Throttle down snap trimming to lessen its performance impact
  (setting just osd_snap_trim_priority isn't effective enough to
  throttle down the impact)
  osd_snap_trim_sleep: 0.1 (default 0)
  osd_pg_max_concurrent_snap_trims: 1 (default 2)
- Align filestore_merge_threshold with upstream Ceph values
  (a negative number disables this function, no change in behavior)
  filestore_merge_threshold: -10 (formerly -50, default 10)
- Increase the RGW pool thread size for more concurrent connections
  rgw_thread_pool_size: 512 (default 100)
- Disable in-memory logs for the ms subsystem
  debug_ms: 0/0 (default 0/5)
- Formatting cleanups
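For illustration, these options land in the Ceph configuration
roughly as follows (the values.yaml section layout is assumed here):

```yaml
conf:
  ceph:
    global:
      debug_ms: "0/0"
    osd:
      osd_snap_trim_sleep: 0.1
      osd_pg_max_concurrent_snap_trims: 1
      filestore_merge_threshold: -10
    rgw:
      rgw_thread_pool_size: 512
```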
Change-Id: I4aefcb6e774cb3e1252e52ca6003cec495556467