If the launch_cluster_Monitor() and launch_leader_election() threads
operate on the configmap at the same time, an error 'Exception in
thread "Thread-1"' is raised. This error causes the thread to get
stuck: the configmap is no longer updated and the error "data too old"
is reported. Catching only kubernetes API exceptions is not enough;
catching all exceptions is more appropriate.
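The race and the broadened exception handling can be sketched as
below. This is a minimal illustrative sketch, not the chart's actual
code: the lock, the configmap dict, and the patch values are all
hypothetical stand-ins for the real kubernetes configmap update.

```python
import threading

# Hypothetical sketch: serialize configmap updates between the
# cluster-monitor and leader-election threads with a single lock,
# and catch all exceptions rather than only kubernetes API errors.
configmap_lock = threading.Lock()
configmap = {"data": {"revision": 0}}

def update_configmap(patch):
    # Holding the lock prevents the two threads from patching the
    # configmap concurrently, which previously wedged a thread.
    with configmap_lock:
        try:
            configmap["data"].update(patch)
            configmap["data"]["revision"] += 1
        except Exception:  # broader than an API-only exception, per the fix
            pass  # the real monitor loop would log and retry here

threads = [
    threading.Thread(target=update_configmap, args=({"leader": f"mon-{i}"},))
    for i in range(2)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(configmap["data"]["revision"])  # 2
```

With the lock held, both updates land and the revision counter
advances deterministically instead of one thread getting stuck.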
Change-Id: I6baa9ece474f9c937fe9bce2231ef500562e0406
We need the flexibility to add a securityContext to the ks-user job,
so that it can be executed without elevated privileges.
Change-Id: I24544015816d57d86c1e69f44b90b6b0271e76a4
The fedora and centos jobs have not been used or maintained for
quite some time. This change removes them and the related notes.
Also removed an outdated note about disabling all the experimental
and periodic jobs.
Change-Id: Ic8eb628e21c49957bdcd10a8d69d850ec921b6d6
This change updates the ceph.conf update job as follows:
* renames it to "ceph-ns-client-ceph-config"
* consolidates some Roles and RoleBindings
This change also moves the logic of figuring out the mon_host addresses
from the kubernetes endpoint object to a snippet, which is used by the
various bash scripts that need it.
In particular, this logic is added to the rbd-pool job, so that it does
not depend on the ceph-ns-client-ceph-config job.
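The mon_host discovery logic shared by the snippet can be sketched as
follows. This is an illustrative sketch only: the function name and
the default v1/v2 ports (6789 and 3300) are assumptions based on the
format shown elsewhere in this log, not the snippet's actual code.

```python
# Hypothetical sketch: build a Nautilus-style mon_host string from the
# IP addresses discovered on the ceph-mon kubernetes endpoint object.
def build_mon_host(addresses, v1_port=6789, v2_port=3300):
    # Each mon contributes a bracketed pair of v1 and v2 endpoints.
    entries = [
        f"[v1:{ip}:{v1_port}/0,v2:{ip}:{v2_port}/0]" for ip in addresses
    ]
    return ",".join(entries)

print(build_mon_host(["172.29.1.139", "172.29.1.140"]))
# [v1:172.29.1.139:6789/0,v2:172.29.1.139:3300/0],[v1:172.29.1.140:6789/0,v2:172.29.1.140:3300/0]
```

Centralizing this in one snippet lets each bash script that needs
mon_host derive it directly, instead of depending on the
ceph-ns-client-ceph-config job having run first.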
Note that the ceph.conf update job has a race with several other jobs
and pods that mount ceph.conf from the ceph-client-etc configmap while
it is being modified. Depending on the restartPolicy, pods (such as the
one created for the ceph-rbd-pool job) may linger in StartError state.
This is not addressed here.
Change-Id: Id4fdbfa9cdfb448eb7bc6b71ac4c67010f34fc2c
This change fixes two issues with the recently introduced [0] job that
updates "ceph.conf" inside ceph-client-etc configmap with a discovered
mon_host value:
1. adds missing metadata.labels to the job
2. allows the job to be disabled
(fixes rendering when manifests.job_ns_client_ceph_config = false)
0: https://review.opendev.org/c/openstack/openstack-helm-infra/+/812159
Change-Id: I3a8f1878df4af5da52d3b88ca35ba0b97deb4c35
The log-runner previously was not included in the mandatory access
control (MAC) annotation for the OSD pods, which means it could not
have any AppArmor profile applied to it. This patchset adds that
capability for that container.
Change-Id: I11036789de45c0f8f66b51e15f2cc253e6cb230c
A previous change to move the linting job to helm3 removed the
chart testing role. This change adds it back.
Change-Id: Ifb8b1885b4dbe8d964f46347c8c510c743af91f4
This reverts commit 122dcef6295e1b62c113476737c29b8b031fbe85.
https://review.opendev.org/c/openstack/openstack-helm-infra/+/805246
The changes from the above patchset are a result of upgrading
Elasticsearch and Kibana images to v7.14. This image has been
reverted back to v7.9.2. As such, these changes are no longer
correct.
Change-Id: I44e9993002cbf1d2c4f5cb23d340b01bad521427
This change adds a condition to ensure that an IP address was
obtained for a ceph-mon kubernetes endpoint before building the
expected endpoint string and checking it against the monmap. If an
IP address isn't available, the check is skipped for that mon.
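The guard can be sketched as below. This is a hypothetical sketch of
the condition, not the chart's actual script: the data shape and the
v1/v2 ports are assumptions for illustration.

```python
# Hypothetical sketch: only build the expected endpoint string for
# mons whose kubernetes endpoint carries an IP address; mons without
# one are skipped rather than checked against the monmap.
def endpoints_to_check(mons):
    checks = []
    for mon in mons:
        ip = mon.get("ip")
        if not ip:
            continue  # no address yet: skip the monmap check for this mon
        checks.append(f"[v1:{ip}:6789/0,v2:{ip}:3300/0]")
    return checks

print(endpoints_to_check([{"name": "a", "ip": "10.0.0.1"}, {"name": "b"}]))
```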
Change-Id: I45a2e2987b5ef0c27b0bb765f7967fcce1af62e4
As ceph clients expect the ceph_mon config as shown below for Ceph
Nautilus and later releases, this change updates the ceph-client-etc
configmap to reflect the correct mon endpoint specification.
mon_host = [v1:172.29.1.139:6789/0,v2:172.29.1.139:3300/0],
[v1:172.29.1.140:6789/0,v2:172.29.1.140:3300/0],
[v1:172.29.1.145:6789/0,v2:172.29.1.145:3300/0]
Change-Id: Ic3a1cb7e56317a5a5da46f3bf97ee23ece36c99c
The ceph-mon-check pod only knew about the v1 port before, and didn't
have the proper mon_host configuration in its ceph.conf file. This
patchset adds knowledge about the v2 port also and correctly configures
the ceph.conf file. Also fixes a namespace hardcoding that was found
in the last ceph-mon-check fix.
Change-Id: I460e43864a2d4b0683b67ae13bf6429d846173fc
In cases where the pool deletion feature [0] is used, but the pool does
not exist, a pool is created and then subsequently deleted.
This was broken by the performance optimizations introduced with [1], as
the job is trying to delete a pool that does not exist (yet).
This change makes the ceph-rbd-pool job wait for manage_pools to finish
before trying to delete the pool.
0: https://review.opendev.org/c/792851
1: https://review.opendev.org/c/806443
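The ordering fix can be sketched as below. This is an illustrative
sketch only: the real job is a bash script, and the function names and
pool names here are hypothetical stand-ins.

```python
import threading

# Hypothetical sketch: pool management runs concurrently; deletion
# must wait for it to finish, so the job does not try to delete a
# pool that has not been created yet.
pools = set()

def manage_pools(names):
    for name in names:
        pools.add(name)  # create/update each pool

def delete_pool(name):
    pools.discard(name)

manager = threading.Thread(target=manage_pools, args=(["rbd", "doomed"],))
manager.start()
manager.join()  # wait for manage_pools to finish before deleting, per the fix
delete_pool("doomed")
print(sorted(pools))  # ['rbd']
```

Joining on the management step before deleting restores the ordering
that the performance optimizations in [1] had broken.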
Change-Id: Ibb77e33bed834be25ec7fd215bc448e62075f52a
With the move to helm v3, helm status requires a namespace to be specified, but doing so breaks helm v2 compatibility. This change removes the usage of helm serve in openstack-helm-infra's deployment scripts.
Change-Id: I649512e17fc62049fef5b9d5e05c69c0e99635f9
With the move to helm v3, helm status requires a namespace to be specified, but doing so breaks helm v2 compatibility. This change removes the usage of helm serve in openstack-helm-infra's deployment scripts.
Change-Id: I7ed4a88fca679b1d27c74f0e260e690093fdf591
With the move to helm v3, helm status requires a namespace to be specified, but doing so breaks helm v2 compatibility. This change removes the usage of helm serve in openstack-helm-infra's deployment scripts.
Change-Id: I7d17d2ff4a44fc8d16cc653b33253cce536bfce1
With the move to helm v3, helm status requires a namespace to be specified, but doing so breaks helm v2 compatibility. This change removes the usage of helm serve in openstack-helm-infra's deployment scripts.
Change-Id: I7a14e510fb1cfadcf2e124314b52c7cac4ac0af1
With the move to helm v3, helm status requires a namespace to be specified, but doing so breaks helm v2 compatibility. This change removes the usage of helm serve in openstack-helm-infra's deployment scripts.
Change-Id: If27f87fceb79162458f22c07a35fe813b6026830
With the move to helm v3, helm status requires a namespace to be specified, but doing so breaks helm v2 compatibility. This change removes the usage of helm serve in openstack-helm-infra's deployment scripts.
Change-Id: Icc845f0ee15740802e97a4749e7181d6f372e4b2
A race condition exists that can cause the mon-check pod to delete
mons from the monmap that are only down temporarily. This sometimes
causes issues with the monmap when those mons come back up. This
change adds a check to see if the list of mons in the monmap is
larger than expected before removing anything. If not, the monmap
is left alone.
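The size guard can be sketched as follows. This is a hypothetical
sketch of the condition, not the mon-check script itself; the function
name and data shapes are assumptions for illustration.

```python
# Hypothetical sketch of the mon-check guard: only prune mons from the
# monmap when it holds more entries than expected, so mons that are
# merely down temporarily are left alone.
def mons_to_remove(monmap_mons, expected_mons):
    if len(monmap_mons) <= len(expected_mons):
        return []  # monmap is not oversized: leave it alone
    return [m for m in monmap_mons if m not in expected_mons]

print(mons_to_remove(["a", "b", "c"], ["a", "b"]))  # ['c']
print(mons_to_remove(["a", "b"], ["a", "b"]))       # []
```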
Change-Id: I43b186bf80741fc178c6806d24c179417d7f2406
With the move to helm v3, helm status requires a namespace to be specified, but doing so breaks helm v2 compatibility. This change removes the usage of helm serve in openstack-helm-infra's deployment scripts.
Change-Id: I507f6e786b5e35741030c500368638d586c99c12
With the move to helm v3, helm status requires a namespace to be specified, but doing so breaks helm v2 compatibility. This change removes the usage of helm serve in openstack-helm-infra's deployment scripts.
Change-Id: Ia24fadf575dc5230246f3efa32a00fa1e3614abf
With the move to helm v3, helm status requires a namespace to be specified, but doing so breaks helm v2 compatibility. This change removes the usage of helm serve in openstack-helm-infra's deployment scripts.
Change-Id: I2264d29cd2dad1bc7636de8247ebec7f611a1f16
With the move to helm v3, helm status requires a namespace to be specified, but doing so breaks helm v2 compatibility. This change removes the usage of helm serve in openstack-helm-infra's deployment scripts.
Change-Id: I4cf135dc9852506cc2e853c9faa8544b7b2e2fae
With the move to helm v3, helm status requires a namespace to be specified, but doing so breaks helm v2 compatibility. This change removes the usage of helm serve in openstack-helm-infra's deployment scripts.
Change-Id: I21ba5d8ca6f86954c793268142419e0a9e083943