This updates all Ceph daemon startup scripts for the msgr2 protocol
and adds the v2 port to the mon_host config.
It also removes the mon_addr setting, since mon_host already covers it.
v1 default port: 6789
v2 default port: 3300
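For illustration, a minimal sketch of a mon_host entry carrying both
the msgr2 and legacy msgr1 ports (the monitor address here is a
placeholder, not what the charts actually render):

cat >> /etc/ceph/ceph.conf <<EOF
[global]
mon_host = [v2:10.0.0.1:3300,v1:10.0.0.1:6789]
EOF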
Change-Id: I3d95edbd89f5ac8b40a34f41c1099311cee4f875
This upgrades the Ceph version from 14.2.5 to 14.2.7 and also
updates the Ceph provisioners to use the latest code from quay.io:
- rbd-provisioner: quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11
- cephfs-provisioner: quay.io/external_storage/cephfs-provisioner:v2.1.0-k8s1.11
This also updates the verbs in the provisioners' clusterrole to support
the new code.
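As an illustration only (the values keys and release name below are
assumptions, not the charts' confirmed schema), the image updates can
be exercised as overrides at deploy time:

helm upgrade --install ceph-provisioners ./ceph-provisioners \
  --set images.tags.rbd_provisioner=quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 \
  --set images.tags.cephfs_provisioner=quay.io/external_storage/cephfs-provisioner:v2.1.0-k8s1.11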
Change-Id: Ia94129574610bb5c800a6941804e58ca3aefce65
Validate that the container bucket exists and, if so,
delete it and its objects that were orphaned by a
failed helm-test deployment.
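A minimal sketch of that cleanup, assuming a configured s3cmd client
and a hypothetical bucket name:

# Remove leftover objects and the bucket itself if a previous run left it.
if s3cmd ls | grep -q "s3://openstack-helm-test$"; then
  s3cmd del --recursive --force "s3://openstack-helm-test"
  s3cmd rb "s3://openstack-helm-test"
fi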
Change-Id: Ibaa6d0f6dd36b319c354b65e43dc6053418f4d1d
This patch set updates and tests the apiVersion for rbac.authorization.k8s.io
from v1beta1 to v1 in preparation for its removal in k8s 1.20.
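For illustration, the bump amounts to changes of this shape across the
chart templates (the paths are illustrative):

grep -rl 'rbac.authorization.k8s.io/v1beta1' ./*/templates \
  | xargs -r sed -i 's|rbac.authorization.k8s.io/v1beta1|rbac.authorization.k8s.io/v1|g'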
Change-Id: I4e68db1f75ff72eee55ecec93bd59c68c179c627
Signed-off-by: Tin Lam <tin@irrational.io>
This change updates the Ceph charts to use Ceph Nautilus images
built on Ubuntu Bionic instead of Xenial. The mirror that hosts
Ceph packages only provides Nautilus packages for Bionic at
present, so this is necessary for Nautilus deployment.
There are also several configuration and scripting changes
included to provide compatibility with Ceph Nautilus. Most of
these simply allow existing logic to execute for Nautilus
deployments, but some logical changes are required to support
Nautilus as well.
NOTE: The cephfs test has been disabled because it was failing
the gate. This test has passed in multiple dev environments, and
since cephfs isn't used by any openstack-helm-infra components we
don't want this to block getting this change merged. The gate
issue will be investigated and addressed in a subsequent patch
set.
Change-Id: Id2d9d7b35d4dc66e93a0aacc9ea514e85ae13467
Using envsubst to substitute value overrides in the feature gate
caused conflicts, as gotpl gets templated into those overrides.
This adds '%%%REPLACE_${var}%%%' placeholders and uses sed to perform
the substitution instead, which addresses the issue.
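A hedged illustration of the placeholder scheme (the file and variable
names are invented for the example):

# The override file contains e.g. %%%REPLACE_RELEASE_NAME%%% where a
# value should be injected; sed substitutes it without interpreting
# anything else (such as gotpl expressions) in the file.
sed -e "s|%%%REPLACE_RELEASE_NAME%%%|${RELEASE_NAME}|g" \
  values_overrides/example.yaml > /tmp/example-rendered.yaml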
Change-Id: I9d3d630b53a2f3d828866229a5072bb04440ae15
Signed-off-by: Tin Lam <tin@irrational.io>
This patch set adds logic to generate Kubernetes egress network policy
rules based on the dependencies specified in values.yaml. It also sets
up the necessary default network policy for the OSH gate.
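For orientation only, a generated egress rule resembles the following
manifest shape (names and selectors are illustrative, not what the
charts actually render):

tee /tmp/egress-netpol-example.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ceph-rgw-egress
spec:
  podSelector:
    matchLabels:
      application: ceph
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              application: ceph
EOF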
Change-Id: I1ac649cc9debb5d1f4ea0a32f506dcda4d8b8536
Signed-off-by: Tin Lam <tin@irrational.io>
This updates charts that consume images built from osh-images to
use tags other than the :latest tags. This will be followed up
with the definition of jobs to allow for vetting of updated
images, since relying on :latest tags assumes that any change merged
into osh-images results in functionally correct behavior (which has
traditionally not been the case).
Change-Id: I181aa56ed187604dc7583d8081e53cc69eb27310
Signed-off-by: Steve Wilkerson <sw5822@att.com>
This updates the kubernetes-entrypoint image reference to consume
the publicly available kubernetes-entrypoint image that is built
and maintained under the airshipit namespace, as the stackanetes
image is no longer actively maintained
Change-Id: I5bfdc156ae228ab16da57569ac6b05a9a125cb6a
Signed-off-by: Steve Wilkerson <sw5822@att.com>
The PS updates the helm tests for Ceph-RGW and the Ceph provisioners:
- Check several randomly generated objects instead of one static
  object (see the sketch below).
- Improve the output of the tests.
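A minimal sketch of the random-object check, assuming a configured
s3cmd client and a hypothetical test bucket:

# Upload a few randomly generated objects and verify each one lists back.
for i in 1 2 3; do
  head -c 1024 /dev/urandom > "/tmp/test-object-${i}"
  s3cmd put "/tmp/test-object-${i}" "s3://openstack-helm-test/test-object-${i}"
done
s3cmd ls "s3://openstack-helm-test"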
Change-Id: I0733d7c47a2a8bdf30b0d6a97c1a0331eb5030c8
The PS allows the tests to run when both options (rgw_ks and rgw_s3)
are enabled at the same time.
Change-Id: I262baa38b7c65ff9335a3db6a6e2a454c3ff3f5f
This fixes a file permission issue with
/var/lib/ceph/bootstrap-rgw/ceph.keyring, whose owner ends up being
root.
This happens when a node running rgw reboots and the rgw pod then
fails at init after the reboot; it occurs on single-node deployments.
Issue:
ceph-rgw-5db485fbd9-dv778 0/1 Init:CrashLoopBackOff 5 6m49s
Logs:
+ chown -R ceph. /run/ceph/ /var/lib/ceph/bootstrap-rgw /var/lib/ceph/radosgw
/var/lib/ceph/tmp
chown: changing ownership of
'/var/lib/ceph/bootstrap-rgw/ceph.keyring': Operation not permitted
Change-Id: Idcb648c205053b2f03357b59173e70e02f28688c
This updates the helm version from 2.13.1 to 2.14.1
Change-Id: I619351d846253bf17caa922ad7f7b0ff19c778a2
Signed-off-by: Steve Wilkerson <sw5822@att.com>
We now have a Zuul-based process for building the osh-images images,
so the charts should point to those images by default instead of
pointing to stale images.
Without this, the osh-images build process goes completely unused
(and is completely opaque to deployers), and updating the osh-images
process or patching its code has no impact on OSH.
This should fix that.
Change-Id: Ic00bd98c151669dc2485cd88e0e8c2ab05445959
This PS exposes the anti-affinity weight value, including a default,
to be consumed by the updated htk function.
Change-Id: Id8eb303674764ef8b0664f62040723aaf77e0a54
This updates the ceph-rgw chart to include the pod
security context on the pod template.
This also adds the container security context.
Change-Id: Ic75a1decfe156e1e8aa2ebe38238f6b77abb71f8
This PS updates the ceph charts to make /etc/ceph an emptydir
uniformly across all charts, both ensuring no default config is loaded,
and also permitting read-only filesystems to back the containers.
Additionally /run is uniformly applied across all long running pods
as a memory backed emptydir.
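A hedged illustration (the volume names here are invented) of the
volume shape this gives each pod:

tee /tmp/ceph-volumes-fragment.yaml <<'EOF'
volumes:
  - name: etcceph
    emptyDir: {}
  - name: pod-run
    emptyDir:
      medium: Memory
EOF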
Change-Id: I00d1b15758b7eb4476fb950ddcb38db9a5149ad0
Signed-off-by: Pete Birley <pete@port.direct>
This PS adds emptydirs backing the /tmp directory in pods, which
is required in most cases for full operation when using a read only
filesystem backing the container.
Additionally some yaml indent issues are resolved.
Change-Id: I8b7f1614da059783254aa6efc09facf23fca3cad
Signed-off-by: Pete Birley <pete@port.direct>
This addresses slight issues with the ceph-osd, ceph-provisioners,
and ceph-rgw charts. Those issues include:
- Remove duplicate test: key in ceph-osd dependencies
- Add missing image repo sync job to ceph-provisioner and rgw
- Use correct job name for image repo sync dependencies in charts
- Remove incorrect keystone service dependency for ceph-rgw, as
the keystone jobs are dependent on the keystone service
This also updates the ceph-rgw chart to use dynamic dependencies
based on whether keystone auth or s3 auth is used
Change-Id: Id3b3f289bdd4ca4d1b2e9b6267b12427e422a08d
This adds the release-annotation to the pod spec for the charts in
openstack-helm-infra. This also adds missing configmap annotations
to charts in openstack-helm-infra
Change-Id: Ie23f0c16a7a21d3929e98928db2bbcef69ae6490
Currently both 'deployment:rgw_keystone_user_and_endpoints'
and 'conf: rgw_ks' are used and set to true to deploy
ceph-rgw with keystone integration.
Going forward, we should only use 'conf: rgw_ks: enabled: true'
to deploy ceph-rgw with keystone integration.
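For illustration (the release name and chart path are assumptions),
the single remaining toggle can be set as an override at deploy time:

helm upgrade --install ceph-rgw ./ceph-rgw \
  --namespace=openstack \
  --set conf.rgw_ks.enabled=true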
Change-Id: I17aecd4f977ed897bb0771edc9acafd4479777d1
Remove overrides that are already set, or raised higher, in the
Mimic release of Ceph for RGW.
rgw_thread_pool_size now defaults to 512.
objecter_inflight_ops is now also set to 24576 by default for RGW.
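If needed, the resulting defaults can be checked on a running RGW
daemon over its admin socket; a hedged sketch (the socket path varies
by deployment):

ceph daemon /var/run/ceph/ceph-client.rgw.$(hostname -s).asok \
  config get rgw_thread_pool_size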
Change-Id: I982f6bc08954864afa5ad29923707e1bf64ba9fa
Currently there is a bug in the beast code that makes it fail
during the initial lookup for a keystone user map. For the time
being we will continue to use civetweb when keystone is present
until this issue is resolved.
Change-Id: I56bcd77f38adb3763d35f46443c1403816d1dcea
This updates the helm-toolkit script for creating rgw s3 users
to first check whether a user exists, then create the user if it does
not exist or modify the user's keys if it does. This is
accomplished by using jq to identify all existing access keys for
the specified user, removing those key pairs by access key, and
then modifying the existing user with the supplied access/secret
key pair for the given user.
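A hedged sketch of that flow (the uid and key variables below are
placeholders, not the script's actual names):

if radosgw-admin user info --uid="s3-admin" > /tmp/user.json 2> /dev/null; then
  # Strip any existing S3 key pairs, identified by their access keys.
  for access_key in $(jq -r '.keys[].access_key' /tmp/user.json); do
    radosgw-admin key rm --uid="s3-admin" --key-type=s3 --access-key="${access_key}"
  done
  radosgw-admin user modify --uid="s3-admin" \
    --access-key="${S3_ACCESS_KEY}" --secret="${S3_SECRET_KEY}"
else
  radosgw-admin user create --uid="s3-admin" --display-name="s3-admin" \
    --access-key="${S3_ACCESS_KEY}" --secret="${S3_SECRET_KEY}"
fi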
This also updates the ceph-rgw chart to use the helm-toolkit s3
user script for creating the admin s3 user instead of using a
similar script defined directly in the ceph-rgw chart
Change-Id: I575b66415d44db7bb752102e45595305d86e623b
- Since the admin key has been removed, we need to also replace
radosgw-admin with openstack container commands.
- Additionally, expand the helm tests for keystone to also upload
and validate an object in RGW (similar to the S3 helm tests); see
the sketch below.
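A minimal sketch of the keystone-path check, assuming a configured
openstack CLI and made-up container/object names:

# Upload an object through the keystone-authenticated swift API,
# then confirm it lists back.
cd /tmp
head -c 1024 /dev/urandom > rgw-test-object
openstack container create rgw-helm-test
openstack object create rgw-helm-test rgw-test-object
openstack object list rgw-helm-test | grep -q rgw-test-object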
Change-Id: I4be603121fc227dd48f83704e99bba94341c4c09
This changes the application label for the ceph-rgw storage init
job to 'ceph' to match the other jobs defined for the chart, rather
than use 'ceph-rgw'
Change-Id: Ia0b679567161e91241250f0c250d24a45c5ebb92
- Support using custom client params for S3 configurations
- Move common tuning for S3 and Keystone into their own
configuration option
- Clean up the rgw helm tests, since copying the ceph admin key is
no longer required
- Clean up duplicate portions of the code for configuring the RGW
backend and frontend port
- Add an rgw helm test check for the osh-infra-logging gates
Change-Id: I46dbb4c45b0b96f5cf555077e49d2e09a1171424
This PS updates the default image in the chart to the latest OSH image.
Change-Id: Ib8d2a72ad48049fe02560dc4405f0088890b6f64
Signed-off-by: Pete Birley <pete@port.direct>
This PS updates the helm test driven pod template:
* places the rgw keystone conditional in the correct location
* removes unneeded roles and bindings
* adds dependency on the rgw being running
* corrects spelling error
* corrects s3cmd to work with version 1.6.1
Change-Id: I665dba9fdca1d840f4d864e32f07b6185af51d25
Signed-off-by: Pete Birley <pete@port.direct>
Use the Beast backend only when Mimic binaries are installed.
Otherwise use civetweb if the binaries are from Ceph Luminous.
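A hedged sketch of the kind of switch involved (the version parsing
and port are illustrative):

# Beast on Mimic (13.x) and later; civetweb on Luminous.
CEPH_MAJOR=$(ceph --version | awk '{print $3}' | cut -d. -f1)
if [ "${CEPH_MAJOR}" -ge 13 ]; then
  RGW_FRONTEND="beast port=8088"
else
  RGW_FRONTEND="civetweb port=8088"
fi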
Change-Id: Ia7cb64d8db7eed2fc0c57387b26a27163af34520
Change the release of Ceph from 12.2.3 (Luminous) to the latest 13.2.2
(Mimic). Additionally, use supported RHEL/CentOS images rather than
Ubuntu images, which are now considered deprecated by Red Hat.
- Uplift all Ceph images to the latest 13.2.2 ceph-container images.
- RadosGW by default will now use the Beast backend.
- RadosGW has relaxed settings enabled for S3 naming conventions.
- Increased RadosGW resource limits due to backend change.
- All Luminous specific tests now test for both Luminous/Mimic.
- Gate scripts will remove all non-required ceph packages. This is
required to avoid conflicts with the uid/gid that the Red Hat container
uses.
Change-Id: I9c00f3baa6c427e6223596ade95c65c331e763fb
Set rgw_override_bucket_index_max_shards to 8 (default: 0).
By default, create 8 shards per bucket with Ceph RadosGW. This allows
up to ~800k-1M objects in a bucket before performance slow-downs
appear. The only downside to this change is that a directory listing
for a bucket may take slightly longer to finish.
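For illustration, the setting corresponds to a ceph.conf entry of this
shape (the section name is a placeholder):

cat >> /etc/ceph/ceph.conf <<EOF
[client.radosgw.gateway]
rgw_override_bucket_index_max_shards = 8
EOF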
Change-Id: I96c7ac81501a41d29927e102a6029bf432bd3d21
This PS implements the helm-toolkit function to generate the
Egress in the Kubernetes network policy manifest based on overridable
values.
It also enables the K8s network policy in the osh-infra gate.
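As an illustration only (the key layout below is an assumption, not
the function's confirmed schema), the overrides driving the generated
Egress block might look like:

tee /tmp/netpol-overrides.yaml <<'EOF'
manifests:
  network_policy: true
network_policy:
  ceph:
    egress:
      - {}
EOF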
Change-Id: Icbe2a18c98dba795d15398dcdcac64228f6a7b4c