openstack-armada-app/openstack-helm/debian/deb_folder/patches
kgoncalv 75278a856c Allow Rook Ceph to auto-estimate PGs per pool
This change allows Rook Ceph to auto-estimate the number of placement
groups (PGs) per pool.

Placement groups (PGs) are subsets of each logical Ceph pool.
Placement groups perform the function of placing objects (as a group)
into OSDs. Ceph manages data internally at placement-group
granularity, which scales better than managing individual RADOS
objects.

This fixes an issue on simplex environments with only 1 OSD, where a
warning alarm is raised because there are too many PGs per OSD: the
recommended limit is 250, and once the application is applied the
total exceeds that limit.
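
For illustration, a minimal sketch of the check behind that alarm,
using the pool sizes from the simplex "without the patch" listing
below; the replica count, OSD count, and limit are assumptions, not
the chart's actual logic:

# Rough estimate of the "too many PGs per OSD" condition (example values).
pools = {
    ".mgr": 32,
    "kube-cephfs-metadata": 16,
    "kube-rbd": 32,
    "kube-cephfs-data": 32,
    "images": 64,
    "cinder.backups": 64,
    "cinder-volumes": 64,
}
replication = 1           # simplex: single replica (assumption)
num_osds = 1              # simplex: only one OSD
mon_max_pg_per_osd = 250  # recommended per-OSD PG limit

pgs_per_osd = sum(pools.values()) * replication / num_osds
if pgs_per_osd > mon_max_pg_per_osd:
    print(f"too many PGs per OSD: {pgs_per_osd:.0f} > {mon_max_pg_per_osd}")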

Rook Ceph uses the Reef release, which does not require specifying the
pg num at pool creation [1], while the Nautilus release requires the
pg num to be specified at pool creation [2].

Given the different approaches between those versions, backward
compatibility is also implemented so that pool creation keeps working
on both releases while the auto-estimation behavior is preserved.
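
As a rough sketch of that backward-compatible behavior, assuming a
hypothetical create_pool helper (the real change lives in the chart
templates):

# Hypothetical helper illustrating version-aware pool creation.
import subprocess

def create_pool(name, pg_num, ceph_release):
    cmd = ["ceph", "osd", "pool", "create", name]
    if ceph_release == "nautilus":
        # Nautilus requires pg_num at pool creation [2].
        cmd.append(str(pg_num))
    # On Reef, pg_num is omitted so Ceph auto-estimates it per pool [1].
    subprocess.run(cmd, check=True)

create_pool("cinder-volumes", 32, ceph_release="reef")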

comparison:

simplex rook ceph without the patch:
POOL                   PG_NUM
.mgr                   32
kube-cephfs-metadata   16
kube-rbd               32
kube-cephfs-data       32
images                 64
cinder.backups         64
cinder-volumes         64

simplex rook ceph with the patch:
POOL                   PG_NUM
kube-rbd               32
.mgr                    1
kube-cephfs-metadata   16
kube-cephfs-data       32
images                 32
cinder.backups         32
cinder-volumes         32

simplex host ceph before and after the patch:
dumped pgs_brief
kube-rbd (1): 64 PGs
kube-cephfs-data (2): 64 PGs
kube-cephfs-metadata (3): 64 PGs
images (4): 64 PGs
cinder.backups (5): 64 PGs
cinder-volumes (6): 64 PGs

standard rook ceph without the patch:
POOL                   PG_NUM
kube-rbd               32
.mgr                    1
kube-cephfs-metadata   16
kube-cephfs-data       32
images                 32
cinder.backups         32
cinder-volumes         32

standard rook ceph with the patch:
POOL                   PG_NUM
kube-rbd               32
.mgr                    1
kube-cephfs-metadata   16
kube-cephfs-data       32
images                 32
cinder.backups         32
cinder-volumes         32

Test plan:
simplex:
  rook-ceph:
    PASS - build openstack
    PASS - apply openstack
    PASS - create VMs
    PASS - ping between VMs
    PASS - volume creation/backup creation
    PASS - validate alarm for total PGs
  host-ceph:
    PASS - build openstack
    PASS - apply openstack
    PASS - create VMs
    PASS - ping between VMs
    PASS - volume creation/backup creation
    PASS - validate alarm for total PGs
standard:
  rook-ceph:
    PASS - build openstack
    PASS - apply openstack
    PASS - create VMs
    PASS - ping between VMs
    PASS - volume creation/backup creation
    PASS - validate alarm for total PGs
  host-ceph:
    PASS - build openstack
    PASS - apply openstack
    PASS - create VMs
    PASS - ping between VMs
    PASS - volume creation/backup creation
    PASS - validate alarm for total PGs
miscellaneous:
    PASS - change pool pg_num through user-overrides
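
For reference, a hedged sketch of how such a pg_num user-override
could be applied; the conf.ceph.pools values path, the application
name, and the chart name are assumptions, not taken from the chart:

# Hypothetical sketch: apply a pg_num user-override via helm-override-update.
import subprocess
import tempfile

import yaml  # PyYAML assumed available

overrides = {
    "conf": {"ceph": {"pools": {"cinder-volumes": {"pg_num": 64}}}}
}

with tempfile.NamedTemporaryFile("w", suffix=".yaml", delete=False) as f:
    yaml.safe_dump(overrides, f)
    values_file = f.name

subprocess.run(
    ["system", "helm-override-update", "--values", values_file,
     "stx-openstack", "cinder", "openstack"],
    check=True,
)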

References:
[1] - https://docs.ceph.com/en/reef/rados/operations/placement-groups/#preselecting-pg-num
[2] - https://docs.ceph.com/en/nautilus/rados/operations/placement-groups/#a-preselection-of-pg-num

Closes-Bug: 2122620

Change-Id: I018f7302328c3789864d7f7875fe7d2b4b31f7ee
Signed-off-by: kgoncalv <kayo.goncalvesdacosta@windriver.com>