d3d86bde80
The cephfs driver supports multiple CephFS filesystems on a cluster, but the capacity information for each CephFS filesystem is derived directly from the amount of storage space available on the cluster, so the reported capacity of every CephFS filesystem is the same. When multiple backends are configured against the same cluster, the scheduler treats the backends as equal, which results in one backend receiving all of the share creation requests and the other receiving none.

To fix this, have the driver report allocated_capacity_gb, alongside provisioned_capacity_gb, as the sum of the sizes of the shares assigned to that backend. Then, with thin provisioning enabled, the capacity weigher logic will treat the least-allocated CephFS backend as the better candidate for share creation, so share creations are split between the CephFS backends.

This information is now cached, and a new configuration option named ``cephfs_cached_allocated_capacity_update_interval`` was added to the driver, so that OpenStack operators can define how long they would like this information to be persisted. It defaults to 60 seconds.

Co-Authored-By: Carlos da Silva <ces.eduardo98@gmail.com>
Closes-Bug: #2049538
Change-Id: I141a5db9cf66327f68f1fa4c7f2bb72135171e43
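The caching behavior described above can be sketched roughly as follows. This is a minimal illustration, not the driver's actual code: the class and method names here are hypothetical, and only ``allocated_capacity_gb`` and the 60-second default interval come from the change itself.

```python
import time


class CephFSCapacityReporter:
    """Illustrative sketch: cache allocated_capacity_gb so it is
    recomputed at most once per update interval (defaulting to 60
    seconds, mirroring the new
    cephfs_cached_allocated_capacity_update_interval option)."""

    def __init__(self, update_interval=60):
        self.update_interval = update_interval
        self._cached_allocated_gb = None
        self._cache_timestamp = 0.0
        # Shares assigned to this backend: {share_id: size_gb}.
        self.shares = {}

    def allocated_capacity_gb(self, now=None):
        now = time.monotonic() if now is None else now
        if (self._cached_allocated_gb is None
                or now - self._cache_timestamp >= self.update_interval):
            # allocated_capacity_gb is the sum of the sizes of the
            # shares assigned to this backend.
            self._cached_allocated_gb = sum(self.shares.values())
            self._cache_timestamp = now
        return self._cached_allocated_gb


reporter = CephFSCapacityReporter(update_interval=60)
reporter.shares = {"share-a": 10, "share-b": 5}
first = reporter.allocated_capacity_gb(now=0)    # computes: 15
reporter.shares["share-c"] = 20
cached = reporter.allocated_capacity_gb(now=30)  # cache still valid: 15
fresh = reporter.allocated_capacity_gb(now=60)   # interval elapsed: 35
print(first, cached, fresh)
```

With a value cached this way, each backend reports a distinct allocated capacity, which is what lets the capacity weigher rank the less-allocated backend higher and spread share creations across backends.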