Adding RBD as a known store in the glance-api.conf file allows us to use
Ceph as a backend for Glance.
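For reference, a hedged sketch of the resulting RBD store settings in
glance-api.conf; section and option names follow the glance_store RBD
driver and may vary by release, and all values are illustrative:

    [glance_store]
    default_store = rbd
    stores = rbd
    # Illustrative pool, user and ceph.conf path.
    rbd_store_pool = images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf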
Closes-Bug: 1369578
Change-Id: I02cbafa68ca3293cedc9fef7535e79930cc4ee5c
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
CEPH_LOOPBACK_DISK_SIZE_DEFAULT should be more than 2GB to make the
volume snapshot feature work. 2GB is not enough because the minimum
Cinder volume size is 1GB, which leaves no room to create a snapshot
of that volume.
This also fixes the related Tempest tests and the experimental
check-tempest-dsvm-full-ceph gate job.
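As a hedged illustration, a localrc override leaving room for both a
1GB volume and its snapshot (the value shown is illustrative, not the
new default):

    # Any size comfortably above 2G works: a 1GB volume and its
    # snapshot must both fit on the loopback disk backing Ceph.
    CEPH_LOOPBACK_DISK_SIZE=4G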
Change-Id: Ifa41d0d1764d68ea02dcb32a5fc62f7f6282904d
The new lib installs a full Ceph cluster that can be managed by the
service init scripts. Ceph can also be installed standalone, without
any other components.
This implementation adds auto-configuration of the following services
to use Ceph:
* Glance
* Cinder
* Cinder backup
* Nova
To enable Ceph, simply add ENABLED_SERVICES+=,ceph to your localrc.
If you want to play with Ceph replication, you can use the
CEPH_REPLICAS option to set a replica count. This count is applied to
every pool (Glance, Cinder, Cinder backup and Nova). The size of the
loopback disk used by Ceph can also be controlled with the
CEPH_LOOPBACK_DISK_SIZE option, as in the sketch below.
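Putting this together, a minimal localrc sketch (the replica count
and disk size shown are illustrative values, not defaults):

    # Enable the Ceph service.
    ENABLED_SERVICES+=,ceph
    # Replica count applied to every Ceph pool (illustrative value).
    CEPH_REPLICAS=2
    # Size of the loopback disk backing Ceph (illustrative value).
    CEPH_LOOPBACK_DISK_SIZE=8G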
Going further, pools, users and PGs are configurable as well. The
convention is <SERVICE_NAME_IN_CAPITALS>_CEPH_<OPTION>, where the
services are GLANCE, CINDER, NOVA and CINDER_BAK. Let's take Cinder
as an example (a localrc sketch follows the list):
* CINDER_CEPH_POOL
* CINDER_CEPH_USER
* CINDER_CEPH_POOL_PG
* CINDER_CEPH_POOL_PGP
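A hedged localrc sketch of such overrides for Cinder (the pool name,
user and placement group counts are illustrative):

    # Per-service overrides following the
    # <SERVICE_NAME_IN_CAPITALS>_CEPH_<OPTION> convention.
    CINDER_CEPH_POOL=volumes
    CINDER_CEPH_USER=cinder
    CINDER_CEPH_POOL_PG=8
    CINDER_CEPH_POOL_PGP=8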
** Only works on Ubuntu Trusty, Fedora 19/20 or later **
Change-Id: Ifec850ba8e1e5263234ef428669150c76cfdb6ad
Implements: blueprint implement-ceph-backend
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>