devstack/lib/cinder_backends/ceph
Sébastien Han 36f2f024db Implement Ceph backend for Glance / Cinder / Nova
The new lib installs a full Ceph cluster. It can be managed by the
service init scripts. Ceph can also be installed standalone, without
any other OpenStack components.

This implementation adds auto-configuration of the following services
to use Ceph:

* Glance
* Cinder
* Cinder backup
* Nova

To enable Ceph, simply add ENABLED_SERVICES+=,ceph to your localrc.
If you want to play with Ceph replication, you can use the
CEPH_REPLICAS option to set the replica count; it is applied to every
pool (Glance, Cinder, Cinder backup and Nova). The size of the
loopback disk used for Ceph can also be set with the
CEPH_LOOPBACK_DISK_SIZE option.
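
A minimal localrc sketch enabling Ceph with these options (the replica
count and disk size shown are illustrative values, not defaults):

    ENABLED_SERVICES+=,ceph
    CEPH_REPLICAS=2
    CEPH_LOOPBACK_DISK_SIZE=8G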

Going further, pools, users and PGs are configurable as well. The
convention is <SERVICE_NAME_IN_CAPITALS>_CEPH_<OPTION>, where the
service is one of GLANCE, CINDER, NOVA or CINDER_BAK. Let's take
Cinder as an example (see the snippet after this list):

* CINDER_CEPH_POOL
* CINDER_CEPH_USER
* CINDER_CEPH_POOL_PG
* CINDER_CEPH_POOL_PGP
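
For instance, these could be overridden in localrc as follows (the pool
name, user and PG counts below are arbitrary examples):

    CINDER_CEPH_POOL=volumes
    CINDER_CEPH_USER=cinder
    CINDER_CEPH_POOL_PG=64
    CINDER_CEPH_POOL_PGP=64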

** Only works on Ubuntu Trusty, Fedora 19/20 or later **

Change-Id: Ifec850ba8e1e5263234ef428669150c76cfdb6ad
Implements: blueprint implement-ceph-backend
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
2014-07-23 16:13:45 +02:00


# lib/cinder_backends/ceph
# Configure the ceph backend

# Enable with:
#
#   CINDER_ENABLED_BACKENDS+=,ceph:ceph
#
# Optional parameters:
#   CINDER_BAK_CEPH_POOL=<pool-name>
#   CINDER_BAK_CEPH_USER=<user>
#   CINDER_BAK_CEPH_POOL_PG=<pg-num>
#   CINDER_BAK_CEPH_POOL_PGP=<pgp-num>
#
# Dependencies:
#
# - ``functions`` file
# - ``cinder`` configurations
#
# configure_cinder_backend_ceph - called from configure_cinder()

# Save trace setting
MY_XTRACE=$(set +o | grep xtrace)
set +o xtrace

# Defaults
# --------

CINDER_BAK_CEPH_POOL=${CINDER_BAK_CEPH_POOL:-backups}
CINDER_BAK_CEPH_POOL_PG=${CINDER_BAK_CEPH_POOL_PG:-8}
CINDER_BAK_CEPH_POOL_PGP=${CINDER_BAK_CEPH_POOL_PGP:-8}
CINDER_BAK_CEPH_USER=${CINDER_BAK_CEPH_USER:-cinder-bak}


# Entry Points
# ------------

# configure_cinder_backend_ceph - Set config files, create data dirs, etc
# configure_cinder_backend_ceph $name
function configure_cinder_backend_ceph {
    local be_name=$1

    # RBD driver settings for this volume backend
    iniset $CINDER_CONF $be_name volume_backend_name $be_name
    iniset $CINDER_CONF $be_name volume_driver "cinder.volume.drivers.rbd.RBDDriver"
    iniset $CINDER_CONF $be_name rbd_ceph_conf "$CEPH_CONF_FILE"
    iniset $CINDER_CONF $be_name rbd_pool "$CINDER_CEPH_POOL"
    iniset $CINDER_CONF $be_name rbd_user "$CINDER_CEPH_USER"
    iniset $CINDER_CONF $be_name rbd_uuid "$CINDER_CEPH_UUID"
    iniset $CINDER_CONF $be_name rbd_flatten_volume_from_snapshot False
    iniset $CINDER_CONF $be_name rbd_max_clone_depth 5
    iniset $CINDER_CONF DEFAULT glance_api_version 2

    if is_service_enabled c-bak; then
        # Configure the Cinder backup service: create the ceph pool, user and key
        sudo ceph -c ${CEPH_CONF_FILE} osd pool create ${CINDER_BAK_CEPH_POOL} ${CINDER_BAK_CEPH_POOL_PG} ${CINDER_BAK_CEPH_POOL_PGP}
        sudo ceph -c ${CEPH_CONF_FILE} osd pool set ${CINDER_BAK_CEPH_POOL} size ${CEPH_REPLICAS}
        if [[ $CEPH_REPLICAS -ne 1 ]]; then
            sudo ceph -c ${CEPH_CONF_FILE} osd pool set ${CINDER_BAK_CEPH_POOL} crush_ruleset ${RULE_ID}
        fi
        sudo ceph -c ${CEPH_CONF_FILE} auth get-or-create client.${CINDER_BAK_CEPH_USER} mon "allow r" osd "allow class-read object_prefix rbd_children, allow rwx pool=${CINDER_BAK_CEPH_POOL}" | sudo tee ${CEPH_CONF_DIR}/ceph.client.${CINDER_BAK_CEPH_USER}.keyring
        sudo chown $(whoami):$(whoami) ${CEPH_CONF_DIR}/ceph.client.${CINDER_BAK_CEPH_USER}.keyring

        iniset $CINDER_CONF DEFAULT backup_driver "cinder.backup.drivers.ceph"
        iniset $CINDER_CONF DEFAULT backup_ceph_conf "$CEPH_CONF_FILE"
        iniset $CINDER_CONF DEFAULT backup_ceph_pool "$CINDER_BAK_CEPH_POOL"
        iniset $CINDER_CONF DEFAULT backup_ceph_user "$CINDER_BAK_CEPH_USER"
        iniset $CINDER_CONF DEFAULT backup_ceph_stripe_unit 0
        iniset $CINDER_CONF DEFAULT backup_ceph_stripe_count 0
        iniset $CINDER_CONF DEFAULT restore_discard_excess_bytes True
    fi
}
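
# A rough sketch (assuming the defaults above and a backend named "ceph",
# i.e. CINDER_ENABLED_BACKENDS+=,ceph:ceph) of the section the function
# above is expected to write into $CINDER_CONF:
#
#   [ceph]
#   volume_backend_name = ceph
#   volume_driver = cinder.volume.drivers.rbd.RBDDriver
#   rbd_ceph_conf = <CEPH_CONF_FILE>
#   rbd_pool = <CINDER_CEPH_POOL>
#   rbd_user = <CINDER_CEPH_USER>
#   rbd_uuid = <CINDER_CEPH_UUID>
#   rbd_flatten_volume_from_snapshot = False
#   rbd_max_clone_depth = 5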

# Restore xtrace
$MY_XTRACE

# Local variables:
# mode: shell-script
# End: