commit 5b2cebfdc4

This change adds functionality to allow the Ceph OSD cluster to upgrade in a serial, rolling fashion. It uses the ceph monitor cluster as a lock, allowing only one Ceph OSD server at a time to upgrade. The upgrade is initiated by setting a new `source` config value for the service, which prompts the OSD cluster to upgrade to that source and restart all OSD processes, server by server. If an OSD server has been waiting on a previous server for more than 10 minutes without seeing it finish, it assumes that server died during the upgrade and proceeds with its own upgrade.

I had to modify the amulet test slightly to use the ceph-mon charm instead of the default ceph charm. I also changed the test so that it uses 3 ceph-osd servers instead of 1.

Limitations of this patch: if the OSD failure domain has been set to osd, this patch will cause brief temporary outages while OSD processes are being restarted. Future work will handle this case.

Change-Id: Id9f89241f3aebe4886310e9b208bcb19f88e1e3e
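As an illustration of how an operator would trigger this rolling upgrade (the target source string below, `cloud:trusty-mitaka`, is a hypothetical example; use whatever package origin your deployment tracks)::

    juju set ceph-osd source=cloud:trusty-mitaka

Each OSD unit then takes its turn acquiring the upgrade lock via the monitor cluster, upgrades its packages, restarts its OSD processes, and releases the lock for the next unit.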
Overview
Ceph is a distributed storage and network file system designed to provide excellent performance, reliability, and scalability.
This charm deploys additional Ceph OSD storage service units and should be used in conjunction with the 'ceph' charm to scale out the amount of storage available in a Ceph cluster.
Usage
The charm supports specification of the storage devices to use in the Ceph cluster::
    osd-devices:
        A list of devices that the charm will attempt to detect, initialise
        and activate as Ceph storage.

        This can be a superset of the actual storage devices presented to
        each service unit and can be changed post ceph-osd deployment using
        `juju set` (see the example after the configuration sample below).

For example::

    ceph-osd:
      osd-devices: /dev/vdb /dev/vdc /dev/vdd /dev/vde
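As a sketch of the post-deployment case (assuming a Juju 1.x client, where `juju set` changes a service's configuration in place), the device list can be updated on a running deployment like so::

    juju set ceph-osd "osd-devices=/dev/vdb /dev/vdc /dev/vdd /dev/vde /dev/vdf"

Because the list may be a superset, as noted above, devices not actually present on a given unit are simply skipped.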
Boot things up by using::

    juju deploy -n 3 --config ceph.yaml ceph
You can then deploy this charm by simply doing::

    juju deploy -n 10 --config ceph.yaml ceph-osd
    juju add-relation ceph-osd ceph
Once the ceph charm has bootstrapped the cluster, it will notify the ceph-osd charm which will scan for the configured storage devices and add them to the pool of available storage.
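To sanity-check the result (hypothetical unit name; `juju ssh` runs a command on a deployed unit), you can ask a monitor for the cluster status and confirm the expected number of OSDs are in and up::

    juju ssh ceph/0 "sudo ceph -s"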
Contact Information
Author: James Page <james.page@ubuntu.com>
Report bugs at: http://bugs.launchpad.net/charms/+source/ceph-osd/+filebug
Location: http://jujucharms.com/charms/ceph-osd