<section xml:id="ceph-rados" xmlns="http://docbook.org/ns/docbook"
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
    <title>Ceph RADOS Block Device (RBD)</title>
    <para>If you use KVM or QEMU as your hypervisor, you can configure
        the Compute service to use <link
        xlink:href="http://ceph.com/ceph-storage/block-storage/">
        Ceph RADOS block devices (RBD)</link> for volumes.</para>
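    <para>For example, a minimal <filename>cinder.conf</filename> sketch that
        enables the RBD driver might look like the following. The pool name,
        Ceph user, and secret UUID shown here are assumptions; replace them
        with the values for your deployment:</para>
    <programlisting language="ini"># Hypothetical values; adjust for your cluster
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337</programlisting>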
    <para>Ceph is a massively scalable, open source, distributed storage system. It comprises
        an object store, a block store, and a POSIX-compliant distributed file system. The platform
        can auto-scale to the exabyte level and beyond. It runs on commodity hardware, is
        self-healing and self-managing, and has no single point of failure. Ceph is in the Linux
        kernel and is integrated with the OpenStack cloud operating system. Because it is open
        source, you can install and use this portable storage platform in public or private
        clouds. <figure>
            <title>Ceph architecture</title>
            <mediaobject>
                <imageobject>
                    <imagedata fileref="../../../common/figures/ceph/ceph-architecture.png"
                        contentwidth="6in"/>
                </imageobject>
            </mediaobject>
        </figure>
    </para>
    <simplesect>
        <title>RADOS</title>
        <para>Ceph is based on <emphasis>RADOS: Reliable Autonomic Distributed Object
            Store</emphasis>. RADOS distributes objects across the storage cluster and
            replicates objects for fault tolerance. RADOS contains the following major
            components:</para>
        <itemizedlist>
            <listitem>
                <para><emphasis>Object Storage Device (OSD) Daemon</emphasis>. The storage daemon
                    for the RADOS service, which interacts with the OSD (physical or logical
                    storage unit for your data).</para>
                <para>You must run this daemon on each server in your cluster. For each OSD, you
                    can have an associated hard disk drive. For performance purposes, pool your
                    hard disk drives with RAID arrays, logical volume management (LVM), or B-tree
                    file system (<systemitem>Btrfs</systemitem>) pooling. By default, the
                    following pools are created: data, metadata, and RBD.</para>
            </listitem>
            <listitem>
                <para><emphasis>Metadata Server (MDS)</emphasis>. Stores metadata. MDSs build a
                    POSIX file system on top of objects for Ceph clients. However, if you do not
                    use the Ceph file system, you do not need a metadata server.</para>
            </listitem>
            <listitem>
                <para><emphasis>Monitor (MON)</emphasis>. A lightweight daemon that handles all
                    communications with external applications and clients. It also provides
                    consensus for distributed decision making in a Ceph/RADOS cluster. For
                    instance, when you mount a Ceph share on a client, you point to the address
                    of a MON server. It checks the state and the consistency of the data. In an
                    ideal setup, you run at least three <code>ceph-mon</code> daemons on
                    separate servers (see the configuration sketch after this list).</para>
            </listitem>
        </itemizedlist>
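        <para>The following <filename>ceph.conf</filename> fragment is a minimal
            sketch of a three-monitor layout. The host names and addresses are
            hypothetical; substitute your own:</para>
        <programlisting language="ini"># Hypothetical hosts and addresses
[mon.a]
    host = ceph-node1
    mon addr = 192.168.0.10:6789
[mon.b]
    host = ceph-node2
    mon addr = 192.168.0.11:6789
[mon.c]
    host = ceph-node3
    mon addr = 192.168.0.12:6789</programlisting>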
        <para>Ceph developers recommend that you use <systemitem>Btrfs</systemitem> as a file
            system for storage. XFS is an excellent alternative to Btrfs and might be a better
            choice for production environments. The ext4 file system is also compatible but does
            not exploit the full power of Ceph.</para>
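        <para>As a sketch, you might format a dedicated OSD disk with XFS as follows
            (the device name <filename>/dev/sdb</filename> is hypothetical):</para>
        <screen><prompt>#</prompt> <userinput>mkfs.xfs -f /dev/sdb</userinput></screen>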
        <note>
            <para>If using <systemitem>Btrfs</systemitem>, ensure that you use the correct version
                (see <link xlink:href="http://ceph.com/docs/master/start/os-recommendations/">Ceph
                Dependencies</link>).</para>
            <para>For more information about usable file systems, see <link
                xlink:href="http://ceph.com/ceph-storage/file-system/"
                >ceph.com/ceph-storage/file-system/</link>.</para>
        </note>
    </simplesect>
    <simplesect>
        <title>Ways to store, use, and expose data</title>
        <para>To store and access your data, you can use the following
            storage systems:</para>
        <itemizedlist>
            <listitem>
                <para><emphasis>RADOS</emphasis>. Use as an object store, the
                    default storage mechanism.</para>
            </listitem>
            <listitem>
                <para><emphasis>RBD</emphasis>. Use as a block device.
                    The Linux kernel RBD (RADOS block device) driver
                    allows striping a Linux block device over multiple
                    objects in the distributed object store. It is
                    compatible with the KVM RBD image. A short usage
                    sketch follows this list.</para>
            </listitem>
            <listitem>
                <para><emphasis>CephFS</emphasis>. Use as a
                    POSIX-compliant distributed file system.</para>
            </listitem>
        </itemizedlist>
        <para>Ceph exposes RADOS; you can access it through the following interfaces:</para>
        <itemizedlist>
            <listitem>
                <para><emphasis>RADOS Gateway</emphasis>. A RESTful interface that is
                    compatible with OpenStack Object Storage and Amazon S3 (see <link
                    xlink:href="http://ceph.com/wiki/RADOS_Gateway"
                    >RADOS_Gateway</link>).</para>
            </listitem>
            <listitem>
                <para><emphasis>librados</emphasis> and its related C/C++ bindings.</para>
            </listitem>
            <listitem>
                <para><emphasis>RBD and QEMU-RBD</emphasis>. Linux
                    kernel and QEMU block devices that stripe data
                    across multiple objects.</para>
            </listitem>
        </itemizedlist>
    </simplesect>
    <simplesect>
        <title>Driver options</title>
        <para>The following table contains the configuration options
            supported by the Ceph RADOS Block Device driver.</para>
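        <para>After you set the driver options, a quick way to verify the
            configuration is to create a volume and check that it appears in the
            Ceph pool. This sketch assumes a pool named
            <literal>volumes</literal>; the volume name and the UUID in the
            output are illustrative:</para>
        <screen><prompt>$</prompt> <userinput>cinder create --display-name test-volume 1</userinput>
<prompt>$</prompt> <userinput>rbd -p volumes ls</userinput>
<computeroutput>volume-4f59a457-f120-46d2-b49d-d5e2d02b48a7</computeroutput></screen>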
        <xi:include
            href="../../../common/tables/cinder-storage_ceph.xml"/>
    </simplesect>
</section>