ab93e636b8
Brief Summary: Added Modules and Lab Sections of Aptira's existing OpenStack training docs. Please refer to the Full Summary for more details. For those who want to review this and save some time on building it, I have hosted the content on http://office.aptira.in Please talk to Sean Roberts if you are concerned about repetition of doc content or similar issues such as short URLs; this is intended as a rough patch, not a final one.

Full Summary: Added the following modules.
1. Module001 - Introduction to OpenStack.
   - Brief overview of OpenStack.
   - Basic concepts.
   - Detailed description of the core projects (Grizzly) under OpenStack, all but Networking and Swift.
2. Module002 - OpenStack Networking in detail.
3. Module003 - OpenStack Object Storage in detail.
4. Lab001 - OpenStack control node and compute node.
5. Lab002 - OpenStack network node.

Full Summary added due to the size of the commit. I apologize for the size of this commit and will try not to commit content this large again; the reason for the size is to meet the OpenStack Training Sprint day.

bp/training-manuals

Change-Id: Ie3c44527992868b4d9571b66cc1c048e558ec669
<?xml version="1.0" encoding="utf-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:xlink="http://www.w3.org/1999/xlink"
    version="5.0"
    xml:id="module003-ch007-cluster-architecture">
    <title>Cluster Architecture</title>
    <para><guilabel>Access Tier</guilabel></para>
    <figure>
        <title>Swift Cluster Architecture</title>
        <mediaobject>
            <imageobject>
                <imagedata fileref="figures/image47.png"/>
            </imageobject>
        </mediaobject>
    </figure>
    <para>Large-scale deployments segment off an "Access Tier".
        This tier is the “Grand Central” of the Object Storage
        system. It fields incoming API requests from clients and
        moves data in and out of the system. This tier is composed
        of front-end load balancers, SSL terminators, and
        authentication services, and it runs the (distributed)
        brain of the Object Storage system: the proxy server
        processes.</para>
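    <para>As an illustration only, the proxy processes in this tier
        are driven by <filename>proxy-server.conf</filename>. The
        minimal sketch below assumes Keystone authentication; the
        bind port, memcached address, tenant name, user, and
        password are placeholders, not values from this guide.</para>
    <programlisting># /etc/swift/proxy-server.conf (illustrative sketch only)
[DEFAULT]
bind_port = 8080
user = swift

[pipeline:main]
pipeline = healthcheck cache authtoken keystoneauth proxy-server

[app:proxy-server]
use = egg:swift#proxy
account_autocreate = true

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:cache]
use = egg:swift#memcache
memcache_servers = 127.0.0.1:11211

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin, swiftoperator

[filter:authtoken]
# Points at the Keystone service; host and credentials are placeholders
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 192.168.0.10
admin_tenant_name = service
admin_user = swift
admin_password = SWIFT_PASS</programlisting>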
    <para>Having the access servers in their own tier enables
        read/write access to be scaled out independently of
        storage capacity. For example, if the cluster is on the
        public Internet, requires SSL termination, and has high
        demand for data access, many access servers can be
        provisioned. However, if the cluster is on a private
        network and is being used primarily for archival
        purposes, fewer access servers are needed.</para>
    <para>As this is an HTTP-addressable storage service, a load
        balancer can be incorporated into the access tier.</para>
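    <para>Because every operation is a plain HTTP request, any HTTP
        client can exercise the service through that load balancer.
        The following sketch assumes a token has already been
        obtained from the authentication service, and that the
        balancer answers at the hypothetical endpoint
        <literal>swift.example.com</literal>; the
        <literal>AUTH_tenant</literal> account path is likewise a
        placeholder.</para>
    <programlisting># Create a container, then upload and fetch an object (illustrative only)
curl -i -X PUT -H "X-Auth-Token: $TOKEN" \
    https://swift.example.com/v1/AUTH_tenant/backups

curl -i -H "X-Auth-Token: $TOKEN" -T ./report.pdf \
    https://swift.example.com/v1/AUTH_tenant/backups/report.pdf

curl -i -H "X-Auth-Token: $TOKEN" \
    https://swift.example.com/v1/AUTH_tenant/backups/report.pdf</programlisting>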
    <para>Typically, this tier comprises a collection of 1U
        servers. These machines use a moderate amount of RAM and
        are network I/O intensive. As these systems field each
        incoming API request, it is wise to provision them with
        two high-throughput (10GbE) interfaces. One interface is
        used for 'front-end' incoming requests and the other for
        'back-end' access to the object storage nodes to put and
        fetch data.</para>
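    <para>One hedged way to express that split is to bind the proxy
        to the front-end interface while the rings carry only
        back-end addresses for the storage nodes; the addresses
        below are invented for illustration.</para>
    <programlisting># proxy-server.conf fragment: listen on the client-facing 10GbE interface
[DEFAULT]
bind_ip = 172.16.1.11
bind_port = 8080
# Storage nodes are reached over the separate back-end network
# (for example 10.0.1.0/24) via the addresses recorded in the rings.</programlisting>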
    <para><guilabel>Factors to Consider</guilabel></para>
    <para>For most publicly facing deployments, as well as
        private deployments available across a wide-reaching
        corporate network, SSL will be used to encrypt traffic
        to the client. SSL adds significant processing load to
        establish sessions between clients and the cluster, so
        more capacity will need to be provisioned in the access
        layer. SSL may not be required for private deployments
        on trusted networks.</para>
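    <para>As one possible illustration, SSL can be terminated on the
        load balancer so that the proxies speak plain HTTP
        internally. The HAProxy fragment below is a sketch only (it
        assumes an HAProxy build with SSL support; the certificate
        path, names, and addresses are placeholders), and dedicated
        terminators such as Pound or stud can fill the same
        role.</para>
    <programlisting>frontend swift_https
    # Terminate SSL here; the backends receive unencrypted HTTP
    bind *:443 ssl crt /etc/haproxy/certs/swift.pem
    default_backend swift_proxies

backend swift_proxies
    balance roundrobin
    server proxy01 172.16.1.11:8080 check
    server proxy02 172.16.1.12:8080 check</programlisting>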
    <para><guilabel>Storage Nodes</guilabel></para>
    <figure>
        <title>Object Storage (Swift)</title>
        <mediaobject>
            <imageobject>
                <imagedata fileref="figures/image48.png"/>
            </imageobject>
        </mediaobject>
    </figure>
    <para>The next component is the storage servers themselves.
        Generally, most configurations should provide each of the
        five zones with an equal amount of storage capacity.
        Storage nodes use a reasonable amount of memory and CPU.
        Metadata needs to be readily available to quickly return
        objects. The object stores run services not only to field
        incoming requests from the Access Tier, but also to run
        replicators, auditors, and reapers. Object stores can be
        provisioned with single gigabit or 10 gigabit network
        interfaces depending on the expected workload and desired
        performance.</para>
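    <para>For illustration, a three-replica object ring spread
        across five zones of equal weight could be built as follows.
        The IP addresses, ports, device names, and weights are
        placeholders; only the
        <command>swift-ring-builder</command> workflow itself is the
        point here.</para>
    <programlisting># 2^18 partitions, 3 replicas, minimum 1 hour between moves of a partition
swift-ring-builder object.builder create 18 3 1

# One device per zone, all with equal weight
swift-ring-builder object.builder add z1-10.0.1.21:6000/sdb1 100
swift-ring-builder object.builder add z2-10.0.1.22:6000/sdb1 100
swift-ring-builder object.builder add z3-10.0.1.23:6000/sdb1 100
swift-ring-builder object.builder add z4-10.0.1.24:6000/sdb1 100
swift-ring-builder object.builder add z5-10.0.1.25:6000/sdb1 100

# Distribute partitions across the devices
swift-ring-builder object.builder rebalance</programlisting>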
    <para>Currently, 2 TB or 3 TB SATA disks deliver good
        price/performance value. Desktop-grade drives can be used
        where there are responsive remote hands in the datacenter,
        and enterprise-grade drives can be used where this is not
        the case.</para>
    <para><guilabel>Factors to Consider</guilabel></para>
    <para>Desired I/O performance for single-threaded requests
        should be kept in mind. This system does not use RAID,
        so each request for an object is handled by a single
        disk. Disk performance impacts single-threaded
        response rates.</para>
    <para>To achieve higher apparent throughput, the object
        storage system is designed with concurrent
        uploads/downloads in mind. The network I/O capacity
        (1GbE, bonded 1GbE pair, or 10GbE) should match your
        desired concurrent throughput needs for reads and
        writes.</para>
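    <para>As a rough sizing example, and assuming ideal conditions,
        a bonded 1GbE pair offers about 2 Gb/s, or roughly 250 MB/s
        of aggregate throughput per node; if the expected concurrent
        read and write demand is higher than that, 10GbE interfaces
        are the better fit.</para>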
</chapter>