Merge "Remove defunct swift content"

This commit is contained in:
Jenkins 2014-12-25 12:40:38 +00:00 committed by Gerrit Code Review
commit 69ef4c94d5
3 changed files with 0 additions and 266 deletions

@ -1,54 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="adding-proxy-server">
<title>Add another proxy server</title>
<para>To provide additional reliability and bandwidth
to your cluster, you can add proxy servers. You can
set up an additional proxy node the same way
that you set up the first proxy node, but with
some extra configuration steps.</para>
<para>After you have more than one proxy server, you must
load balance them; your storage endpoint (the URL
that clients use to connect to your storage) also
changes. You can choose from several load balancing
strategies. For example, you could use round-robin
DNS, or a software or hardware load balancer (such
as pound) in front of the proxies. You can then
point your storage URL to the load balancer.
Configure an initial proxy node, and then complete
these steps to add more proxy
servers.</para>
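As a sketch of the load-balancer option, a minimal haproxy configuration that spreads requests across two proxy servers might look like the following. The IP addresses, server names, and port choices are assumptions for illustration; pound or round-robin DNS are equally valid choices.

```
# haproxy.cfg fragment (sketch): balance client traffic across two
# Object Storage proxy servers listening on port 8080.
frontend swift_proxy
    bind *:8080
    mode http
    default_backend swift_proxies

backend swift_proxies
    mode http
    balance roundrobin
    server proxy01 10.1.2.3:8080 check
    server proxy02 10.1.2.4:8080 check
```

Clients then use the load balancer's address as the storage endpoint instead of an individual proxy.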
<procedure>
<step>
<para>On each proxy server, update the list of
memcache servers in the
<filename>/etc/swift/proxy-server.conf</filename>
file. If you run multiple memcache servers,
list every <literal>IP:port</literal> pair,
comma-separated, in each proxy server
configuration file:</para>
<literallayout class="monospaced">[filter:cache]
use = egg:swift#memcache
memcache_servers = <replaceable>PROXY_LOCAL_NET_IP</replaceable>:11211</literallayout>
<para>For example, with two memcache servers:</para>
<literallayout class="monospaced">memcache_servers = 10.1.2.3:11211,10.1.2.4:11211</literallayout>
</step>
<step>
<para>Copy the ring information to all
nodes, including the new proxy nodes, and
verify that every storage node receives
it.</para>
</step>
<step>
<para>After you synchronize all nodes, make
sure that the admin keys are present in
<filename>/etc/swift</filename> and that
the ownership of the ring files is
correct.</para>
</step>
</procedure>
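The ring-distribution step above can be sketched as a small shell loop. The host names are placeholders, not part of the original guide; by default the script only prints the commands it would run, so set `DRY_RUN=0` on a real cluster.

```shell
#!/bin/sh
# Push the ring files to every node in the cluster (sketch).
# Host names below are assumptions; substitute your own.
DRY_RUN=${DRY_RUN:-1}   # default: print commands instead of running them
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

RINGS="account.ring.gz container.ring.gz object.ring.gz"
NODES="proxy02 storage01 storage02 storage03 storage04 storage05"

for node in $NODES; do
    for ring in $RINGS; do
        run scp "/etc/swift/$ring" "root@$node:/etc/swift/$ring"
    done
    # The ring files must be owned correctly on every node.
    run ssh "root@$node" "chown swift:swift /etc/swift/*.ring.gz"
done
```
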
</section>

@ -1,129 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="installing-openstack-object-storage">
<title>Install Object Storage</title>
<para>Though you can install OpenStack Object Storage for development or
testing purposes on one server, a multiple-server installation enables
the high availability and redundancy you want in a production
distributed object storage system.</para>
<para>To perform a single-node installation for development purposes from
source code, use the Swift All In One instructions (Ubuntu) or DevStack
(multiple distributions). See <link
xlink:href="http://swift.openstack.org/development_saio.html"
>http://swift.openstack.org/development_saio.html</link> for manual
instructions or <link xlink:href="http://devstack.org"
>http://devstack.org</link> for an all-in-one installation that includes
authentication with the Identity Service (keystone) v2.0 API.</para>
<section xml:id="before-you-begin-swift-install">
<title>Before you begin</title>
<para>Have a copy of the operating system installation media available
if you are installing on a new server.</para>
<para>These steps assume you have set up repositories for packages for
your operating system as shown in
<link linkend="basics-packages"/>.</para>
<para>This document demonstrates how to install a cluster by using the
following types of nodes:</para>
<itemizedlist>
<listitem>
<para>One proxy node that runs the
<systemitem class="service">swift-proxy-server</systemitem>
processes. The proxy server proxies requests to the
appropriate storage nodes.</para>
</listitem>
<listitem>
<para>
Five storage nodes that run the <systemitem
class="service">swift-account-server</systemitem>,
<systemitem
class="service">swift-container-server</systemitem>,
and <systemitem
class="service">swift-object-server</systemitem>
processes, which control storage of the account
databases, the container databases, and the
actual stored objects.</para>
</listitem>
</itemizedlist>
<note>
<para>Fewer storage nodes can be used initially, but a minimum of
five is recommended for a production cluster.</para>
</note>
</section>
<section xml:id="general-installation-steps-swift">
<title>General installation steps</title>
<procedure>
<step os="rhel;centos;fedora;opensuse;sles;ubuntu">
<para>Create a <literal>swift</literal> user that the Object
Storage Service can use to authenticate with the Identity
Service. Choose a password and specify an email address for
the <literal>swift</literal> user. Use the
<literal>service</literal> tenant and give the user the
<literal>admin</literal> role:</para>
<screen><prompt>$</prompt> <userinput>keystone user-create --name swift --pass <replaceable>SWIFT_PASS</replaceable></userinput>
<prompt>$</prompt> <userinput>keystone user-role-add --user swift --tenant service --role admin</userinput></screen>
<para>Replace <replaceable>SWIFT_PASS</replaceable> with a
suitable password.</para>
</step>
<step>
<para>Create a service entry for the Object Storage
Service:</para>
<screen><prompt>$</prompt> <userinput>keystone service-create --name swift --type object-store \
--description "OpenStack Object Storage"</userinput>
<computeroutput>+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | OpenStack Object Storage |
| id | eede9296683e4b5ebfa13f5166375ef6 |
| name | swift |
| type | object-store |
+-------------+----------------------------------+</computeroutput></screen>
<note>
<para>The service ID is randomly generated and is different
from the one shown here.</para>
</note>
</step>
<step>
<para>Specify an API endpoint for the Object Storage Service by
using the returned service ID. When you specify an endpoint,
you provide URLs for the public API, internal API, and admin
API. In this guide, the <literal>controller</literal> host
name is used:</para>
<screen><prompt>$</prompt> <userinput>keystone endpoint-create \
--service-id $(keystone service-list | awk '/ object-store / {print $2}') \
--publicurl 'http://<replaceable>controller</replaceable>:8080/v1/AUTH_%(tenant_id)s' \
--internalurl 'http://<replaceable>controller</replaceable>:8080/v1/AUTH_%(tenant_id)s' \
--adminurl http://<replaceable>controller</replaceable>:8080 \
--region regionOne</userinput>
<computeroutput>+-------------+---------------------------------------------------+
| Property | Value |
+-------------+---------------------------------------------------+
| adminurl | http://controller:8080/ |
| id | 9e3ce428f82b40d38922f242c095982e |
| internalurl | http://controller:8080/v1/AUTH_%(tenant_id)s |
| publicurl | http://controller:8080/v1/AUTH_%(tenant_id)s |
| region | regionOne |
| service_id | eede9296683e4b5ebfa13f5166375ef6 |
+-------------+---------------------------------------------------+</computeroutput></screen>
</step>
<step>
<para>Create the configuration directory on all nodes:</para>
<screen><prompt>#</prompt> <userinput>mkdir -p /etc/swift</userinput></screen>
</step>
<step>
<para>Create <filename>/etc/swift/swift.conf</filename> on all
nodes:</para>
<programlisting language="ini"><xi:include parse="text" href="../samples/swift.conf.txt"/></programlisting>
</step>
</procedure>
<note>
<para>Set the prefix and suffix values in
<filename>/etc/swift/swift.conf</filename> to random
strings of text; they are used as salts when hashing
to determine mappings in the ring. This file must be
the same on every node in the cluster.</para>
</note>
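One way to produce suitable random salts is with openssl. This is a sketch, not the guide's prescribed method: it writes to /tmp so the example is harmless to run; on a real node the target file is /etc/swift/swift.conf.

```shell
# Generate random hash-path salts and write a swift.conf fragment.
# Written to /tmp here; use /etc/swift/swift.conf on a real node and
# copy the identical file to every node in the cluster.
SUFFIX=$(openssl rand -hex 16)
PREFIX=$(openssl rand -hex 16)
cat > /tmp/swift.conf <<EOF
[swift-hash]
swift_hash_path_suffix = $SUFFIX
swift_hash_path_prefix = $PREFIX
EOF
```
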
<para>Next, set up your storage nodes and proxy node. This example uses
the Identity Service for the common authentication piece.</para>
</section>
</section>

@ -1,83 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="object-storage-network-planning">
<title>Plan networking for Object Storage</title>
<para>This section offers recommendations and required
minimum sizes for the networks and public IP addresses
that provide access to the APIs and to the storage
network, both to conserve network resources and to
ensure that network administrators understand these
needs. A throughput of at least 1000 Mbps is
suggested.</para>
<para>This guide describes the following networks:<itemizedlist>
<listitem>
<para>A mandatory public network. Connects to the proxy
server.</para>
</listitem>
<listitem>
<para>A mandatory storage network. Not accessible from outside
the cluster. All nodes connect to this network.</para>
</listitem>
<listitem>
<para>An optional replication network. Not accessible from
outside the cluster. Dedicated to replication traffic among
storage nodes. Must be configured in the Ring.</para>
</listitem>
</itemizedlist></para>
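When a dedicated replication network is used, each device entry in the ring carries a second, replication-specific IP address and port; with swift-ring-builder the two addresses are joined with an <literal>R</literal>. The following is a sketch only: the builder file, zone, ports, device name, and weight are assumptions, not values from this guide.

```
swift-ring-builder object.builder add \
    z1-STORAGE_LOCAL_NET_IP:6000RSTORAGE_REPLICATION_NET_IP:6000/sdb1 100
```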
<para>This figure shows the basic architecture for the public
network, the storage network, and the optional replication
network.</para>
<para><inlinemediaobject>
<imageobject>
<imagedata
fileref="../figures/networking-interactions-swift.png"
/>
</imageobject>
</inlinemediaobject></para>
<para>By default, all of the OpenStack Object Storage services, as
well as the rsync daemon on the storage nodes, are configured to
listen on their <literal>STORAGE_LOCAL_NET</literal> IP
addresses.</para>
<para>If you configure a replication network in the Ring, the
Account, Container, and Object servers listen on both the
<literal>STORAGE_LOCAL_NET</literal> and
<literal>STORAGE_REPLICATION_NET</literal> IP addresses. The
rsync daemon only listens on the
<literal>STORAGE_REPLICATION_NET</literal> IP address.</para>
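For example, the rsync daemon can be bound to the replication address through <filename>/etc/rsyncd.conf</filename>. This is a fragment for illustration; the module name and path are assumptions based on a typical storage-node layout, not values from this guide.

```
# /etc/rsyncd.conf fragment (sketch): bind rsync to the replication
# network so replication traffic stays off the storage network.
uid = swift
gid = swift
address = STORAGE_REPLICATION_NET_IP

[object]
path = /srv/node
read only = false
```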
<variablelist>
<varlistentry>
<term>Public Network (Publicly routable IP range)</term>
<listitem>
<para>Provides public IP accessibility to the API endpoints
within the cloud infrastructure.</para>
<para>Minimum size: one IP address for each proxy
server.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Storage Network (RFC1918 IP Range, not publicly
routable)</term>
<listitem>
<para>Manages all inter-server communications within the
Object Storage infrastructure.</para>
<para>Minimum size: one IP address for each storage node and
proxy server.</para>
<para>Recommended size: as above, with room to expand
to the largest cluster size you anticipate. For
example, 255 addresses or a /24 CIDR
block.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Replication Network (RFC1918 IP Range, not publicly
routable)</term>
<listitem>
<para>Manages replication-related communications among storage
servers within the Object Storage infrastructure.</para>
<para>Recommended size: as for
<literal>STORAGE_LOCAL_NET</literal>.</para>
</listitem>
</varlistentry>
</variablelist>
</section>