<?xml version="1.0"?>
<!-- Converted by db4-upgrade version 1.0 -->
<section xmlns="http://docbook.org/ns/docbook" version="5.0-extension RaxBook-1.0" xml:id="instructions-for-a-multiple-server-swift-installation-ubuntu"><info><title>Instructions for a Multiple Server Swift Installation
(Ubuntu)</title></info>
<section xml:id="prerequisites"><info><title>Prerequisites</title></info>
<itemizedlist>
<listitem>
<para>
Ubuntu Server 10.04 LTS installation media
</para>
</listitem>
</itemizedlist>
<note><para>Swift can run on other distributions, but this document
focuses on installing on Ubuntu Server; your packaging may
vary.</para></note>
<para>
Basic architecture and terms:
</para>
<itemizedlist>
<listitem>
<para>
<emphasis>node</emphasis>: a host machine running one or more Swift services
</para>
</listitem>
<listitem>
<para>
<emphasis>Proxy node</emphasis>: node that runs Proxy services; also runs TempAuth
</para>
</listitem>
<listitem>
<para>
<emphasis>Storage node</emphasis>: node that runs Account, Container, and Object services
</para>
</listitem>
<listitem>
<para>
<emphasis>ring</emphasis>: a set of mappings of Swift data to physical devices
</para>
</listitem>
</itemizedlist>
<para>
This document shows a cluster using the following types of nodes:
</para>
<itemizedlist>
<listitem>
<para>
one Proxy node
</para>
<itemizedlist>
<listitem>
<para>
Runs the swift-proxy-server processes, which proxy requests
to the appropriate Storage nodes. The proxy server also
contains the TempAuth service as WSGI middleware.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
five Storage nodes
</para>
<itemizedlist>
<listitem>
<para>
Runs the swift-account-server, swift-container-server, and
swift-object-server processes, which control storage of the
account databases, the container databases, and the actual
stored objects.
</para>
</listitem>
</itemizedlist>
</listitem>
</itemizedlist>
<note><para>Fewer Storage nodes can be used initially, but a minimum of 5 is recommended for a production cluster.</para></note>
<para>
This document describes each Storage node as a separate zone in
the ring. It is recommended to have a minimum of 5 zones. A zone
is a group of nodes that is as isolated as possible from other
nodes (separate servers, network, power, even geography). The ring
guarantees that every replica is stored in a separate zone. For
more information about the ring and zones, see: <link linkend="the-ring">The Ring</link>.
</para>
<para>
To increase reliability and performance, you can add additional
Proxy servers, as described in
<link linkend="adding-a-proxy-server">Adding a Proxy Server</link>.
</para>
</section>
<section xml:id="network-setup-notes"><info><title>Network Setup Notes</title></info>
<para>
This document refers to two networks: an external network for
connecting to the Proxy server, and a storage network that is not
accessible from outside the cluster, to which all of the nodes
are connected. All of the Swift services, as well as the rsync
daemon on the Storage nodes, are configured to listen on their
STORAGE_LOCAL_NET IP addresses.
</para>
<note><para>Run all commands as the root user.</para></note>
</section>
<section xml:id="general-os-configuration-and-partitioning-for-each-node"><info><title>General OS configuration and partitioning for each
node</title></info>
<orderedlist>
<listitem>
<para>
Install the baseline Ubuntu Server 10.04 LTS on all nodes.
</para>
</listitem>
<listitem>
<para>
Install common Swift software prereqs:
</para>
<screen>
apt-get install python-software-properties
add-apt-repository ppa:swift-core/release
apt-get update
apt-get install swift openssh-server
</screen>
</listitem>
<listitem>
<para>
Create and populate configuration directories:
</para>
<screen>
mkdir -p /etc/swift
chown -R swift:swift /etc/swift/
</screen>
</listitem>
<listitem>
<para>
On the first node only, create /etc/swift/swift.conf:
</para>
<screen>
cat &gt;/etc/swift/swift.conf &lt;&lt;EOF
[swift-hash]
# random unique string that can never change (DO NOT LOSE)
swift_hash_path_suffix = `od -t x8 -N 8 -A n &lt;/dev/random`
EOF
</screen>
</listitem>
<listitem>
<para>
On the second and subsequent nodes, copy that file over. It
must be the same on every node in the cluster:
</para>
<screen>
scp firstnode.example.com:/etc/swift/swift.conf /etc/swift/
</screen>
</listitem>
<listitem>
<para>
Export the local network IP addresses for use by commands
later in this document:
</para>
<screen>
export STORAGE_LOCAL_NET_IP=10.1.2.3
export PROXY_LOCAL_NET_IP=10.1.2.4
</screen>
</listitem>
</orderedlist>
<note><para>The random string of text in /etc/swift/swift.conf is used as a salt when hashing to determine mappings in the ring.</para></note>
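<para>
As a quick sanity check (a suggested step, not part of the original
procedure), compare checksums of /etc/swift/swift.conf across nodes;
the output must be identical on every node:
</para>
<screen>
md5sum /etc/swift/swift.conf
</screen>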
</section>
<section xml:id="configure-the-proxy-node"><info><title>Configure the Proxy node</title></info>
<note><para>It is assumed that all commands are run as the root user.</para></note>
<orderedlist>
<listitem>
<para>
Install swift-proxy service:
</para>
<screen>
apt-get install swift-proxy memcached
</screen>
</listitem>
<listitem>
<para>
Create self-signed cert for SSL:
</para>
<screen>
cd /etc/swift
openssl req -new -x509 -nodes -out cert.crt -keyout cert.key
</screen>
</listitem>
</orderedlist>
<note><para>If you don't create the cert files, Swift silently uses HTTP internally rather than HTTPS. This document assumes that you have created these certs, so if you're following along step-by-step, create them.</para></note>
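<para>
To confirm the certificate was created correctly (an optional check,
not part of the original steps), inspect its subject and validity
dates:
</para>
<screen>
openssl x509 -in /etc/swift/cert.crt -noout -subject -dates
</screen>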
<orderedlist>
<listitem>
<para>
Modify memcached to listen on the local, non-public network
interface rather than only on the loopback address. Edit
the IP address in /etc/memcached.conf, for example:
</para>
<screen>
perl -pi -e "s/-l 127.0.0.1/-l $PROXY_LOCAL_NET_IP/" /etc/memcached.conf
</screen>
</listitem>
<listitem>
<para>
Restart the memcached server:
</para>
<screen>
service memcached restart
</screen>
</listitem>
<listitem>
<para>
Create /etc/swift/proxy-server.conf:
</para>
<screen>
cat &gt;/etc/swift/proxy-server.conf &lt;&lt;EOF
[DEFAULT]
cert_file = /etc/swift/cert.crt
key_file = /etc/swift/cert.key
bind_port = 8080
workers = 8
user = swift
[pipeline:main]
pipeline = healthcheck cache tempauth proxy-server
[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true
[filter:tempauth]
use = egg:swift#tempauth
user_system_root = testpass .admin https://$PROXY_LOCAL_NET_IP:8080/v1/AUTH_system
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:cache]
use = egg:swift#memcache
memcache_servers = $PROXY_LOCAL_NET_IP:11211
EOF
</screen>
</listitem>
</orderedlist>
<note><title>Note</title><para>
If you run multiple memcache servers, put the multiple IP:port listings
in the [filter:cache] section of the <filename>proxy-server.conf</filename> file, like:
<literal>10.1.2.3:11211,10.1.2.4:11211</literal>. Only the proxy server uses memcache.
</para></note>
<note><title>Note</title><para>
The memcache_servers variable can also be set in a separate file:
<filename>/etc/swift/memcache.conf</filename>. If it is set in both places, the value
in <filename>proxy-server.conf</filename> will override the one in <filename>memcache.conf</filename>.
</para></note>
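<para>
For reference, a minimal <filename>/etc/swift/memcache.conf</filename>
would look like the following (an illustrative sketch; the section and
option names follow the Swift memcache middleware sample, with a
placeholder address):
</para>
<screen>
[memcache]
memcache_servers = $PROXY_LOCAL_NET_IP:11211
</screen>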
<orderedlist>
<listitem>
<para>
Create the account, container, and object rings. The builder
command creates a builder file with a few parameters. The
parameter with the value of 18 is the partition power: the
ring is divided into 2^18 partitions. Set this value based on
the total amount of storage you expect your entire ring to
use. The value of 3 is the number of replicas of each object,
and the last value is the minimum number of hours to restrict
moving a partition more than once.
</para>
<screen>
cd /etc/swift
swift-ring-builder account.builder create 18 3 1
swift-ring-builder container.builder create 18 3 1
swift-ring-builder object.builder create 18 3 1
</screen>
</listitem>
</orderedlist>
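<para>
To see what a partition power of 18 means in practice, here is a
rough, illustrative calculation (not part of the original document):
with 2^18 = 262,144 partitions, 3 replicas, and 5 single-device
storage nodes, each device initially holds roughly 157,000
partition-replicas:
</para>
<screen>
echo $(( 2**18 * 3 / 5 ))    # 157286 partition-replicas per device (bash arithmetic)
</screen>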
<orderedlist>
<listitem>
<para>
For every storage device in /srv/node on each node, add entries
to each ring:
</para>
<screen>
export ZONE= # set the zone number for that storage device
export STORAGE_LOCAL_NET_IP= # and the IP address
export WEIGHT=100 # relative weight (higher for bigger/faster disks)
export DEVICE=sdb1
swift-ring-builder account.builder add z$ZONE-$STORAGE_LOCAL_NET_IP:6002/$DEVICE $WEIGHT
swift-ring-builder container.builder add z$ZONE-$STORAGE_LOCAL_NET_IP:6001/$DEVICE $WEIGHT
swift-ring-builder object.builder add z$ZONE-$STORAGE_LOCAL_NET_IP:6000/$DEVICE $WEIGHT
</screen>
</listitem>
</orderedlist>
<note><para>Assuming there are 5 zones with 1 node per zone, ZONE should start at 1 and increment by one for each additional node.</para></note>
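<para>
For example, with five single-device nodes you could script the
additions as follows (a sketch using hypothetical storage IPs
10.1.2.11 through 10.1.2.15; substitute your own addresses):
</para>
<screen>
for ZONE in 1 2 3 4 5; do
  IP=10.1.2.1$ZONE
  swift-ring-builder account.builder add z$ZONE-$IP:6002/sdb1 100
  swift-ring-builder container.builder add z$ZONE-$IP:6001/sdb1 100
  swift-ring-builder object.builder add z$ZONE-$IP:6000/sdb1 100
done
</screen>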
<orderedlist>
<listitem>
<para>
Verify the ring contents for each ring:
</para>
<screen>
swift-ring-builder account.builder
swift-ring-builder container.builder
swift-ring-builder object.builder
</screen>
</listitem>
<listitem>
<para>
Rebalance the rings:
</para>
<screen>
swift-ring-builder account.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder rebalance
</screen>
</listitem>
</orderedlist>
<note><para>Rebalancing rings can take some time.</para></note>
<orderedlist>
<listitem>
<para>
Copy the account.ring.gz, container.ring.gz, and
object.ring.gz files to /etc/swift on each of the Proxy and
Storage nodes.
</para>
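<para>
For example, assuming hypothetical hostnames storage1 through
storage5 for the Storage nodes (substitute your own), you could copy
the files with:
</para>
<screen>
for host in storage1 storage2 storage3 storage4 storage5; do
  scp /etc/swift/*.ring.gz $host:/etc/swift/
done
</screen>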
</listitem>
<listitem>
<para>
Make sure all the config files are owned by the swift user:
</para>
<screen>
chown -R swift:swift /etc/swift
</screen>
</listitem>
<listitem>
<para>
Start Proxy services:
</para>
<screen>
swift-init proxy start
</screen>
</listitem>
</orderedlist>
</section>
<section xml:id="configure-the-storage-nodes"><info><title>Configure the Storage nodes</title></info>
<note><para>Swift should work on any modern filesystem that supports Extended Attributes (XATTRS). We currently recommend XFS as it demonstrated the best overall performance for the swift use case after considerable testing and benchmarking at Rackspace. It is also the only filesystem that has been thoroughly tested. These instructions assume that you are going to devote /dev/sdb1 to an XFS filesystem.</para></note>
<orderedlist>
<listitem>
<para>
Install Storage node packages:
</para>
<screen>
apt-get install swift-account swift-container swift-object xfsprogs
</screen>
</listitem>
<listitem>
<para>
For every device on the node, set up the XFS volume (/dev/sdb
is used as an example):
</para>
<screen>
fdisk /dev/sdb (set up a single partition)
mkfs.xfs -i size=1024 /dev/sdb1
echo "/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" &gt;&gt; /etc/fstab
mkdir -p /srv/node/sdb1
mount /srv/node/sdb1
chown -R swift:swift /srv/node
</screen>
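<para>
To confirm the volume is mounted with the expected options (an
optional check, not in the original steps):
</para>
<screen>
mount | grep /srv/node/sdb1
</screen>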
</listitem>
<listitem>
<para>
Create /etc/rsyncd.conf:
</para>
<screen>
cat &gt;/etc/rsyncd.conf &lt;&lt;EOF
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = $STORAGE_LOCAL_NET_IP
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
EOF
</screen>
</listitem>
<listitem>
<para>
Edit the RSYNC_ENABLE= line in /etc/default/rsync:
</para>
<screen>
perl -pi -e 's/RSYNC_ENABLE=false/RSYNC_ENABLE=true/' /etc/default/rsync
</screen>
</listitem>
<listitem>
<para>
Start rsync daemon:
</para>
<screen>
service rsync start
</screen>
</listitem>
</orderedlist>
<note><para>The rsync daemon requires no authentication, so it should be run on a local, private network.</para></note>
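<para>
To verify that the rsync daemon is exporting the three modules (an
optional check; listing modules this way is standard rsync behavior):
</para>
<screen>
rsync rsync://$STORAGE_LOCAL_NET_IP/
</screen>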
<orderedlist>
<listitem>
<para>
Create /etc/swift/account-server.conf:
</para>
<screen>
cat &gt;/etc/swift/account-server.conf &lt;&lt;EOF
[DEFAULT]
bind_ip = $STORAGE_LOCAL_NET_IP
workers = 2
[pipeline:main]
pipeline = account-server
[app:account-server]
use = egg:swift#account
[account-replicator]
[account-auditor]
[account-reaper]
EOF
</screen>
</listitem>
<listitem>
<para>
Create /etc/swift/container-server.conf:
</para>
<screen>
cat &gt;/etc/swift/container-server.conf &lt;&lt;EOF
[DEFAULT]
bind_ip = $STORAGE_LOCAL_NET_IP
workers = 2
[pipeline:main]
pipeline = container-server
[app:container-server]
use = egg:swift#container
[container-replicator]
[container-updater]
[container-auditor]
EOF
</screen>
</listitem>
<listitem>
<para>
Create /etc/swift/object-server.conf:
</para>
<screen>
cat &gt;/etc/swift/object-server.conf &lt;&lt;EOF
[DEFAULT]
bind_ip = $STORAGE_LOCAL_NET_IP
workers = 2
[pipeline:main]
pipeline = object-server
[app:object-server]
use = egg:swift#object
[object-replicator]
[object-updater]
[object-auditor]
EOF
</screen>
</listitem>
<listitem>
<para>
Start the storage services. If you use this command, it will
try to start every service for which a configuration file
exists, and throw a warning for any configuration files which
don't exist:
</para>
<screen>
swift-init all start
</screen>
<para>
Or, if you want to start them one at a time, run them as
below. Note that swift-init redirects each command's output to
/dev/null, so any output the server program generates on its
stdout or stderr is not shown. If you encounter any
difficulty, stop the server and run it by hand from the
command line. Any server may be started using
"swift-$SERVER-$SERVICE /etc/swift/$SERVER-config",
where $SERVER might be object, container, or account, and
$SERVICE might be server, replicator, updater, or auditor.
</para>
<screen>
swift-init object-server start
swift-init object-replicator start
swift-init object-updater start
swift-init object-auditor start
swift-init container-server start
swift-init container-replicator start
swift-init container-updater start
swift-init container-auditor start
swift-init account-server start
swift-init account-replicator start
swift-init account-auditor start
</screen>
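<para>
To confirm that the storage services are running, you can check the
process list (a generic check, not part of the original procedure):
</para>
<screen>
ps aux | grep swift- | grep -v grep
</screen>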
</listitem>
</orderedlist>
</section>
<section xml:id="create-swift-admin-account-and-test"><info><title>Create Swift admin account and test</title></info>
<para>
You run these commands from the Proxy node.
</para>
<orderedlist>
<listitem>
<para>
Get an X-Storage-Url and X-Auth-Token:
</para>
<screen>
curl -k -v -H 'X-Storage-User: system:root' -H 'X-Storage-Pass: testpass' https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0
</screen>
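<para>
A successful response includes headers similar to the following
(illustrative token value); note the X-Storage-Url and X-Auth-Token
values for the next steps:
</para>
<screen>
&lt; HTTP/1.1 200 OK
&lt; X-Storage-Url: https://$PROXY_LOCAL_NET_IP:8080/v1/AUTH_system
&lt; X-Auth-Token: AUTH_tk0123456789abcdef0123456789abcdef
</screen>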
</listitem>
<listitem>
<para>
Check that you can HEAD the account:
</para>
<screen>
curl -k -v -H 'X-Auth-Token: &lt;token-from-x-auth-token-above&gt;' &lt;url-from-x-storage-url-above&gt;
</screen>
</listitem>
<listitem>
<para>
Check that <literal>swift</literal> works (at this point,
expect zero containers, zero objects, and zero bytes):
</para>
<screen>
swift -A https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0 -U system:root -K testpass stat
</screen>
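<para>
The output should look similar to the following (illustrative
formatting; the account name comes from the tempauth AUTH_system
account configured earlier):
</para>
<screen>
   Account: AUTH_system
Containers: 0
   Objects: 0
     Bytes: 0
</screen>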
</listitem>
<listitem>
<para>
Use <literal>swift</literal> to upload a few files named
'bigfile[1-2].tgz' to a container named 'myfiles':
</para>
<screen>
swift -A https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0 -U system:root -K testpass upload myfiles bigfile1.tgz
swift -A https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0 -U system:root -K testpass upload myfiles bigfile2.tgz
</screen>
</listitem>
<listitem>
<para>
Use <literal>swift</literal> to download all files from the
'myfiles' container:
</para>
<screen>
swift -A https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0 -U system:root -K testpass download myfiles
</screen>
</listitem>
<listitem>
<para>
Use <literal>swift</literal> to save a backup of your builder
files to a container named 'builders'. It is very important
not to lose your builder files:
</para>
<screen>
swift -A https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0 -U system:root -K testpass upload builders /etc/swift/*.builder
</screen>
</listitem>
<listitem>
<para>
Use <literal>swift</literal> to list your containers:
</para>
<screen>
swift -A https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0 -U system:root -K testpass list
</screen>
</listitem>
<listitem>
<para>
Use <literal>swift</literal> to list the contents of your
'builders' container:
</para>
<screen>
swift -A https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0 -U system:root -K testpass list builders
</screen>
</listitem>
<listitem>
<para>
Use <literal>swift</literal> to download all files from the
'builders' container:
</para>
<screen>
swift -A https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0 -U system:root -K testpass download builders
</screen>
</listitem>
</orderedlist>
</section>
<section xml:id="adding-a-proxy-server"><info><title>Adding a Proxy Server</title></info>
<para>
For reliability's sake you may want to have more than one proxy
server. You can set up the additional proxy node in the same
manner that you set up the first proxy node but with additional
configuration steps.
</para>
<para>
Once you have more than one proxy, you also want to load
balance between them, which means your storage endpoint also
changes. You can select from different strategies for load
balancing. For example, you could use round-robin DNS, or an
actual load balancer (like pound) in front of the proxies, and
point your storage URL to the load balancer.
</para>
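<para>
For example, round-robin DNS can be as simple as publishing one A
record per proxy (an illustrative BIND-style zone snippet with
hypothetical names and addresses):
</para>
<screen>
swift.example.com. 60 IN A 10.1.2.4
swift.example.com. 60 IN A 10.1.2.5
</screen>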
<para>
See <link linkend="configure-the-proxy-node">Configure the Proxy node</link> for the initial setup, and then follow
these additional steps.
</para>
<orderedlist>
<listitem>
<para>
Update the list of memcache servers in
/etc/swift/proxy-server.conf for all the added proxy servers.
If you run multiple memcache servers, use this pattern for the
multiple IP:port listings in each proxy server's conf file:
<literal>10.1.2.3:11211,10.1.2.4:11211</literal>
</para>
<screen>
[filter:cache]
use = egg:swift#memcache
memcache_servers = $PROXY_LOCAL_NET_IP:11211
</screen>
</listitem>
<listitem>
<para>
In /etc/swift/proxy-server.conf, change the storage URL for
any users to point to the load-balanced URL rather than to the
first proxy server you created:
</para>
<screen>
[filter:tempauth]
use = egg:swift#tempauth
user_system_root = testpass .admin http[s]://&lt;LOAD_BALANCER_HOSTNAME&gt;:&lt;PORT&gt;/v1/AUTH_system
</screen>
</listitem>
<listitem>
<para>
Next, copy all the ring information to all the nodes,
including your new proxy nodes, and ensure the ring info gets
to all the storage nodes as well.
</para>
</listitem>
<listitem>
<para>
After you sync all the nodes, make sure the admin has the keys
in /etc/swift and the ownership for the ring file is correct.
</para>
</listitem>
</orderedlist>
</section>
<section xml:id="troubleshooting-notes"><info><title>Troubleshooting Notes</title></info>
<para>
If you see problems, look in /var/log/syslog (or messages on
some distros).
</para>
<para>
Also, at Rackspace we have seen hints at drive failures by looking
at error messages in /var/log/kern.log.
</para>
<para>
There are more debugging hints and tips in the <link linkend="troubleshooting-openstack-object-storage">Troubleshooting</link> section.
</para>
</section>
</section>