Troubleshoot Object Storage
For Object Storage, everything is logged in /var/log/syslog (or messages on some distros).
Several settings in the object server configuration files, such as log_name, log_facility,
and log_level, enable further customization of logging.
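For example, these options live in the [DEFAULT] section of object-server.conf; the values
shown below are the defaults from the sample configuration shipped with Swift:

[DEFAULT]
log_name = swift
log_facility = LOG_LOCAL0
log_level = INFO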
Drive failure
In the event that a drive has failed, the first step is to make sure the drive is
unmounted. This will make it easier for Object Storage to work around the failure until
it has been resolved. If the drive is going to be replaced immediately, then it is
best to replace the drive, format it, remount it, and let replication fill it up.
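For example, assuming the failed drive is /dev/sdb1 mounted at /srv/node/sdb1 with a
matching /etc/fstab entry (all names illustrative), and using XFS as is typical for Swift:

# umount /srv/node/sdb1

After the drive has been physically replaced:

# mkfs.xfs /dev/sdb1
# mount /srv/node/sdb1
# chown swift:swift /srv/node/sdb1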
If the drive can't be replaced immediately, then it is best to leave it
unmounted, and remove the drive from the ring. This will allow all the replicas
that were on that drive to be replicated elsewhere until the drive is replaced.
Once the drive is replaced, it can be re-added to the ring.
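As a sketch, assuming the drive is sdb1 on storage node 192.168.1.10, zone 1, port 6000,
weight 100 (all illustrative values), and repeating for the account and container builders
if the drive held those rings' data as well:

# swift-ring-builder object.builder remove 192.168.1.10/sdb1
# swift-ring-builder object.builder rebalance

Once the drive has been replaced and reformatted:

# swift-ring-builder object.builder add z1-192.168.1.10:6000/sdb1 100
# swift-ring-builder object.builder rebalance

Remember to copy the updated object.ring.gz to every node after each rebalance. Newer
Swift releases also take a region in the add syntax (r1z1-...), so check the
swift-ring-builder help for your version.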
You can look at error messages in /var/log/kern.log for hints of drive failure.
Server failure
If a server is having hardware issues, it is a good idea to make sure the
Object Storage services are not running. This will allow Object Storage to
work around the failure while you troubleshoot.
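For example, swift-init can stop every Swift service on the affected node in one step
(and start them again later):

# swift-init all stop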
If the server just needs a reboot, or a small amount of work that should only
last a couple of hours, then it is probably best to let Object Storage work
around the failure while you get the machine fixed and back online. When the
machine comes back online, replication makes sure that anything missed during
the downtime gets updated.
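If the recon middleware and its cron jobs are enabled on your storage nodes, you can
watch replication catch up; for example, the following reports recent object replication
times across the cluster:

# swift-recon object -r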
If the server has more serious issues, then it is probably best to remove all
of the server's devices from the ring. Once the server has been repaired and is
back online, the server's devices can be added back into the ring. It is
important that the devices are reformatted before putting them back into the
ring, because each device is likely to be responsible for a different set of
partitions than before.
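A minimal sketch, assuming the failed server is 192.168.1.10 (illustrative); a bare IP
address as the search value matches every device on that node, and the same removal
should be repeated for the account and container builders:

# swift-ring-builder object.builder remove 192.168.1.10
# swift-ring-builder object.builder rebalance

Copy the updated ring files to all nodes. When the repaired server returns, add its
reformatted devices back with swift-ring-builder add and rebalance again.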
Detect failed drives
It has been our experience that when a drive is about to fail, error messages will spew into
/var/log/kern.log. There is a script called swift-drive-audit that can be run via cron
to watch for bad drives. If errors are detected, it will unmount the bad drive so that
Object Storage can work around it. The script takes a configuration file.
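A minimal example, assuming the option names from the drive-audit.conf sample that ships
with Swift (check the sample file in your version for the authoritative list and defaults):

[drive-audit]
device_dir = /srv/node
log_facility = LOG_LOCAL0
log_level = INFO
minutes = 60
error_limit = 1

With minutes = 60 and error_limit = 1, a single drive error logged within the last hour is
enough for the script to unmount that device. A root crontab entry along these lines
(paths illustrative) runs the check hourly:

0 * * * * /usr/bin/swift-drive-audit /etc/swift/drive-audit.conf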
This script has only been tested on Ubuntu 10.04, so if you are using a
different distro or OS, some care should be taken before using it in production.
Emergency recovery of ring builder files
You should always keep a backup of swift ring builder files. However, if an
emergency occurs, this procedure may assist in returning your cluster to an
operational state.
Using existing swift tools, there is no way to recover a builder file from a
ring.gz file. However, if you have some knowledge of Python, it is possible to
construct a builder file that is pretty close to the one you have lost. The
following is what you will need to do.
This procedure is a last resort for emergency circumstances. It
requires knowledge of the swift Python code and may not succeed.
First, load the ring and a new ringbuilder object in a Python REPL:
>>> from swift.common.ring import RingData, RingBuilder
>>> ring = RingData.load('/path/to/account.ring.gz')
Now, start copying the data we have in the ring into the builder.
>>> import math
>>> partitions = len(ring._replica2part2dev_id[0])
>>> replicas = len(ring._replica2part2dev_id)
>>> builder = RingBuilder(int(math.log(partitions, 2)), replicas, 1)
>>> builder.devs = ring.devs
>>> builder._replica2part2dev = ring._replica2part2dev_id
>>> builder._last_part_moves_epoch = 0
>>> from array import array
>>> builder._last_part_moves = array('B', (0 for _ in range(partitions)))
>>> builder._set_parts_wanted()
>>> for d in builder._iter_devs():
...     d['parts'] = 0
...
>>> for p2d in builder._replica2part2dev:
...     for dev_id in p2d:
...         builder.devs[dev_id]['parts'] += 1
This is the extent of the recoverable fields. For
min_part_hours you'll either have to remember what value
you used, or just make up a new one.
>>> builder.change_min_part_hours(24) # or whatever you want it to be
Next, validate the builder. If this raises an exception, check
your previous code. When it validates, you're ready to save the
builder and create a new account.builder.
>>> builder.validate()
Save the builder.
>>> import pickle
>>> pickle.dump(builder.to_dict(), open('account.builder', 'wb'), protocol=2)
>>> exit()
You should now have a file called 'account.builder' in the current working
directory. Next, run swift-ring-builder account.builder write_ring
and compare the new account.ring.gz to the account.ring.gz that you started
from. They probably won't be byte-for-byte identical, but if you load them up
in a REPL and their _replica2part2dev_id and
devs attributes are the same (or nearly so), then you're
in good shape.
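For example, assuming you kept a copy of the original ring as account.ring.gz.orig
(a hypothetical name) before overwriting it:

>>> from swift.common.ring import RingData
>>> orig = RingData.load('account.ring.gz.orig')
>>> new = RingData.load('account.ring.gz')
>>> orig.devs == new.devs
>>> orig._replica2part2dev_id == new._replica2part2dev_id

Fields the recovery could not restore may cause small differences even when the
builder is usable.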
Next, repeat the procedure for container.ring.gz
and object.ring.gz, and you might get usable builder files.