Add overview and example information for using Storage Policies.
DocImpact
Implements: blueprint storage-policies
Change-Id: I6f11f7a1bdaa6f3defb3baa56a820050e5f727f1
For this commit, ssync is just a direct replacement for how
we use rsync. Assuming we switch over to ssync completely
someday and drop rsync, we will then be able to improve the
algorithms even further (removing local objects as we
successfully transfer each one rather than waiting for whole
partitions, using an index.db with hash-trees, etc., etc.).
For easier review, this commit can be thought of in distinct
parts:
1) New global_conf_callback functionality that lets
services run setup code before workers, etc. are
launched. (Ssync in the object server uses this to
create a cross-worker semaphore that restricts
concurrent incoming replication; see the sketch after
this list.)
2) A bit of shifting of items up from the object server and
replicator to the diskfile or DEFAULT conf sections for
better sharing of the same settings: conn_timeout,
node_timeout, client_timeout, network_chunk_size, and
disk_chunk_size.
3) Modifications to the object server and replicator to
optionally use ssync in place of rsync. This is done in
a generic enough way that switching to FutureSync should
be easy someday.
4) The biggest, and (at least for now) completely
optional, part is the new ssync_sender and
ssync_receiver files. These are nice and isolated for
easier testing and visibility into test coverage, etc.
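As an illustration of part 1, here is a minimal sketch of how a
global_conf_callback hook can hand a cross-worker semaphore to the
object server. The option name replication_concurrency and the
global_conf key replication_semaphore match what I believe the object
server uses, but treat the whole block as illustrative rather than the
committed code.

    # Minimal sketch (not the exact commit code) of a global_conf_callback
    # hook: the WSGI runner calls it once, before any workers are forked,
    # so objects created here (such as a multiprocessing semaphore) are
    # inherited by every worker process.
    import multiprocessing


    def global_conf_callback(preloaded_app_conf, global_conf):
        """Mutate global_conf before workers launch to share state."""
        replication_concurrency = int(
            preloaded_app_conf.get('replication_concurrency') or 4)
        if replication_concurrency > 0:
            # Put in a list so the object can ride through the paste.deploy
            # config plumbing unchanged.
            global_conf['replication_semaphore'] = [
                multiprocessing.BoundedSemaphore(replication_concurrency)]

The object server can then pull the semaphore back out of its conf and
have the ssync receiver acquire it to cap concurrent incoming
replication across all workers.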
All the usual logging, statsd, recon, etc. instrumentation
is still there when using ssync, just as it is when using
rsync.
Beyond the essential error and exceptional condition
logging, I have not added any additional instrumentation at
this time. Unless there is something someone finds super
pressing to have added to the logging, I think such
additions would be better as separate change reviews.
FOR NOW, IT IS NOT RECOMMENDED TO USE SSYNC ON PRODUCTION
CLUSTERS. Some of us will be using it in a limited fashion
to look for any subtle issues, tuning, etc., but generally
ssync is an experimental feature. In its current
implementation it is probably going to be a bit slower than
rsync, but if all goes according to plan it will end up
much faster.
There are no comparisons yet between ssync and rsync other
than some raw virtual machine testing I've done to show it
should compete well enough once we can put it to use in the
real world.
If you Tweet, Google+, or whatever, be sure to indicate it's
experimental. It'd be best to keep it out of deployment
guides, howtos, etc. until we all figure out if we like it,
find it to be stable, etc.
Change-Id: If003dcc6f4109e2d2a42f4873a0779110fff16d6
Support separate replication IP address:
- Added a new function in utils. This function provides the
ability to select a separate IP address for the replication
service (a sketch of the idea follows this list).
- The db_replicator and object replicators were changed; the
replication process now uses the new function.
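The helper below is a hypothetical sketch of that idea, not the actual
function added to utils: replication traffic uses the dedicated
address when one is configured and falls back to the normal ip/port
otherwise. The name get_replication_address is invented for the
example.

    # Hypothetical helper (the real utils function may differ): pick the
    # address a replicator should talk to for a given ring device, falling
    # back to the client-facing ip/port when no dedicated replication
    # network is configured.
    def get_replication_address(node):
        """Return the (ip, port) pair to use for replication traffic."""
        ip = node.get('replication_ip') or node['ip']
        port = node.get('replication_port') or node['port']
        return ip, port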
Replication network parameters:
- Support for the replication network fields (replication_ip,
replication_port) was added to the device dictionary in the
swift-ring-builder script (an illustrative device entry is shown
below).
- Changes were made to support the new fields in the search, show
and set_info functions.
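For illustration only, a device entry in the ring builder then carries
the two extra keys alongside the existing ones (all values are made
up); a dict like this is what the helper sketched above would be
handed.

    # Illustrative device dictionary once the replication network fields
    # are present; values are invented for the example.
    device = {
        'id': 0,
        'zone': 1,
        'ip': '10.0.0.1',              # client-facing address
        'port': 6000,
        'replication_ip': '10.1.0.1',  # dedicated replication address
        'replication_port': 6100,
        'device': 'sda1',
        'weight': 100.0,
        'meta': '',
    }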
Implementation of replication servers:
- Separate replication servers use the same code as normal replication
servers, but with the replication_server parameter set to True. When
using a separate replication network, the non-replication servers set
replication_server = False. When there is no separate replication
network (the default case), replication_server is not included in the
config at all (see the sketch after this list).
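A simplified sketch of the gating this implies, assuming the server
decides per request method; it is not the actual Swift dispatch code,
and the truthy parsing is deliberately naive.

    # Simplified sketch (not the actual Swift code) of how a server can
    # gate requests on the replication_server setting described above.
    REPLICATION_VERBS = {'REPLICATE'}  # verbs only replication servers serve


    def method_allowed(conf, method):
        raw = conf.get('replication_server')
        if raw is None:
            # No separate replication network: one server handles everything.
            return True
        is_replication_server = raw.lower() in ('true', '1', 'yes', 'on')
        if method in REPLICATION_VERBS:
            return is_replication_server
        return not is_replication_server


    print(method_allowed({'replication_server': 'true'}, 'REPLICATE'))   # True
    print(method_allowed({'replication_server': 'false'}, 'REPLICATE'))  # False
    print(method_allowed({}, 'GET'))                                     # True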
DocImpact
Change-Id: Ie9af5bdcdf9241c355e36053ca4adfe49dc35bd0
Implements: blueprint dedicated-replication-network
These are mostly cosmetic fixes for irritating imperfections:
- "separated with commas" was duplicated, leave just one
- extra whitespace here and there, man pages are not PEP8, drop
- weird extra commas, drop
- Fedora logs to /var/log/messages
- "drive is has failed", drop "is"
Change-Id: I5ceba2e61b16db4855d76c92cbc83663b9b2a0da
A replicator creates an extra replica when it detects a remote disk
failure. However, when it fails to create a replica for other
reasons (e.g. an entire node failure), it doesn't create another
replica at all. We should explain this explicitly so that users know
what to expect (a simplified sketch of the behavior follows below).
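To make the distinction concrete, here is a simplified sketch of the
behavior being documented, not the real replicator code: a remote
drive that reports itself as failed earns one extra attempt, so a
handoff node ends up with an extra replica, while an unreachable node
earns nothing extra. The probe callable and its return values are
invented for the example.

    # Simplified illustration, not the actual replicator: probe(node)
    # returns 'ok', 'disk_failed' (remote drive reports failure, e.g. a
    # 507 response), or 'node_down' (no response at all).
    import itertools


    def push_partition(primaries, handoffs, probe):
        nodes = itertools.chain(primaries, handoffs)
        attempts_left = len(primaries)
        synced = []
        while attempts_left > 0:
            try:
                node = next(nodes)
            except StopIteration:
                break
            attempts_left -= 1
            status = probe(node)
            if status == 'disk_failed':
                attempts_left += 1   # one extra try: a handoff gets the data
                continue
            if status == 'node_down':
                continue             # no extra attempt, so no extra replica
            synced.append(node)
        return synced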
This fixes bug 906976.
Change-Id: I2f56428ccbbb0cf0d8538ca6e08f7da71257e661