Documentation fixups
These are mostly cosmetic fixes for irritating imperfections:
 - "separated with commas" was duplicated, leave just one
 - extra whitespace here and there, man pages are not PEP8, drop
 - weird extra commas, drop
 - Fedora logs to /var/log/messages
 - "drive is has failed", drop "is"

Change-Id: I5ceba2e61b16db4855d76c92cbc83663b9b2a0da
parent e88ff34685
commit 93ea7c63b1
@@ -192,7 +192,10 @@ Logging address. The default is /dev/log.
.IP "\fBset log_headers\fR "
Enables the ability to log request headers. The default is False.
.IP \fBmemcache_servers\fR
If not set in the configuration file, the value for memcache_servers will be read from /etc/swift/memcache.conf (see memcache.conf-sample) or lacking that file, it will default to the value below. You can specify multiple servers separated with commas, as in: 10.1.2.3:11211,10.1.2.4:11211. This can be a list separated by commas. The default is 127.0.0.1:11211.
If not set in the configuration file, the value for memcache_servers will be
read from /etc/swift/memcache.conf (see memcache.conf-sample) or lacking that
file, it will default to 127.0.0.1:11211. You can specify multiple servers
separated with commas, as in: 10.1.2.3:11211,10.1.2.4:11211.
.IP \fBmemcache_serialization_support\fR
This sets how memcache values are serialized and deserialized:
.RE
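As a concrete reference for the memcache_servers text above, a minimal cache filter section in proxy-server.conf might look like this (the server addresses are the illustrative ones from the paragraph, not defaults):

    [filter:cache]
    use = egg:swift#memcache
    # Comma-separated list of memcached endpoints; if omitted, Swift falls back
    # to /etc/swift/memcache.conf or, lacking that file, to 127.0.0.1:11211.
    memcache_servers = 10.1.2.3:11211,10.1.2.4:11211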
@@ -533,8 +536,6 @@ per second. The default is 1.
.RE
.PD
.SH DOCUMENTATION
.LP
More in depth documentation about the swift-proxy-server and
@@ -543,8 +544,5 @@ also Openstack-Swift as a whole can be found at
and
.BI http://swift.openstack.org
.SH "SEE ALSO"
.BR swift-proxy-server(1),
.BR swift-proxy-server(1)
@@ -167,7 +167,6 @@ Setting up rsync
read only = false
lock file = /var/lock/container6041.lock
[object6010]
max connections = 25
path = /srv/1/node/
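For orientation, this hunk spans the tail of one rsync module and the head of the next; assembled, a full object module in the SAIO /etc/rsyncd.conf would read roughly as follows (the read only and lock file lines mirror the container6041 module above and are assumptions here):

    [object6010]
    max connections = 25
    path = /srv/1/node/
    read only = false
    lock file = /var/lock/object6010.lock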
@@ -750,8 +749,9 @@ Debugging Issues
If all doesn't go as planned, and tests fail, or you can't auth, or something doesn't work, here are some good starting places to look for issues:
#. Everything is logged in /var/log/syslog, so that is a good first place to
look for errors (most likely python tracebacks).
#. Everything is logged using system facilities -- usually in /var/log/syslog,
but possibly in /var/log/messages on e.g. Fedora -- so that is a good first
place to look for errors (most likely python tracebacks).
#. Make sure all of the server processes are running. For the base
functionality, the Proxy, Account, Container, and Object servers
should be running.
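On the first item: the "system facilities" are chosen per server in its configuration file. A minimal sketch, with option names as in Swift's sample configs and the facility value purely illustrative:

    [DEFAULT]
    # Swift logs through syslog using the facility below; your syslog daemon's
    # configuration decides the destination file (commonly /var/log/syslog,
    # or /var/log/messages on e.g. Fedora).
    log_facility = LOG_LOCAL1
    log_level = DEBUG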
@@ -8,7 +8,7 @@ Replication uses a push model, with records and files generally only being copie
Every deleted record or file in the system is marked by a tombstone, so that deletions can be replicated alongside creations. These tombstones are cleaned up by the replication process after a period of time referred to as the consistency window, which is related to replication duration and how long transient failures can remove a node from the cluster. Tombstone cleanup must be tied to replication to reach replica convergence.
If a replicator detects that a remote drive is has failed, it will use the ring's "get_more_nodes" interface to choose an alternate node to synchronize with. The replicator can maintain desired levels of replication in the face of disk failures, though some replicas may not be in an immediately usable location. Note that the replicator doesn't maintain desired levels of replication in the case of other failures (e.g. entire node failures) because the most of such failures are transient.
If a replicator detects that a remote drive has failed, it will use the ring's "get_more_nodes" interface to choose an alternate node to synchronize with. The replicator can maintain desired levels of replication in the face of disk failures, though some replicas may not be in an immediately usable location. Note that the replicator doesn't maintain desired levels of replication in the case of other failures (e.g. entire node failures) because the most of such failures are transient.
Replication is an area of active development, and likely rife with potential improvements to speed and correctness.
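To tie the consistency window above to something tunable: tombstone cleanup is governed by the replicator's reclaim_age setting. A sketch for the object server, with section and option names as in Swift's sample object-server.conf and values taken from its documented defaults (best confirmed against your version):

    [object-replicator]
    # Tombstones older than reclaim_age (in seconds) are removed; keep this
    # well above the longest outage after which a node may rejoin the cluster.
    reclaim_age = 604800
    # Number of replication workers to spawn.
    concurrency = 1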