The SAIO is purposely cut into two parts, so that you don't have to switch
back and forth between root and your unprivileged user. Add some "note" box
callouts to highlight this changeover.
Change-Id: I8b1a8f0539eac60d4121bdd4dab01df75ecca207
This creates a pool for each memcache server so that connections will not
grow without bound. This also adds a proxy config
"max_memcache_connections" which can control how many connections are
available in the pool.
A side effect of the change is that the memcache calls that used noreply
had to be changed to instead wait for the result of the request.
Leaving them with noreply could cause a race condition (specifically in
account auto create), because one request would call `memcache.del(key)`
and then `memcache.get(key)` on a different pooled connection. If the
delete didn't complete fast enough, the get would return the old value
before it was deleted, and the caller would conclude that the account
had not been autocreated.
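As a rough sketch of the pooling idea (illustrative only, not the
MemcacheRing implementation itself; eventlet's Pool does the bounding):

    from eventlet.pools import Pool
    import socket

    class MemcacheConnPool(Pool):
        """One small, bounded pool of connections per memcache server."""
        def __init__(self, server, max_connections):
            Pool.__init__(self, max_size=max_connections)
            self.server = server   # e.g. '127.0.0.1:11211'

        def create(self):
            # Called by the pool when it needs a brand-new connection.
            host, port = self.server.split(':')
            return socket.create_connection((host, int(port)))

    # Usage: take a connection, talk to memcache, always put it back, so
    # the number of sockets per server never grows without bound.
    pool = MemcacheConnPool('127.0.0.1:11211', max_connections=2)
    conn = pool.get()
    try:
        pass  # send the memcache command and read the full reply here
    finally:
        pool.put(conn)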
ClaysMindExploded
DocImpact
Change-Id: I350720b7bba29e1453894d3d4105ac1ea232595b
If you don't, then newer versions of xattr won't install, and since
our xattr requirement is simply ">= 0.4" in requirements.txt, this
affects anyone setting up a new SAIO.
This happened with xattr 0.7, which was released on 2013-07-19.
Change-Id: Iaf335fa25a2908953d1fd218158ebedf5d01cc27
Place all the methods related to on-disk layout and / or configuration
into a new common module that can be shared by the various modules
using the same on-disk layout.
Change-Id: I27ffd4665d5115ffdde649c48a4d18e12017e6a9
Signed-off-by: Peter Portante <peter.portante@redhat.com>
If handoffs_first is True, then the object replicator will give
priority to partitions that are not supposed to be on the node.
If handoff_delete is set to a number (n), then it will delete a handoff
partition if at least n replicas were successfully replicated.
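Sketched roughly, the decision looks like the function below (not the
replicator's actual code; the unset default is assumed to keep the old
"all replicas must succeed" behavior):

    def can_delete_handoff(successful_replications, replica_count,
                           handoff_delete=None):
        # handoff_delete unset: only remove the handoff partition once
        # every replica was pushed successfully (the old behavior).
        if handoff_delete is None:
            return successful_replications == replica_count
        # handoff_delete = n: remove it as soon as n replicas succeeded.
        return successful_replications >= handoff_delete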
Also fixed a couple of things in the object replicator unit tests and
added some more tests.
DocImpact
Change-Id: Icb9968953cf467be2a52046fb16f4b84eb5604e4
The main purpose of this patch is to lay the groundwork for allowing
the container and account servers to optionally use pluggable backend
implementations. The backend.py files will eventually be the module
where the backend APIs are defined via docstrings of this reference
implementation. The swift/common/db.py module will remain an internal
module used by the reference implementation.
We have a raft of changes to docstrings staged for later, but this
patch takes care to relocate ContainerBroker and AccountBroker into
their new home intact.
Change-Id: Ibab5c7605860ab768c8aa5a3161a705705689b04
These are headers that will be stripped unless the WSGI environment
contains a true value for 'swift_owner'. The exact definition of a
swift_owner is up to the auth system in use, but usually indicates
administrative responsibilities.
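A minimal sketch of the stripping logic, assuming the sensitive header
names come from configuration (the header names below are placeholders,
not the exact configured values):

    def filter_headers(headers, environ,
                       sensitive=('x-container-sync-key',
                                  'x-account-meta-temp-url-key')):
        # Owners (as decided by the auth system) see everything; everyone
        # else gets the sensitive headers removed from the response.
        if environ.get('swift_owner'):
            return dict(headers)
        return {k: v for k, v in headers.items()
                if k.lower() not in sensitive}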
DocImpact
Change-Id: I972772fbbd235414e00130ca663428e8750cabca
Making it possible to override the default set of regexes
used to search for device block errors in the log file. Also making
the log file naming pattern configurable; both are set in the
drive-audit.conf file.
Updating the "Detecting Failed Drives" section of the admin guide as well.
Change-Id: I7bd3acffed196da3e09db4c9dcbb48a20bdd1cf0
Change the default value of wsgi workers from 1 to auto. The new default
value for workers in the proxy, container, account & object wsgi servers
will spawn as many worker processes as the node has CPU cores.
This will not be ideal for some configurations, but it's much more likely to
produce a successful out-of-the-box deployment.
Inspect the number of CPU cores using python's multiprocessing when available.
Multiprocessing was added in python 2.6, but I know I've compiled python
without it before by accident. The cpu_count method seems to be pretty
system-agnostic, but it says it can raise NotImplementedError or sometimes
return 0.
Add a new utility method 'config_auto_int_value' to pull an integer out of the
config which has a dynamic default.
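Sketched roughly, the "auto" handling looks like this (the real helper
lives in swift.common.utils; this is just the idea):

    import multiprocessing

    def config_auto_int_value(value, default):
        # 'auto' (or an unset option) falls back to the dynamic default.
        if value is None or str(value).lower() == 'auto':
            return default
        return int(value)

    try:
        cpu_count = multiprocessing.cpu_count() or 1
    except NotImplementedError:
        cpu_count = 1   # cpu_count can refuse to answer; fall back to 1

    conf = {'workers': 'auto'}   # as parsed from e.g. proxy-server.conf
    workers = config_auto_int_value(conf.get('workers'), cpu_count)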
* drive-by s/container/proxy/ in proxy-server.conf.5
* fix misplaced max_clients in *-server.conf-sample
* update doc/development_saio to force workers = 1
DocImpact
Change-Id: Ifa563d22952c902ab8cbe1d339ba385413c54e95
This reverts commit 68cb91097b75a92237bd90caffcd405c3e83cb53
Just so this does not get forgotten in the tree...
We are using daemon mode and chunked is not supported in this mode.
In the past couple of years, the XFS team has greatly improved inode use in
xfs. With more recent kernels, there is no performance penalty for
using the default inode size, and a smaller inode size gives us
improvements in other areas where disk access is involved.
DocImpact
Change-Id: Ie9da53a6e8bf43d1d02881befbb52595462c9f2e
As reported in the documentation bug, the apache deployment guide's
reference to apache2 mod_wsgi not supporting client chunked encoding
has become outdated. It now supports this feature, using an optional
parameter.
Updated the paragraph in question to reflect this.
Patchset 2 mentions the WSGIChunkedRequest directive and adds it to the
sample configs, On by default. Feedback welcome.
fixes bug 1194935
Change-Id: I07c5c8506ac34e1e0e08fa6d961babde2f9b7367
Making this smaller (10 instead of 18) can make some of the tests run
faster and makes rebuilding of the rings faster.
Change-Id: Ibe46011d8e6a6482d39b3a20ac9c091d9fbc6ef7
The proxy can now be configured to prefer local object servers for PUT
requests, where "local" is governed by the "write_affinity" setting. The
"write_affinity_node_count" setting controls how many local object
servers to try before giving up and going on to remote ones.
I chose to simply re-order the object servers instead of filtering out
nonlocal ones so that, if all of the local ones are down, clients can
still get successful responses (just slower).
The goal is to trade availability for throughput. By writing to local
object servers across fast LAN links, clients get better throughput
than if the object servers were far away over slow WAN links. The
downside, of course, is that data availability (not durability) may
suffer when drives fail.
The default configuration has no write affinity in it, so the default
behavior is unchanged.
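Illustrated as a sketch (the real proxy code differs in detail; is_local
here stands in for whatever the write_affinity setting matches, e.g. a
region or region/zone):

    def order_nodes_for_put(nodes, is_local, write_affinity_node_count):
        # Try up to write_affinity_node_count local nodes first, but keep
        # the rest of the list so remote nodes still get a chance if the
        # local ones are down.
        local = [n for n in nodes if is_local(n)][:write_affinity_node_count]
        rest = [n for n in nodes if n not in local]
        return local + rest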
Added some words about these settings to the admin guide.
DocImpact
Change-Id: I09a0bd00524544ff627a3bccdcdc48f40720a86e
Lucid won't EOL until May of 2014, but I stopped trusting that ppa a long
time ago. Besides, with the requirements for dnspython and mock where they're
at, you almost can't install swift from source on any stock distro and expect
tests to pass with system packages - so we're looking at pypi for depends
regardless.
While I'm in there:
* more explanation of <your-user-name> and a helpful find/sed for configs
* group the "setup ~/.bashrc" stuff with the "setup ~/bin" stuff
* some updates/fixes from my experience installing on CentOS
* remove region warnings from remakerings
Change-Id: Ie2e6b06959ab699d853e07e5b7e8cda7036a44fe
Improving points:
1. Remove "yum install swift" on Fedora; use installing from source for
   both Ubuntu and Fedora.
2. Explain that you can use any user, including root or your own account,
   and the points a developer has to take care of.
Change-Id: Id6d683441bd790a21734624e29eb7c98bb40de85
Fixes: bug #1126389
Two types of parallelism are added:
- concurrency to speed up what a single process does
- a way to run multiple daemons to work on different parts of the work
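A hedged sketch of both levels (the names processes/process/concurrency
below are assumed for illustration, not necessarily the exact config keys):

    import hashlib
    import eventlet

    def handle(item):
        pass   # the actual per-item work goes here

    def assigned_to_me(item, processes, process):
        # A stable hash, so every daemon agrees on who owns which item.
        digest = hashlib.md5(str(item).encode('utf-8')).hexdigest()
        return int(digest, 16) % processes == process

    def run_pass(items, processes=1, process=0, concurrency=4):
        # Level 1: several daemons split the work between themselves.
        # Level 2: within one daemon, a pool of greenthreads works on its
        # share of the items concurrently instead of one at a time.
        pool = eventlet.GreenPool(concurrency)
        for item in items:
            if assigned_to_me(item, processes, process):
                pool.spawn_n(handle, item)
        pool.waitall()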
DocImpact
Change-Id: I48997f68eb2fd8de19a5ee8b9fcdf76dde2ba0ab
Without a (per-disk) threadpool, requests to a slow disk would affect
all clients by blocking the entire eventlet reactor on
read/write/etc. The slower the disk, the worse the performance. On an
object server, you frequently have at least one slow disk due to
auditing and replication activity sucking up all the available IO. By
kicking those blocking calls out to a separate OS thread, we let the
eventlet reactor make progress in other greenthreads, and by having a
per-disk pool, we ensure that one slow disk can't suck up all the
resources of an entire object server.
There were a few blocking calls that were done with eventlet.tpool,
but that's a fixed-size global threadpool, so I moved them to the
per-disk threadpools. If the object server is configured not to use
per-disk threadpools, (i.e. threads_per_disk = 0, which is the
default), those call sites will still ultimately end up using
eventlet.tpool.execute. You won't end up blocking a whole object
server while waiting for a huge fsync.
If you decide not to use threadpools, the only extra overhead should
be a few extra Python function calls here and there. This is
accomplished by setting threads_per_disk = 0 in the config.
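As an illustration only (this is not Swift's ThreadPool class, and it
ignores the eventlet plumbing), the per-disk idea boils down to:

    from concurrent.futures import ThreadPoolExecutor

    class PerDiskPools(object):
        def __init__(self, threads_per_disk=2):
            self.threads_per_disk = threads_per_disk
            self.pools = {}

        def run_in_thread(self, device, func, *args):
            if self.threads_per_disk <= 0:
                # threads_per_disk = 0: no per-disk pool; just call it
                # (the real code falls back to eventlet.tpool.execute).
                return func(*args)
            # One small pool per device, so a slow disk only ties up its
            # own few threads, not the whole object server.
            pool = self.pools.setdefault(
                device,
                ThreadPoolExecutor(max_workers=self.threads_per_disk))
            return pool.submit(func, *args).result()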
blueprint concurrent-disk-io
Change-Id: I490f8753d926fdcee3a0c65c5aaf715bc2b7c290
- Add proxy-logging to multinode. We had it since Folsom and people
still forget it, resulting in missing logs.
- Use the correct name, so it's at least easy to hit with '*' in vi.
Admittedly trivial changes, which I meant to hold until Leah's major
doc improvement lands, but I'm tired of keeping stuff like this in
my working repo.
Change-Id: I44f80c51d6d7329a9b696e67fcb8a895db63e497
DocImpact
If the account reaper has not managed to clean out an account after a long
period, it prints a message to the log (you can search your system for
such messages). Introduce the reap_warn_after config variable to determine
when to emit the message (defaults to 30 days).
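The check itself is simple; as a sketch (names illustrative only):

    import time

    def should_warn(delete_timestamp, reap_warn_after=30 * 86400):
        # Warn once an account has been pending deletion for longer than
        # reap_warn_after seconds (default: 30 days).
        return time.time() - float(delete_timestamp) > reap_warn_after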
Also fix bug 1181995 (edge case where object name is an empty string)
Change-Id: Ic0dfee04742d06b6a51b59f302d7a272d7c1de92
The crossdomain doc was named *.xml instead of *.rst, causing it not to
get built or included in the toctree where it was supposed to be.
The apache deployment guide wasn't linked to from anywhere, so I added
it under the normal deployment guide.
Change-Id: I817a1f2ca1ed7913e8ea5155cc1fac07caf0b637
Allow Swift daemons and servers to optionally accept a directory as the
configuration parameter. Directory based configuration leverages
ConfigParser's native multi-file support. Files ending in '.conf' in the
given directory are parsed in lexicographical order. Filenames starting with
'.' are ignored. A mixture of file and directory configuration paths is not
supported - if the configuration path is a file, behavior is unchanged.
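In rough outline (the helper name below is made up for illustration), the
directory handling amounts to:

    import os
    try:
        from configparser import ConfigParser          # Python 3
    except ImportError:
        from ConfigParser import ConfigParser          # Python 2

    def read_conf_dir(conf_dir):
        # Collect *.conf files (ignoring dotfiles), sorted so that later
        # files override values set by earlier ones.
        conf_files = sorted(
            os.path.join(conf_dir, f) for f in os.listdir(conf_dir)
            if f.endswith('.conf') and not f.startswith('.'))
        parser = ConfigParser()
        parser.read(conf_files)   # ConfigParser merges them in order
        return parser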
* update swift-init to search for conf.d paths when building servers
(e.g. /etc/swift/proxy-server.conf.d/)
* new script swift-config can be used to inspect the cumulative configuration
* pull a little bit of code out of run_wsgi and test separately
* fix example config bug for the proxy servers client_disconnect option
* added section on directory based configuration to deployment guide
DocImpact
Implements: blueprint confd
Change-Id: I89b0f48e538117f28590cf6698401f74ef58003b
The new max_clients parameter gives full control over the maximum
number of client requests that will be handled by a given worker for
any of the proxy, account, container or object servers.
Lowering the number of clients handled per worker, and raising the
number of workers can lessen the impact that a CPU intensive, or
blocking, request can have on other requests served by the same
worker.
If the maximum number of clients is set to one, then a given worker
will not perform another accept(2) call while processing a request,
allowing other workers a chance to pick it up.
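A hedged sketch of what max_clients maps to inside a single worker
(simplified; not the actual swift.common.wsgi code):

    import eventlet
    from eventlet import wsgi

    def run_worker(listen_socket, application, max_clients=1024):
        # The pool size caps how many requests this worker serves at once.
        # With max_clients = 1 the worker finishes its current request
        # before accepting another, leaving new connections to its siblings.
        pool = eventlet.GreenPool(size=max_clients)
        wsgi.server(listen_socket, application, custom_pool=pool)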
DocImpact
Signed-off-by: Peter Portante <peter.portante@redhat.com>
Change-Id: Ic01430f7a6c5ff48d7aa349dc86a5f8ac463a420
Allows client-side technologies such as Flash, Java and Silverlight running
on web pages served elsewhere to interact with the Swift API.
Bug #1159960
Change-Id: I7d0533a0aaf189ac452abbd983469acb064fdca4