setgid sets the primary group; setgroups sets the secondary groups.
Prior to this patch, we would call setgroups with an empty list,
effectively wiping the secondary groups. We now look up which secondary
groups the user is a member of and set the group list accordingly.
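A minimal sketch of the idea (illustrative only, not the exact code in
swift.common.utils): look up the user's secondary groups and pass them
to setgroups instead of an empty list.

    import grp
    import os
    import pwd

    def drop_privileges(user):
        pw = pwd.getpwnam(user)
        # secondary groups the user belongs to
        groups = [g.gr_gid for g in grp.getgrall() if user in g.gr_mem]
        groups.append(pw.pw_gid)     # primary group id
        os.setgroups(groups)         # previously: os.setgroups([])
        os.setgid(pw.pw_gid)
        os.setuid(pw.pw_uid)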
Change-Id: I33a10edd448b3ac5aa758a8d1d70e582cf421c7d
Closes-Bug: 1269473
The reason for this is that including the origin in the get_info calls
causes an infinite loop. The way that code was written, it relies on
GETorHEAD_base to populate the data; the only problem is that the HEAD
call is wrapped by cors_validation, which calls get_info, and round and
round we go. IMO get_info should be refactored to not work this way
(relying on another call to do things behind the scenes so that your
data magically appears), because it seems pretty prone to breaking. But
I'll let somebody else do that :).
Fixes bug 1270039
Change-Id: Idad3cedd965e0d5fb068b062fe8fef301c87b75a
Unit tests in test/unit have only one dependency on swiftclient, in
test_direct_client.py. It can easily be avoided, and this patch
removes it.
Change-Id: Ic1c78bc7f7fe426e8f7d8209a783342a0c4f071f
These are the ones that affect what requests a client can make; the
others are just time and speed limits, so they weren't as interesting.
Change-Id: I21c19c950227f02725aafc309a3996fc6749a383
This way, with zero additional effort, SLO will support enhancements
to object storage and retrieval, such as:
* automatic resume of GETs on broken connection (today)
* storage policies (in the near future)
* erasure-coded object segments (in the far future)
This also lets SLOs work with other sorts of hypothetical third-party
middleware, for example object compression or encryption.
Getting COPY to work here is sort of a hack; the proxy's object
controller now checks for "swift.copy_response_hook" in the request's
environment and feeds the GET response (the source of the new object's
data) through it. This lets a COPY of an SLO manifest actually combine
the segments instead of merely copying the manifest document.
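A hedged sketch of how a middleware could plug into that hook; the
environ key is the one named above, but the class and the hook's
signature are assumptions made for illustration:

    class CopyHookMiddleware(object):
        def __init__(self, app):
            self.app = app

        def __call__(self, env, start_response):
            def copy_hook(source_resp):
                # transform the GET response whose body becomes the new
                # object's data, e.g. expand a manifest into its segments
                return source_resp

            env['swift.copy_response_hook'] = copy_hook
            return self.app(env, start_response)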
Updated ObjectController to expect a response's app_iter to be an
iterable, not just an iterator. (PEP 333 says "When called by the
server, the application object must return an iterable yielding zero
or more strings." ObjectController was just being too strict.) This
way, SLO can re-use the same response-generation logic for GET and
COPY requests.
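For illustration, both of the following are valid under PEP 333; the
second returns a plain list, which is an iterable but not an iterator:

    def app_returning_iterator(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return iter(['hello\n'])

    def app_returning_iterable(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['hello\n']  # a list: iterable, but not itself an iterator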
Added a (sort of hokey) mechanism that lets middlewares close
incompletely-consumed app iterators without triggering a warning. SLO
does this when it realizes it has performed a ranged GET on a manifest;
it closes the iterable, removes the Range header, and retries the
request. Without this change, the proxy logs would fill with 'Client
disconnected on read' messages.
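Roughly, the retry pattern looks like the sketch below; the helper
names are illustrative and the warning-suppression detail is elided:

    def close_if_possible(maybe_closable):
        close_method = getattr(maybe_closable, 'close', None)
        if callable(close_method):
            close_method()

    def refetch_whole_manifest(req, fetch):
        resp = fetch(req)                    # ranged GET hit a manifest
        close_if_possible(resp.app_iter)     # don't just abandon the iterator
        del req.headers['Range']             # we need the whole manifest
        return fetch(req)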
DocImpact
blueprint multi-ring-large-objects
Change-Id: Ic11662eb5c7176fbf422a6fc87a569928d6f85a1
The new utils function validate_hash_conf lets you programmatically
reload swift.conf and the hash path globals HASH_PATH_SUFFIX and
HASH_PATH_PREFIX when they are invalid.
When swift.common.utils is loaded before a swift.conf exists, there is
no good way to force a re-read of swift.conf and repopulate the hash
path config options, short of restarting the process or reloading the
module, both of which are hard to unit test. This should be no worse in
general and in some cases easier.
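Typical use is just a call once swift.conf becomes available; assume it
raises (exception type not shown here) if the configuration is still
invalid:

    from swift.common import utils

    # re-read swift.conf and repopulate HASH_PATH_SUFFIX / HASH_PATH_PREFIX
    utils.validate_hash_conf()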
Change-Id: I1ff22c5647f127f65589762b3026f82c9f9401c1
Summary of the new configuration option:
The cluster operators add the container_sync middleware to their
proxy pipeline, create a container-sync-realms.conf for their
cluster, and copy it out to all their proxy and container servers.
This file specifies the available container sync "realms".
A container sync realm is a group of clusters with a shared key that
have agreed to provide container syncing to one another.
The end user can then set the X-Container-Sync-To value on a
container to //realm/cluster/account/container instead of the
previously required URL.
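For illustration, such a file might look roughly like the following
(realm, cluster, and key names are made up):

    [realm1]
    key = realm1key
    key2 = realm1key2
    cluster_clustername1 = https://host1/v1/
    cluster_clustername2 = https://host2/v1/

An end user would then set, for example,
X-Container-Sync-To: //realm1/clustername2/AUTH_account/container.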
The allowed hosts list is not used with this configuration; instead,
every container sync request sent is signed using the realm key and
user key.
This offers better security, since source hosts can be faked much more
easily than per-request signatures. Replaying signed requests, assuming
it could easily be done, shouldn't be an issue, since the X-Timestamp is
part of the signature and a replay would just short-circuit as already
current or as superseded.
This also makes configuration easier for the end user, especially
with difficult networking situations where a different host might
need to be used for the container sync daemon since it's connecting
from within a cluster. With this new configuration option, the end
user just specifies the realm and cluster names and that is resolved
to the proper endpoint configured by the operator. If the operator
changes their configuration (key or endpoint), the end user does not
need to change theirs.
DocImpact
Change-Id: Ie1704990b66d0434e4991e26ed1da8b08cb05a37
All the other tests have license headers, so this one should too.
I picked 2013 for the copyright year because that's when "git log"
says it was first and last touched.
Change-Id: Idd41a179322a3383f6992e72d8ba3ecaabd05c47
We attempt to get the code coverage (with branch coverage) to 100%,
but fall short because of interactions between coverage.py and
CPython's peephole optimizer. See:
https://bitbucket.org/ned/coveragepy/issue/198/continue-marked-as-not-covered
In the main diskfile module, we remove the check for a valid
"self._tmppath" since it is only one of a number of fields that could
be verified and it was not worth trying to get coverage for it. We
also remove the try / except around the close() method call in the
DiskFileReader's app_iter_ranges() method since it will never be
called in a context that will raise a quarantine exception (by
definition ranges can't generate a quarantine event).
We also:
  * fix where quarantine messages are checked to ensure the
    generator is actually executed before the check
  * in new and modified tests:
    * use assertTrue in place of assert_
    * use assertEqual in place of assertEquals
    * fix references to the reserved word "object"
Change-Id: I6379be04adfc5012cb0b91748fb3ba3f11200b48
Move the tests from functionalnosetests under functional, so we no
longer have two separate trees for functional tests. This also drops
the 'nose' name from the directory, so that it doesn't cause confusion
if we move to testr. Further, since there are no longer two test runs
in .functests, it now looks very close to the other two.
Change-Id: I8de025c29d71f05072e257df24899927b82c1382
Middleware or core features may need to store metadata
against accounts or containers. This patch adds a
generic mechanism for system metadata to be persisted
in backend databases, without polluting the user
metadata namespace, by using the reserved header
namespace x-<server_type>-sysmeta-*.
Firstly, backend servers now persist system metadata headers
alongside user metadata and other system state.
For accounts and containers, system metadata in PUT
and POST requests is treated in a similar way to user
metadata. System metadata is not yet supported for
object requests.
Secondly, changes in the proxy controllers ensure that
headers in the system metadata namespace will pass through
in requests to backend servers.
Thirdly, system metadata returned from backend servers
in GET or HEAD responses is added to the cached info
dict, which middleware can access.
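A hedged sketch of how a middleware might read that info;
get_container_info is the real helper, but the 'sysmeta' key and the
header name used here are assumptions made for illustration:

    from swift.proxy.controllers.base import get_container_info

    class MyFeature(object):
        def __init__(self, app):
            self.app = app

        def __call__(self, env, start_response):
            # read back system metadata that was persisted earlier via a
            # header such as X-Container-Sysmeta-Myfeature-State
            info = get_container_info(env, self.app)
            env['myfeature.state'] = info.get('sysmeta', {}).get(
                'myfeature-state')
            return self.app(env, start_response)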
Finally, a gatekeeper middleware module is provided
which filters all system metadata headers from requests
and responses by removing headers with names starting
x-account-sysmeta-, x-container-sysmeta-. The gatekeeper
also removes headers starting x-object-sysmeta- in
anticipation of future support for system metadata being
set for objects. This prevents clients from writing or
reading system metadata.
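The filtering itself amounts to something like this sketch (not the
actual gatekeeper code):

    SYSMETA_PREFIXES = ('x-account-sysmeta-', 'x-container-sysmeta-',
                        'x-object-sysmeta-')

    def remove_sysmeta_headers(headers):
        # drop any header that falls within a system metadata namespace
        return dict((key, value) for key, value in headers.items()
                    if not key.lower().startswith(SYSMETA_PREFIXES))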
The required_filters list in swift/proxy/server.py is
modified to include the gatekeeper middleware so that
if the gatekeeper has not been configured in the
pipeline then it will be automatically inserted close
to the start of the pipeline.
blueprint cluster-federation
Change-Id: I80b8b14243cc59505f8c584920f8f527646b5f45
- swift-recon now handles parsing instances where the 'mounted' key (in
unmounted and disk_usage) is an error message instead of a bool (see
the sketch after this list).
- Adds check_mount exception handling to the recon unmounted endpoint.
- Updates an existing unittest to have ismount throw an error.
- Updates unittests to cover the corner cases.
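A hedged sketch of the parsing change described in the first item
(names are illustrative):

    def parse_mounted(entry):
        # 'mounted' is normally a bool, but may be an error message string
        # when the mount check itself failed
        mounted = entry.get('mounted')
        if isinstance(mounted, bool):
            return mounted, None
        return False, 'Error: %s' % mounted

So an entry like {'device': 'sdb1', 'mounted': 'Permission denied'} is
reported as an error instead of being treated as a boolean.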
Change-Id: Id51d14a8b98de69faaac84b2b34b7404b7df69e9
According to HTTP/1.1, servers MUST accept all three formats:
Sun, 06 Nov 1994 08:49:37 GMT # RFC 822, updated by RFC 1123
Sunday, 06-Nov-94 08:49:37 GMT # RFC 850, obsoleted by RFC 1036
Sun Nov 6 08:49:37 1994 # ANSI C's asctime() format
In the functional tests, a date header value in each of the three
formats is tested.
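For reference, the three representations of the same instant can be
produced like this (how the test suite actually sends them is not
shown):

    import time

    t = time.gmtime(784111777)  # Sun, 06 Nov 1994 08:49:37 GMT
    dates = [
        time.strftime('%a, %d %b %Y %H:%M:%S GMT', t),  # RFC 822 / 1123
        time.strftime('%A, %d-%b-%y %H:%M:%S GMT', t),  # RFC 850 / 1036
        time.asctime(t),                                 # ANSI C asctime()
    ]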
Change-Id: I679ed44576208f2a79bffce787cb55bda4b39705
Closes-Bug: #1253207
Previously, the Last-Modified header in the Response did not have a
suitable value: it held only the integer part of the object's
timestamp.
As a result, an If-[Un]Modified-Since header using the value from
Last-Modified is always earlier than the object's timestamp, so the
content always appears newer than the value of these conditional
headers.
The patched code returns math.ceil() of the object's timestamp in the
Last-Modified header, so a later conditional header works correctly.
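A quick numeric illustration (timestamp value made up):

    import math

    x_timestamp = 1389906751.73154
    int(x_timestamp)             # 1389906751 -> earlier than the object itself
    int(math.ceil(x_timestamp))  # 1389906752 -> not earlier, so an
                                 # If-Modified-Since built from it behaves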
Closes-Bug: #1248818
Change-Id: I1ece7d008551bf989da74d23f0ed6307c45c5436
If you create a container with a non-ASCII name, and then make another
container with X-Versions-Location: first-cøntåîner, *and* you're
serializing stuff in memcache as json (the default), when the proxy
tries to make a versioned object, it will crash.
The fix is to make sure that get_container_info() always returns strs,
not unicodes.
The long-term fix would be to get rid of simplejson entirely, as its
decoder can't make up its mind whether JSON strings should be Python
strs or unicodes, and that makes it really really easy to write bugs
like this.
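On Python 2 the behaviour looks roughly like this; whether the first
lookup comes back as str or unicode depends on the simplejson version
and whether its C speedups are in use:

    # -*- coding: utf-8 -*-
    import simplejson

    simplejson.loads(simplejson.dumps({'name': 'plain'}))['name']
    # may be str (ASCII-only input)
    simplejson.loads(simplejson.dumps({'name': u'first-cøntåîner'}))['name']
    # always unicode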
Change-Id: Ib20ea5fb884484a4246d7a21a9f1e2ffd82eb04f
Now the traceback goes all the way down to where the exception came
from, not just down to run_in_thread. Better for debugging.
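The usual Python 2 idiom behind this (a sketch with illustrative names,
not Swift's actual helper) is to capture sys.exc_info() in the worker
and re-raise it in the caller:

    import sys
    import threading

    def call_in_thread(func, *args):
        result = {}

        def runner():
            try:
                result['value'] = func(*args)
            except BaseException:
                result['exc_info'] = sys.exc_info()

        worker = threading.Thread(target=runner)
        worker.start()
        worker.join()
        if 'exc_info' in result:
            exc_type, exc_value, exc_tb = result['exc_info']
            # three-expression raise (Python 2) keeps the traceback
            # pointing into func, not into this wrapper
            raise exc_type, exc_value, exc_tb
        return result['value']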
Change-Id: Iac6acb843a6ecf51ea2672a563d80fa43d731f23
This commit adds a hook for WSGI applications
(e.g. proxy.server.Application) to modify their WSGI pipelines. This
is currently used by the proxy server to ensure that catch_errors is
present; if it is missing, it is inserted as the first middleware in
the pipeline.
This lets us write new, mandatory middlewares for Swift without
breaking existing deployments on upgrade.
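A hedged sketch of the kind of check this enables (the real hook and
loader code are more involved):

    def ensure_catch_errors(pipeline):
        # pipeline: ordered list of filter names, followed by the app
        if 'catch_errors' not in pipeline:
            pipeline.insert(0, 'catch_errors')
        return pipeline

    # e.g. ['proxy-logging', 'proxy-server']
    #  ->  ['catch_errors', 'proxy-logging', 'proxy-server']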
Change-Id: Ibed0f2edb6f80c25be182b3d4544e6a67c5050ad
If a client passes us a non-integer value for if-delete-at we'll now
properly report a 400 error instead of a 503.
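A hedged sketch of the validation (the header name is real; the
surrounding code is illustrative):

    from swift.common.swob import HTTPBadRequest

    def check_if_delete_at(req):
        value = req.headers.get('x-if-delete-at')
        if value is None:
            return None
        try:
            return int(value)
        except ValueError:
            # a client error (400), not a server error (503)
            raise HTTPBadRequest(request=req, body='Bad X-If-Delete-At')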
Closes-Bug: 1259300
Change-Id: I8bb0bb9aa158d415d4f491b5802048f0cd4d8ef6