The reason for this is that having origin in the get_info calls causes an
infinite loop. The way that code was written, it relies on GETorHEAD_base to
populate the data; the only problem is that the HEAD call is wrapped by
cors_validation, which calls get_info, and round and round we go. IMO get_info
should be refactored not to work this way (relying on this other call to do
stuff behind the scenes so that your data magically appears), because it seems
pretty prone to breaking. But I'll let somebody else do that :).
Fixes bug 1270039
Change-Id: Idad3cedd965e0d5fb068b062fe8fef301c87b75a
Fix Error 400 Header Line Too Long when using Identity v3 PKI Tokens
Uses swift.conf max_header_size option to set wsgi.MAX_HEADER_LINE,
allowing the operator to customize this parameter.
The default value has been left at 8192 to avoid unexpected
configuration changes on deployed platforms. The max_header_size option
has to be increased (for example to 16384) to accommodate large
Identity v3 PKI tokens, including those with more than 7 catalog entries.
The default max header line size of 8192 is exceeded in the following
scenario:
- Auth tokens generated by Keystone v3 API include the catalog.
- Keystone's catalog contains more than 7 services.
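For example, in swift.conf (assuming the option sits in the
[swift-constraints] section alongside the other size limits):

```
[swift-constraints]
# raise from the 8192 default to fit large Keystone v3 PKI tokens
max_header_size = 16384
```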
Similar fixes have been merged in other projects.
Change-Id: Ia838b18331f57dfd02b9f71d4523d4059f38e600
Closes-Bug: 1190149
If you GET /info?swiftinfo_sig=<HMAC>&swiftinfo_expires=<TIME>, then
the response contains any admin-only information that's been
registered (via calls to register_swift_info(admin=True)).
The bad news is that the info controller isn't using streq_const_time
to compare the valid HMAC signatures with the passed-in one, leading
to a possible timing attack.
The good news is that this isn't a security hole since there's
absolutely nothing interesting in the admin-only section yet, so even
if an attacker does suss out a valid signature, they don't learn
anything. However, we should still fix this in case anything
interesting ever does get added.
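For illustration, a constant-time comparison along the lines of
streq_const_time can be sketched like this (a simplified stand-in, not
Swift's actual implementation):

```python
def streq_const_time(s1, s2):
    """Compare two strings in time that depends only on their length,
    so an attacker can't learn a valid signature one byte at a time."""
    if len(s1) != len(s2):
        return False
    result = 0
    for a, b in zip(s1, s2):
        result |= ord(a) ^ ord(b)  # accumulate differences without branching
    return result == 0
```

The loop always runs to completion, unlike a plain `==`, which returns as
soon as it finds the first mismatching byte.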
Change-Id: I28b6814def67200ddaa6272e09f6a55fb517fd97
Unit tests in test/unit have only one dependency on swiftclient, in
test_direct_client.py. It can easily be avoided, and this patch
removes it.
Change-Id: Ic1c78bc7f7fe426e8f7d8209a783342a0c4f071f
Use constant time comparison when evaluating tempURL to avoid timing
attacks (CVE-2014-0006).
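A sketch of the tempURL signing scheme, using hmac.compare_digest as the
constant-time check (the key and path are made up; Swift itself uses its
own streq_const_time helper rather than compare_digest):

```python
import hmac
from hashlib import sha1
from time import time

key = b'mykey'  # the account's Temp-URL key (invented value)
method = 'GET'
expires = int(time()) + 3600
path = '/v1/AUTH_account/container/object'

# the tempURL signature covers the method, expiry time, and path
sig = hmac.new(key, ('%s\n%s\n%s' % (method, expires, path)).encode(),
               sha1).hexdigest()

# compare against the client-supplied signature in constant time,
# instead of a plain `==` that leaks timing information
candidate = sig  # stand-in for the value from the query string
valid = hmac.compare_digest(sig, candidate)
```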
Fixes bug 1265665
Change-Id: I11e4ad83cc4077e52adf54a0bd0f9749294b2a48
These are the ones that affect what requests a client can make; the
others are just time and speed limits, so they weren't as interesting.
Change-Id: I21c19c950227f02725aafc309a3996fc6749a383
This way, with zero additional effort, SLO will support enhancements
to object storage and retrieval, such as:
* automatic resume of GETs on broken connection (today)
* storage policies (in the near future)
* erasure-coded object segments (in the far future)
This also lets SLOs work with other sorts of hypothetical third-party
middleware, for example object compression or encryption.
Getting COPY to work here is sort of a hack; the proxy's object
controller now checks for "swift.copy_response_hook" in the request's
environment and feeds the GET response (the source of the new object's
data) through it. This lets a COPY of a SLO manifest actually combine
the segments instead of merely copying the manifest document.
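The hook contract can be pictured roughly like this; FakeRequest and
feed_through_copy_hook are hypothetical names for illustration, and only
the "swift.copy_response_hook" environ key comes from the actual change:

```python
class FakeRequest:
    """Hypothetical stand-in for a WSGI request object."""
    def __init__(self, environ):
        self.environ = environ

def feed_through_copy_hook(req, source_response):
    # the proxy's object controller checks for a hook planted in the
    # environment by middleware such as SLO; with no hook present, the
    # source GET response is used as-is
    hook = req.environ.get('swift.copy_response_hook', lambda resp: resp)
    return hook(source_response)
```

With SLO in the pipeline, the hook is what turns a GET of the manifest
document into the concatenated segment data before it is re-uploaded.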
Updated ObjectController to expect a response's app_iter to be an
iterable, not just an iterator. (PEP 333 says "When called by the
server, the application object must return an iterable yielding zero
or more strings." ObjectController was just being too strict.) This
way, SLO can re-use the same response-generation logic for GET and
COPY requests.
Added a (sort of hokey) mechanism to allow middlewares to close
incompletely-consumed app iterators without triggering a warning. SLO
does this when it realizes it's performed a ranged GET on a manifest;
it closes the iterable, removes the range, and retries the
request. Without this change, the proxy logs would get 'Client
disconnected on read' in them.
DocImpact
blueprint multi-ring-large-objects
Change-Id: Ic11662eb5c7176fbf422a6fc87a569928d6f85a1
The new utils function validate_hash_conf allows you to programmatically
reload swift.conf and the hash path global vars HASH_PATH_SUFFIX and
HASH_PATH_PREFIX when they are invalid.
When you load swift.common.utils before you have a swift.conf there's no good
way to force a re-read of swift.conf and repopulate the hash path config
options - short of restarting the process or reloading the module - both of
which are hard to unittest. This should be no worse in general and in some
cases easier.
Change-Id: I1ff22c5647f127f65589762b3026f82c9f9401c1
Summary of the new configuration option:
The cluster operators add the container_sync middleware to their
proxy pipeline and create a container-sync-realms.conf for their
cluster and copy this out to all their proxy and container servers.
This file specifies the available container sync "realms".
A container sync realm is a group of clusters with a shared key that
have agreed to provide container syncing to one another.
The end user can then set the X-Container-Sync-To value on a
container to //realm/cluster/account/container instead of the
previously required URL.
The allowed hosts list is not used with this configuration and
instead every container sync request sent is signed using the realm
key and user key.
This offers better security, as source hosts can be faked much more
easily than per-request signatures. Replaying signed requests,
assuming it could easily be done, shouldn't be an issue, as the
X-Timestamp is part of the signature and so a replay would just
short-circuit as already current or superseded.
This also makes configuration easier for the end user, especially
with difficult networking situations where a different host might
need to be used for the container sync daemon since it's connecting
from within a cluster. With this new configuration option, the end
user just specifies the realm and cluster names and that is resolved
to the proper endpoint configured by the operator. If the operator
changes their configuration (key or endpoint), the end user does not
need to change theirs.
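An illustrative container-sync-realms.conf (the realm name, cluster
names, key, and hosts are all invented):

```
[us]
key = 9acf2c7e
cluster_dfw1 = http://dfw1.example.com:8080/v1/
cluster_ord1 = http://ord1.example.com:8080/v1/
```

An end user in this realm could then point a container at
X-Container-Sync-To: //us/ord1/AUTH_account/container, and the operator's
file resolves "us/ord1" to the proper endpoint.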
DocImpact
Change-Id: Ie1704990b66d0434e4991e26ed1da8b08cb05a37
All the other tests have license headers, so this one should too.
I picked 2013 for the copyright year because that's when "git log"
says it was first and last touched.
Change-Id: Idd41a179322a3383f6992e72d8ba3ecaabd05c47
A note was added stating that the same limitations apply to
account quotas as to container quotas. An example of uploads
without a content-length header was added.
Related-Bug: 1267659
Change-Id: Ic29b527cb71bf5903c2823844a1cf685ab6813dd
We attempt to get the code coverage (with branch coverage) to 100%,
but fall short due to interactions between coverage.py and
CPython's peephole optimizer. See:
https://bitbucket.org/ned/coveragepy/issue/198/continue-marked-as-not-covered
In the main diskfile module, we remove the check for a valid
"self._tmppath" since it is only one of a number of fields that could
be verified and it was not worth trying to get coverage for it. We
also remove the try / except around the close() method call in the
DiskFileReader's app_iter_ranges() method since it will never be
called in a context that will raise a quarantine exception (by
definition ranges can't generate a quarantine event).
We also:
* fix where quarantine messages are checked to ensure the
generator is actually executed before the check
* in new and modified tests:
* use assertTrue in place of assert_
* use assertEqual in place of assertEquals
* fix references to the reserved word "object"
Change-Id: I6379be04adfc5012cb0b91748fb3ba3f11200b48
To get the proxy's read affinity to work, you have to set both
"read_affinity = <stuff>" and "sorting_method = affinity" in the proxy
config. If you set the first but not the second, then you don't get
read affinity, and Swift doesn't help you determine why not.
Now the proxy will emit a warning message if read_affinity is set but
sorting_method is a value other than "affinity", so if you check your
logs to see why it isn't working, you'll get a hint.
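For example, both options have to be set together in the proxy config
(the affinity values here are illustrative):

```
[app:proxy-server]
use = egg:swift#proxy
sorting_method = affinity
read_affinity = r1z1=100, r1=200
```

Setting read_affinity alone, with sorting_method left at its default,
is the case that now triggers the warning.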
Note that the message comes out twice per proxy process, so with 2
workers you'll see the warning 6 times on startup (2 for the master
plus 2 for each worker). It's sort of annoying, but at least it's not
per-request.
Bonus docstring fix: remove a sentence that's not true
Change-Id: Iad37d4979a1b7c45c0e3d1b83336dbcf7a68a0c9
When you're trying to troubleshoot why all your objects are getting
quarantined, it's really nice to have some logging. If nothing else,
it's nice to know which process did it.
Change-Id: I6e8be6df938659f7392891df9336ed70bd155706
Move the tests from functionalnosetests under functional, so we no
longer have two separate trees for functional tests. This also drops
the 'nose' name from the directory, so that it doesn't cause
confusion if we move to testr. Further, since there are no longer two
test runs in .functests, it now looks very close to the other two.
Change-Id: I8de025c29d71f05072e257df24899927b82c1382