...to the proxy-server.
The point is to allow the Swift proxy server to log accurate
client IP addresses when there is a proxy or SSL-terminator between the
client and the Swift proxy server. Example servers supporting this
PROXY protocol:
stud (v1 only)
stunnel
haproxy
hitch (v2 only)
varnish
See http://www.haproxy.org/download/1.7/doc/proxy-protocol.txt
The feature is enabled by adding this to your proxy config file:
[app:proxy-server]
use = egg:swift#proxy
...
require_proxy_protocol = true
The protocol specification states:
The receiver MUST be configured to only receive the protocol
described in this specification and MUST not try to guess
whether the protocol header is present or not.
so valid deployments are:
1) require_proxy_protocol = false (or missing; default is false)
and NOT behind a proxy that adds or proxies existing PROXY lines.
2) require_proxy_protocol = true
and IS behind a proxy that adds or proxies existing PROXY lines.
Specifically, in the default configuration, one cannot send PROXY lines
to the Swift proxy (no change from before this patch). When this
feature is enabled, one _must_ send PROXY lines.
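For illustration, a v1 PROXY line is just a single text line prepended to
the TCP stream ahead of the HTTP request. A minimal sketch of what an
upstream proxy/SSL-terminator sends (addresses and ports are made up):

import socket

# Connect to the Swift proxy and announce the original client connection
# before speaking HTTP. With require_proxy_protocol = true, the proxy
# logs 203.0.113.7 as the client address.
sock = socket.create_connection(('127.0.0.1', 8080))
sock.sendall(b'PROXY TCP4 203.0.113.7 192.168.0.11 56324 8080\r\n')
sock.sendall(b'GET /info HTTP/1.1\r\nHost: saio\r\n\r\n')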
Change-Id: Icb88902f0a89b8d980c860be032d5e822845d03a
This imports the openstack/swift3 package into the swift upstream
repository and namespace. It is almost a straightforward port, except for
the following items:
1. Rename swift3 namespace to swift.common.middleware.s3api
1.1 Also rename some conflicting class names (e.g. Request/Response)
2. Port unittests to test/unit/s3api dir to be able to run on the gate.
3. Port functests to test/functional/s3api and setup in-process testing
4. Port docs to doc dir, then address the namespace change.
5. Use get_logger() instead of global logger instance
6. Avoid global conf instance
In addition, fix various minor issues arising from those steps (e.g.
packages, dependencies, deprecated things).
The details and patch references in the work on feature/s3api are listed
at https://trello.com/b/ZloaZ23t/s3api (completed board)
Note that, because this is just a port, no new features have been developed
since the last swift3 release. In future work, Swift upstream may continue
to work on the remaining items for further improvements and the best possible
compatibility with Amazon S3. Please read the new docs for your deployment
and keep track of what may change in future releases.
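For reference, enabling the ported middleware should look roughly like the
following in the proxy config (placement is illustrative; see the new docs
for the recommended pipeline ordering and the keystone/s3token variant):

[filter:s3api]
use = egg:swift#s3api

...with s3api added to the proxy pipeline ahead of the auth middleware.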
Change-Id: Ib803ea89cfee9a53c429606149159dd136c036fd
Co-Authored-By: Thiago da Silva <thiago@redhat.com>
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
This is a follow-up patch to [1]. In a comment on [1], Tim suggested an
implementation with the same behavior but less code.
This change implements that suggestion. See [1] for more details.
[1]: https://review.openstack.org/#/c/547306/
Change-Id: Ifd8a0534fbdf41837977028c0c6ef99f1f6ac0f0
A couple times, I've seen tests fail in the gate because we got back a
404 while trying to clean out the test account. The story that gets us
here seems to be:
- One or more object servers take too long to respond to the initial
DELETE request, so the test client gets back a 503 and sleeps so
it can retry.
- Meanwhile, the servers finish writing their tombstones and want to
respond 204 (but probably *actually* respond 408 because the proxy
killed the connection).
- The test client sends its retry, and since the object servers now
have tombstones, it gets back a 404.
But the thing is, this is *outside of the test scope* anyway; we're just
trying to get back to a sane state. If it's gone, so much the better!
For an example of this, see the failures on patchset 3 of
https://review.openstack.org/#/c/534978 (which both failed for the same
reason on different tests).
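The cleanup therefore just needs to treat 404 as success; a minimal sketch
(the listing/retry/delete helpers and ResponseError are stand-ins for the
real test utilities):

for name in get_listing(conn, container):
    resp = retry(delete_object, conn, container, name)
    # A 404 here just means the object is already gone -- possibly because
    # an earlier, slow DELETE finally landed -- which is exactly the state
    # cleanup is trying to reach.
    if resp.status not in (204, 404):
        raise ResponseError(resp)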
Change-Id: I9ab2fd430d4800f9f55275959a20e30f09d9e1a4
test_tempurl_keys_hidden_from_acl_readonly temporarily changes a test env
parameter for a container HEAD, then reverts the change afterwards.
But if the HEAD fails with an exception, the change is not reverted.
With the change left in place, some other tests fail even though those
tests have no problems of their own.
This patch ensures the reversion by using try-finally.
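The pattern, roughly (names are placeholders for whatever env parameter the
test actually tweaks):

original = tf.cluster_info['tempurl'].get('some_param')  # hypothetical key
tf.cluster_info['tempurl']['some_param'] = 'temporary value'
try:
    resp = retry(head, container)          # the container HEAD under test
finally:
    # Always restore the shared test env, even if the HEAD raises.
    tf.cluster_info['tempurl']['some_param'] = original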
Change-Id: I8cd7928da6211e5516992fe9f2bc8e568bcab443
... and add support for SHA-256 and SHA-512 by default. This allows us
to start moving toward replacing SHA-1-based signatures. We've known
this would eventually be necessary for a while [1], and earlier this
year we saw the first SHA-1 collision [2].
Additionally, allow signatures to be base64-encoded, provided they start
with a digest name followed by a colon. Trailing padding is optional for
base64-encoded signatures, and both normal and "url-safe" modes are
supported. For example, all of the following SHA-1 signatures are
equivalent:
da39a3ee5e6b4b0d3255bfef95601890afd80709
sha1:2jmj7l5rSw0yVb/vlWAYkK/YBwk=
sha1:2jmj7l5rSw0yVb/vlWAYkK/YBwk
sha1:2jmj7l5rSw0yVb_vlWAYkK_YBwk=
sha1:2jmj7l5rSw0yVb_vlWAYkK_YBwk
(Note that "normal" base64 encodings will require that you url encode
all "+" characters as "%2B" so they aren't misinterpretted as spaces.)
This was done for two reasons:
1. A hex-encoded SHA-512 is rather lengthy at 128 characters -- 88
isn't *that* much better, but it's something.
2. This will allow us to more-easily add support for different
digests with the same bit length in the future.
Base64-encoding is required for SHA-512 signatures; hex-encoding is
supported for SHA-256 signatures so we aren't needlessly breaking from
what Rackspace is doing.
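For example, a client could compute a base64-encoded SHA-256 signature
along these lines (the key, path, and expiry are placeholders; the HMAC
body is the usual tempurl method/expires/path):

import base64
import hmac
from hashlib import sha256
from time import time

key = b'account-or-container-temp-url-key'
method, path = 'GET', '/v1/AUTH_test/cont/obj'
expires = int(time() + 3600)
hmac_body = ('%s\n%s\n%s' % (method, expires, path)).encode()
raw = hmac.new(key, hmac_body, sha256).digest()
sig = 'sha256:' + base64.urlsafe_b64encode(raw).decode().rstrip('=')
# sig goes in ?temp_url_sig=...&temp_url_expires=...; the url-safe
# alphabet avoids having to percent-encode '+' as '%2B'.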
[1] https://www.schneier.com/blog/archives/2012/10/when_will_we_se.html
[2] https://security.googleblog.com/2017/02/announcing-first-sha1-collision.html
Change-Id: Ia9dd1a91cc3c9c946f5f029cdefc9e66bcf01046
Related-Bug: #1733634
This patch updates the SLO middleware and SegmentedIterable to add
support for user-specified inlined-data segments. Such segments will
contain base64-encoded data to be added before/after an object-backed
segment within an SLO. To accommodate the potential extra data we
increase the default SLO maximum manifest size from 2MiB to 8MiB.
The default maximum number of segments remains 1000, but this will
only be enforced for object-backed segments.
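As a sketch, a manifest mixing object-backed and inlined-data segments
might look like this (paths and data are illustrative; the data value is
the base64 encoding of the inlined bytes):

manifest = [
    {"path": "/cont/seg_1", "etag": None, "size_bytes": None},
    {"data": "aGVsbG8gd29ybGQ="},   # inlined segment, not object-backed
    {"path": "/cont/seg_2", "etag": None, "size_bytes": None},
]
# PUT json.dumps(manifest) with ?multipart-manifest=put as usual; only the
# two object-backed segments count toward the segment limit.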
This patch is a prerequisite for a future patch enabling the
download of large objects as tarballs. The TLO patch will be added
as a dependent patch later.
UpgradeImpact
=============
During a rolling upgrade, an updated proxy may write a manifest that
out-of-date proxies will not be able to read. This will resolve itself
once the upgrade completes on all nodes.
Change-Id: Ib8dc216a84d370e6da7d6b819af79582b671d699
Why weren't we doing that before?? The etag should be the same as for
GET/HEAD, and by sending it, we can assure resuming clients that they're
downloading the same object even if they didn't include an If-Match
header.
Change-Id: I4ccbd1ae3a909ecb4606ef18211d1b868f5cad86
Related-Change: Ic11662eb5c7176fbf422a6fc87a569928d6f85a1
Functional tests for symlink and versioned writes run and fail even
if symlink is not enabled.
This patch fixes the functional tests to run only if both symlink and
versioned writes are enabled.
Change-Id: I5ffd0b6436e56a805784baf5ceb722effdf74884
You've got two test classes: TestContainer and TestContainerUTF8. They
each try to create the same set of containers with names of varying
lengths to make sure the container-name length limit is being honored.
Also, each test class tries to clean up pre-existing data in its
setUpClass method. If TestContainerUTF8 fails to delete a container
that TestContainer made, then its testContainerNameLimit method will
fail because the container PUT response has status 202 instead of 201,
which is because the container still existed from the prior test.
I've made the test consider both 201 and 202 as success. For purposes
of testing the maximum container name length, any 2xx is fine.
Change-Id: I7b343a8ed0d12537659c051ddf29226cefa78a8f
The existing test works fine if you're running the tests on an
all-in-one, but is pretty brittle if you aren't running them on the
one and only proxy-server they're hitting.
Add a 0.1s sleep to allow for *some* clock slippage between client and server.
Change-Id: Iacd08e9f703d08d0092b5e8eb53fe287ba1d1596
The functional test for versioning symlinks is better located in
test_versioned_writes where it can be added to
TestObjectVersioning. This saves duplicated versioned_writes specific
setup code in test_symlink, and has the benefit of the test being
repeated for each of the versioned writes test subclasses. With a
small refactor this includes the test now running with
x-history-location mode as well as x-versions-location mode.
Related-Change: I838ed71bacb3e33916db8dd42c7880d5bb9f8e18
Change-Id: If215446c558b61c1a8aea37ce6be8fcb5a9ea2f4
Add symbolic link ("symlink") object support to Swift. This
object will reference another object. GET and HEAD
requests for a symlink object will operate on the referenced object.
DELETE and PUT requests for a symlink object will operate on the
symlink object, not the referenced object, and will delete or
overwrite it, respectively.
POST requests are *not* forwarded to the referenced object and should
be sent directly. POST requests sent to a symlink object will
result in a 307 Temporary Redirect response.
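For example, creating a symlink is an ordinary zero-byte PUT carrying the
target header (endpoint, token, and names below are placeholders):

import requests

token = '<auth token>'
resp = requests.put(
    'http://saio:8080/v1/AUTH_test/cont/a-link',
    headers={'X-Auth-Token': token,
             # container/object the symlink points at
             'X-Symlink-Target': 'cont/target-obj'},
    data=b'')
# A subsequent GET or HEAD of cont/a-link operates on cont/target-obj.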
Historical information on symlink design can be found here:
https://github.com/openstack/swift-specs/blob/master/specs/in_progress/symlinks.rst.
https://etherpad.openstack.org/p/swift_symlinks
Co-Authored-By: Thiago da Silva <thiago@redhat.com>
Co-Authored-By: Janie Richling <jrichli@us.ibm.com>
Co-Authored-By: Kazuhiro MIYAHARA <miyahara.kazuhiro@lab.ntt.co.jp>
Co-Authored-By: Kota Tsuyuzaki <tsuyuzaki.kota@lab.ntt.co.jp>
Change-Id: I838ed71bacb3e33916db8dd42c7880d5bb9f8e18
Signed-off-by: Thiago da Silva <thiago@redhat.com>
These are better-covered by TestContainer.testContainerNameLimit and
TestFile.testNameLimit in the same file.
Change-Id: Ice48bc6492648613bc743b474d40892d7e4dcc64
Since Python 2.7, unittest in the standard library has included multiple
facilities for skipping tests by decorators as well as an exception.
Switch to that directly, rather than importing nose.
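That is, tests now use the standard library facilities directly, e.g.:

import unittest

class TestSomething(unittest.TestCase):
    @unittest.skipIf(True, 'example condition under which to skip')
    def test_skipped_by_decorator(self):
        pass

    def test_skipped_at_runtime(self):
        raise unittest.SkipTest('skip decided while the test is running')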
Change-Id: I4009033473ea24f0d0faed3670db844f40051f30
Currently, our integrity checking for objects is pretty weak when it
comes to object metadata. If the extended attributes on a .data or
.meta file get corrupted in such a way that we can still unpickle it,
we don't have anything that detects that.
This could be especially bad with encrypted etags; if the encrypted
etag (X-Object-Sysmeta-Crypto-Etag or whatever it is) gets some bits
flipped, then we'll cheerfully decrypt the cipherjunk into plainjunk,
then send it to the client. Net effect is that the client sees a GET
response with an ETag that doesn't match the MD5 of the object *and*
Swift has no way of detecting and quarantining this object.
Note that, with an unencrypted object, if the ETag metadatum gets
mangled, then the object will be quarantined by the object server or
auditor, whichever notices first.
As part of this commit, I also ripped out some mocking of
getxattr/setxattr in tests. It appears to be there to allow unit tests
to run on systems where /tmp doesn't support xattrs. However, since
the mock is keyed off of inode number and inode numbers get re-used,
there's lots of leakage between different test runs. On a real FS,
unlinking a file and then creating a new one of the same name will
also reset the xattrs; this isn't the case with the mock.
The mock was pretty old; Ubuntu 12.04 and up all support xattrs in
/tmp, and recent Red Hat / CentOS releases do too. The xattr mock was
added in 2011; maybe it was to support Ubuntu Lucid Lynx?
Bonus: now you can pause a test with the debugger, inspect its files
in /tmp, and actually see the xattrs along with the data.
Since this patch now uses a real filesystem for testing filesystem
operations, tests are skipped if the underlying filesystem does not
support setting xattrs (e.g. tmpfs, or more than 4k of xattrs on ext4).
References to "/tmp" have been replaced with calls to
tempfile.gettempdir(). This will allow setting the TMPDIR envvar in
test setup and getting an XFS filesystem instead of ext4 or tmpfs.
THIS PATCH SIGNIFICANTLY CHANGES TESTING ENVIRONMENTS
With this patch, every test environment will require TMPDIR to be
using a filesystem that supports at least 4k of extended attributes.
Neither ext4 nor tmpfs supports this. XFS is recommended.
So why all the SkipTests? Why not simply raise an error? We still need
the tests to run on the base image for OpenStack's CI system. Since
we were previously mocking out xattr, there wasn't a problem, but we
also weren't actually testing anything. This patch adds functionality
to validate xattr data, so we need to drop the mock.
`test.unit.skip_if_no_xattrs()` is also imported into `test.functional`
so that functional tests can import it from the functional test
namespace.
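A rough sketch of what such a helper can look like (the real implementation
may differ; the point is just to probe TMPDIR for xattr support):

import tempfile
import unittest
import xattr   # same third-party xattr package swift already depends on

def skip_if_no_xattrs():
    with tempfile.NamedTemporaryFile() as f:
        try:
            xattr.setxattr(f.fileno(), 'user.swift.test', b'0' * 4096)
        except (IOError, OSError):
            raise unittest.SkipTest(
                'TMPDIR does not support (enough) extended attributes')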
The related OpenStack CI infrastructure changes are made in
https://review.openstack.org/#/c/394600/.
Co-Authored-By: John Dickinson <me@not.mn>
Change-Id: I98a37c0d451f4960b7a12f648e4405c6c6716808
An SLO PUT requires that we HEAD every referenced object; as a result, it
can be a very time-intensive operation. This makes it difficult as a
client to differentiate between a proxy-server that's still doing work and
one that's crashed but left the socket open.
Now, clients can opt-in to receiving heartbeats during long-running PUTs
by including the query parameter
heartbeat=on
With heartbeating turned on, the proxy will start its response immediately
with 202 Accepted then send a single whitespace character periodically
until the request completes. At that point, a final summary chunk will be
sent which includes a "Response Status" key indicating success or failure
and (if successful) an "Etag" key indicating the Etag of the resulting SLO.
This mechanism is very similar to the way bulk extractions and deletions
work, and even the way SLO behaves for ?multipart-manifest=delete requests.
Note that this is opt-in: this prevents us from sending the 202 response
to existing clients that may mis-interpret it as an immediate indication
of success.
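A client-side sketch (endpoint, token, and manifest are placeholders):

import requests

token = '<auth token>'
manifest_json = '[{"path": "/cont/seg_1"}]'   # illustrative manifest
resp = requests.put(
    'http://saio:8080/v1/AUTH_test/cont/big-slo'
    '?multipart-manifest=put&heartbeat=on',
    headers={'X-Auth-Token': token},
    data=manifest_json)
# resp.status_code is 202 even if the PUT ultimately fails; the body is
# whitespace followed by a summary chunk carrying "Response Status" and,
# on success, the "Etag" of the resulting SLO.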
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Related-Bug: 1718811
Change-Id: I65cee5f629c87364e188aa05a06d563c3849c8f3
In test.functional.test_object.TestObject.setUp, we create a container
in account 2. However, if we've only got one account, we don't skip
this class, resulting in a TypeError down in requests somewhere and a
stack trace. Since we're using account 2 in setup, we should skip the
tests if account 2 is not configured.
Change-Id: I569d98baf071d2dce7cf34a9538070f00afda388
This looks like a case of copy-paste-itis. The cross-account-copy
functest is skipped if we have no test accounts configured, but not if
we have only one.
Change-Id: Ifbefdd9aeb98e3d02c536e9d29759f86ec9af6a1
X-Delete-After: 1 is known to be flakey; use 2 instead.
When the proxy receives an X-Delete-After header, it automatically
converts it to an X-Delete-At header based on the current time. So far,
so good. But in normalize_delete_at_timestamp we convert our
time.time() + int(req.headers['X-Delete-After'])
to a string representation of an integer and in the process always round
*down*. As a result, we lose up to a second worth of object validity,
meaning the object server can (rarely) respond 400, complaining that the
X-Delete-At is in the past.
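Concretely (times are illustrative):

# Proxy handles the PUT at, say, t=1000.9 with X-Delete-After: 1
x_delete_at = int(1000.9 + 1)   # 1001 -- rounded *down* from 1001.9
# If the object server evaluates the header at t >= 1001, it's already
# "in the past" and the server may reply 400. X-Delete-After: 2 leaves
# enough slack for the test to dodge that race.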
Change-Id: Ib5e5a48f5cbed0eade8ba3bca96b26c82a9f9d84
Related-Change: I643be9af8f054f33897dd74071027a739eaa2c5c
Related-Change: I10d3b9fcbefff3c415a92fa284a1ea1eda458581
Related-Change: Ifdb1920e5266aaa278baa0759fc0bfaa1aff2d0d
Related-Bug: #1597520
Closes-Bug: #1699114
Currently the functional tests fail if the storage_url contains a quoted
IPv6 address because we try to split on ':'.
But actually we don't need to split hostname and port only in order to
combine it back together later on. Use the standard urlparse() function
instead and work with the 'netloc' part of the URL which keeps hostname
and port together.
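For example:

from six.moves.urllib.parse import urlparse

parsed = urlparse('http://[fe80::1]:8080/v1/AUTH_test')
# netloc keeps host and port together, so there's no fragile ':' split:
parsed.netloc    # '[fe80::1]:8080'
parsed.hostname  # 'fe80::1'
parsed.port      # 8080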
Change-Id: I64589e5f2d6fb3cebc6768dc9e4de6264c09cbeb
Partial-Bug: 1656329
It was deprecated, and we discussed this topic at the Denver PTG
for the Queens cycle. The main motivation for this work is that the
deprecated post_as_copy option and its gate job block future symlink work.
Change-Id: I411893db1565864ed5beb6ae75c38b982a574476
While we're at it, have copy and copy_account raise ResponseErrors
on failure, similar to cluster_info, update_metadata, containers, info,
files, delete, initialize, read, sync_metadata, write, and post.
Related-Change: Ia8b92251718d10b1eb44a456f28d3d2569a30003
Change-Id: I9ef42d922a6b7dbf253f2f8f5df83965d8f47e0f
This patch adds support for retrieving the encryption root secret from
an external key management system. In practice, this is currently
limited to Barbican.
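A configuration sketch (option names are illustrative; see the encryption
docs for the full set of Barbican/Castellan settings):

[filter:kms_keymaster]
use = egg:swift#kms_keymaster
keymaster_config_path = /etc/swift/kms_keymaster.conf

...with the Barbican credentials and the key id of the root secret kept in
the referenced keymaster config file.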
Change-Id: I1700e997f4ae6fa1a7e68be6b97539a24046e80b
If swift-recon/swift-get-nodes/swift-object-info are used with the
swiftdir option, they will read rings from the given directory; however,
they still use /etc/swift/swift.conf to find the policies on the
current node.
This makes it impossible to maintain a local swift.conf copy (if you
don't have write access to /etc/swift) or check multiple clusters from
the same node.
Until now, swift-recon was also not usable with storage policy aliases;
this patch fixes that as well.
Closes-Bug: 1577582
Closes-Bug: 1604707
Closes-Bug: 1617951
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Co-Authored-By: Thiago da Silva <thiago@redhat.com>
Change-Id: I13188d42ec19e32e4420739eacd1e5b454af2ae3
Recently our gate started blowing up intermittently with a strange
case of mixed-up ports. Sometimes a functional test tries to
authorize on a port that's clearly an object server port, and
the like. As it turns out, eventlet developers added an unavoidable
SO_REUSEPORT into listen(), which makes listen(("localhost", 0))
reuse ports.
There's an issue about it:
https://github.com/eventlet/eventlet/issues/411
This patch is working around the problem while eventlet people
consider the issue.
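The effect can be demonstrated directly (behavior depends on the eventlet
version; with the unconditional SO_REUSEPORT the two listeners *may* end
up on the same ephemeral port):

import eventlet

sock1 = eventlet.listen(('localhost', 0))
sock2 = eventlet.listen(('localhost', 0))
# When these print the same port, requests intended for one test server
# get answered by another -- the port mix-ups seen in the gate.
print(sock1.getsockname())
print(sock2.getsockname())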
Change-Id: I67522909f96495a6a30e1acdb79835dce2189549
We've seen some failures in the gate like
==============================
Failed 1 tests - output below:
==============================
setUpModule (test.functional.test_account)
------------------------------------------
Captured traceback:
~~~~~~~~~~~~~~~~~~~
Traceback (most recent call last):
File "test/functional/test_account.py", line 33, in setUpModule
tf.setup_package()
File "test/functional/__init__.py", line 716, in setup_package
mem_object_server if in_mem_obj else object_server))
File "test/functional/__init__.py", line 621, in in_process_setup
create_account(AUTH_test)
File "test/functional/__init__.py", line 619, in create_account
assert(resp.status == 201)
AssertionError
...which aren't terribly useful in figuring out what went wrong.
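One way to make the failure more useful is to put the response details into
the assertion message (a sketch; the actual patch may differ):

# in create_account(), instead of a bare assert:
assert resp.status == 201, \
    'Expected 201 while creating account; got %s: %s' % (
        resp.status, resp.read())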
Change-Id: I3cd31bb480dc8508828fe21416bfae33bc0985b7