This only applies to post-sync REPLICATE calls, none of which actually
look at the response anyway.
Change-Id: I1de62140e7eb9a23152bb9fdb1fa0934e827bfda
The repo now runs under both Python 2 and 3, so update hacking to
version 2.0, which supports both. Note that the latest hacking
release, 3.0, only supports Python 3.
Fix the problems it found.
Remove hacking and friends from lower-constraints; they are not needed
for installation.
Change-Id: I9bd913ee1b32ba1566c420973723296766d1812f
Reserve the namespace starting with the NULL byte for internal
use-cases. Backend services will allow path names to include the NULL
byte in urls and validate names in the reserved namespace. Database
services will filter all names starting with the NULL byte from
responses unless the request includes the header:
X-Backend-Allow-Reserved-Names: true
The proxy server will not allow path names to include the NULL byte in
urls unless a middleware has set the X-Backend-Allow-Reserved-Names
header. Middlewares can use the reserved namespace to create objects
and containers that cannot be directly manipulated by clients. Any
objects and bytes created in the reserved namespace will be aggregated
to the user's account totals.
When deploying internal proxies, developers and operators may configure
the gatekeeper middleware to translate the X-Allow-Reserved-Names header
to the Backend header so they can manipulate the reserved namespace
directly through the normal API.
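As a rough illustration (the null-byte name helper and the wsgi environ
handling below are assumptions for the example, not the exact API), a
middleware could do something like:

    # Sketch only: a middleware creating names in the reserved namespace.
    NULL_BYTE = '\x00'

    def reserved_name(*parts):
        # e.g. reserved_name('versions', 'obj') -> '\x00versions\x00obj'
        return NULL_BYTE + NULL_BYTE.join(parts)

    class ExampleReservedMiddleware(object):
        def __init__(self, app):
            self.app = app

        def __call__(self, env, start_response):
            # Tell the proxy this internal request may use reserved names;
            # without this header null bytes in the path are rejected.
            env['HTTP_X_BACKEND_ALLOW_RESERVED_NAMES'] = 'true'
            return self.app(env, start_response)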
UpgradeImpact: it is not safe to roll back from this change
Change-Id: If912f71d8b0d03369680374e8233da85d8d38f85
In test_replication_servers_working, we delete a bunch of directories
without deleting hashes.pkl, then verify that nothing at that level is a
directory.
This would be trivially true except that throughout the test, we have
the replicators running constantly. However, we never verified that the
replicators actually *have* run and had a chance to re-create the
missing directories.
Now, stop the replicators before doing the deletes, run them
synchronously between doing the deletes and verifying that there are no
directories, and start them again before the final set of assertions.
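Roughly, the new ordering looks like this (a sketch using
swift.common.manager; the actual probe-test attributes may differ):

    from swift.common.manager import Manager

    replicators = Manager(['object-replicator'])

    replicators.stop()   # no background replication while dirs are deleted
    # ... delete the suffix/partition directories under test ...
    replicators.once()   # run replication synchronously, exactly once
    # ... assert that nothing at that level is a directory ...
    replicators.start()  # resume background replication for the rest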
Change-Id: I841f8250eb7abfb0fcdfca5c106f65e6e94dce0c
This commit replaces the boolean replication_one_per_device with an
integer replication_concurrency_per_device. The new configuration
parameter is passed to utils.lock_path(), which now accepts as an
argument a limit on the number of locks that can be acquired for a
specific path.
Instead of trying to lock path/.lock, utils.lock_path() now tries to
lock files path/.lock-X, where X is in the range [0, N), N being the
limit on the number of locks allowed for the path. The default limit
is set to 1.
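A simplified sketch of the scheme (not the actual utils.lock_path()
implementation; retry/timeout handling is omitted):

    import errno
    import fcntl
    import os
    from contextlib import contextmanager

    @contextmanager
    def lock_path_sketch(path, limit=1):
        # Try .lock-0 .. .lock-(N-1); holding any one slot counts as
        # holding the path, so at most `limit` holders at a time.
        fds = []
        try:
            for i in range(limit):
                fd = os.open(os.path.join(path, '.lock-%d' % i),
                             os.O_WRONLY | os.O_CREAT)
                fds.append(fd)
                try:
                    fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
                    yield fd
                    return
                except IOError as err:
                    if err.errno not in (errno.EACCES, errno.EAGAIN):
                        raise
            raise Exception('%d locks already held on %s' % (limit, path))
        finally:
            for fd in fds:
                os.close(fd)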
Change-Id: I3c3193344c7a57a8a4fc7932d1b10e702efd3572
Probably the most common format for documenting arguments is reST field
lists [1]. This change updates some docstrings to comply with the
field list syntax.
[1] http://sphinx-doc.org/domains.html#info-field-lists
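For example, a docstring in that style (the function here is made up
for illustration):

    def select_node(ring, part):
        """
        Pick a primary node for a partition.

        :param ring: a swift.common.ring.Ring instance
        :param part: the partition index
        :returns: a single node dict
        :raises ValueError: if the partition has no assigned devices
        """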
Change-Id: I87e77a9bbd5bcb834b35460ce0adff5bc59d9168
Fixed the comment in the test to match exactly what's
being removed and what the expected result is.
Also removed the extra '/' parameter, which was causing the assert to
test at the wrong directory level.
Change-Id: I2f27f0d12c08375c61047a3f861c94a3dd3915c6
Signed-off-by: Thiago da Silva <thiago@redhat.com>
Prior to the Related-Change no suffixes were written to hashes.invalid
until after initial suffix hashing created hashes.pkl - and in our probe
test the only updates to the partition occurred before replication.
Before the Related-Change, with sync_method = rsync, it was possible
when starting from a clean slate to write data and replicate from a
handoff partition without generating a hashes.invalid file in any
primary.
After the Related-Change it was no longer possible to write data
without generating a hashes.invalid file; however, with sync_method =
rsync the replicator could still replicate data into a partition that
never received an update directly and therefore has no hashes.invalid.
When using sync_method = ssync replication updates the hashes.invalid
like any normal update to the partition and therefore all partitions
always have a hashes.invalid.
This change opts to ignore these implementation details in the probe
tests when comparing the files between synced partitions, by
black-listing these known cache files and only validating that the
diskfile's on-disk files are in sync.
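Roughly, the comparison now works like this (a sketch; helper names in
the actual test differ):

    import os

    # consistency-cache files that replication may or may not create;
    # they are not part of the diskfile's on-disk state
    IGNORED_FILES = ('hashes.pkl', 'hashes.invalid', '.lock')

    def object_file_listing(part_path):
        listing = set()
        for root, dirs, files in os.walk(part_path):
            for name in files:
                if name not in IGNORED_FILES:
                    listing.add(os.path.relpath(
                        os.path.join(root, name), part_path))
        return listing

    # the test then asserts the listings of synced partitions match:
    # assertEqual(object_file_listing(primary), object_file_listing(handoff))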
Related-Change-Id: I2b48238d9d684e831d9777a7b18f91a3cef57cd1
Change-Id: Ia9c50d7bc1a74a17c608a3c3cfb8f93196fb709d
Closes-Bug: #1663021
I changed asserts to more specific assert methods,
e.g. from assertTrue(sth == None) to assertIsNone(sth), or
assertTrue(isinstance(inst, type)) to assertIsInstance(inst, type), or
assertTrue(not sth) to assertFalse(sth).
The code gets more readable, and a better description will be shown on
failure.
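Concretely, the pattern is (names are placeholders):

    import unittest

    class ExampleTest(unittest.TestCase):
        def test_example(self):
            resp, node, errors = None, {'ip': '127.0.0.1'}, []
            # instead of assertTrue(resp == None),
            # assertTrue(isinstance(node, dict)) and assertTrue(not errors):
            self.assertIsNone(resp)
            self.assertIsInstance(node, dict)
            self.assertFalse(errors)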
Change-Id: I3768faa568e3964e726ecc48ac8cb133cb088284
This patch adds the erasure code reconstructor. It follows the
design of the replicator but:
- There is no notion of update() or update_deleted().
- There is a single job processor
- Jobs are processed partition by partition.
- At the end of processing a rebalanced or handoff partition, the
reconstructor will remove successfully reverted objects if any.
There are also various ssync changes, such as the addition of a
reconstruct_fa() function, called from ssync_sender, which performs
the actual reconstruction while sending the object to the receiver.
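The rebuild that reconstruct_fa() ultimately performs is a PyECLib
fragment reconstruction; a standalone sketch (the k/m values and
ec_type are placeholders, and the real code streams the surviving
fragments from the other primaries over ssync):

    from pyeclib.ec_iface import ECDriver

    driver = ECDriver(k=4, m=2, ec_type='liberasurecode_rs_vand')

    data = b'some object body' * 1024
    fragments = driver.encode(data)          # k + m fragment payloads

    # pretend fragment index 3 lives on the node being rebuilt
    available = fragments[:3] + fragments[4:]
    rebuilt = driver.reconstruct(available, [3])[0]
    # rebuilt is the payload to send to the node that holds index 3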
Co-Authored-By: Alistair Coles <alistair.coles@hp.com>
Co-Authored-By: Thiago da Silva <thiago@redhat.com>
Co-Authored-By: John Dickinson <me@not.mn>
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Co-Authored-By: Tushar Gohad <tushar.gohad@intel.com>
Co-Authored-By: Samuel Merritt <sam@swiftstack.com>
Co-Authored-By: Christian Schwede <christian.schwede@enovance.com>
Co-Authored-By: Yuan Zhou <yuan.zhou@intel.com>
blueprint ec-reconstructor
Change-Id: I7d15620dc66ee646b223bb9fff700796cd6bef51
* move get_to_final_state into ProbeTest
* get rid of kill_servers
* add replicators manager and updaters manager to ProbeTest
(this is all going someplace, i promise)
Change-Id: I8393a2ebc0d04051cae48cc3c49580f70818dbf2
* refactor probe tests to use probe.common.ProbeTest
* move reset_environment functionality to ProbeTest.setUp()
* choose rings and policies that meet the criteria - raise SkipTest if
nothing matches
* replace all AssertionErrors in setup with SkipTest
Change-Id: Id56c497d58083f5fd55f5283cdd346840df039d3
Add a headers param to direct_client.direct_get_object, which is used
in probetests to pass through the X-Storage-Policy-Index header.
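Typical probe-test style usage might look like (the ring path and the
policy index value are placeholders):

    from swift.common import direct_client
    from swift.common.ring import Ring

    ring = Ring('/etc/swift', ring_name='object')
    part, nodes = ring.get_nodes('AUTH_test', 'cont', 'obj')

    headers = {'X-Storage-Policy-Index': '1'}
    resp_headers, body = direct_client.direct_get_object(
        nodes[0], part, 'AUTH_test', 'cont', 'obj', headers=headers)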
DocImpact
Implements: blueprint storage-policies
Change-Id: I19adbbcefbc086c8467bd904a275d55cde596412
* Fixed an issue with running probetests with the latest update
of python-swiftclient that removed eventlet
* Fixed an issue with the replication server tests so they do not
require hard-coded paths
Change-Id: Ibbf727ae99c0f3893ae58e270e2f879a1f618e49
Support a separate replication IP address:
- Added a new function in utils that provides the ability to select a
separate IP address for the replication service.
- The db_replicator and object replicators were changed; the
replication process now uses the new function.
Replication network parameters:
- Support for the replication network fields (replication_ip,
replication_port) was added to the device dictionary in the
swift-ring-builder script.
- Changes were made to support the new fields in the search, show and
set_info functions.
Implementation of replication servers:
- Separate replication servers use the same code as normal replication
servers, but with the replication_server parameter set to True. When
using a separate replication network, the non-replication servers set
replication_server = False. When there is no separate replication
network (the default case), replication_server is not included in the
config.
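For example, a device dict carrying the extra fields might look like
this (values are placeholders), while a dedicated replication server's
object-server config sets replication_server = true and the
client-facing server sets it to false:

    dev = {
        'id': 0, 'region': 1, 'zone': 1, 'weight': 100.0,
        'ip': '10.0.0.1', 'port': 6000, 'device': 'sda',
        'replication_ip': '10.1.0.1', 'replication_port': 6010,
    }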
DocImpact
Change-Id: Ie9af5bdcdf9241c355e36053ca4adfe49dc35bd0
Implements: blueprint dedicated-replication-network