Adds encryption middlewares.
All object servers and proxy servers should be upgraded before
introducing encryption middleware.
The encryption middleware should first be introduced with its
disable_encryption option set to True. Once all proxies have the
encryption middleware installed, this option may be set to False
(the default).
Increases constraints.py:MAX_HEADER_COUNT by 4 to allow for
headers generated by encryption-related middleware.
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Co-Authored-By: Christian Cachin <cca@zurich.ibm.com>
Co-Authored-By: Mahati Chamarthy <mahati.chamarthy@gmail.com>
Co-Authored-By: Peter Chng <pchng@ca.ibm.com>
Co-Authored-By: Alistair Coles <alistair.coles@hpe.com>
Co-Authored-By: Jonathan Hinson <jlhinson@us.ibm.com>
Co-Authored-By: Hamdi Roumani <roumani@ca.ibm.com>
UpgradeImpact
Change-Id: Ie6db22697ceb1021baaa6bddcf8e41ae3acb5376
Rewrite server-side copy and the 'object post as copy' feature as
middleware to simplify the PUT method in the object controller code.
COPY is no longer a verb implemented as a public method in the proxy
application.
The server-side copy middleware is inserted to the left of the dlo, slo
and versioned_writes middlewares in the proxy server pipeline. As a
result, the dlo and slo copy_hooks are no longer required. SLO manifests
are now validated when copied, so when copying a manifest to another
account the referenced segments must be readable in that account for the
copy to succeed (previously this validation was not performed, meaning
the manifest was copied but could be unusable if the segments were not
readable).
With this change, there should be no change in functionality or existing
behavior, as demonstrated by the fact that (almost) no changes were
required to existing functional tests.
Some notes (for operators):
* The middleware is required to be auto-inserted before the slo, dlo
  and versioned_writes middlewares.
* Turning off server-side copy is not configurable.
* object_post_as_copy is no longer a configurable option of the proxy
  server but of this middleware. However, for a smooth upgrade, the
  config option set in the proxy server app is also read.
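For illustration only (not part of this change), a server-side copy can
be requested either with a PUT carrying an X-Copy-From header or with
the COPY verb carrying a Destination header. The sketch below uses the
python-requests library with a hypothetical storage URL and token; it is
just a reminder of the client-facing API that this middleware now serves.

    import requests

    # hypothetical values for illustration only
    storage_url = 'http://proxy.example.com/v1/AUTH_test'
    token = 'AUTH_tk_example'

    # Copy by PUT with X-Copy-From: the new object is written from the
    # source object's data rather than from the (empty) request body.
    requests.put(storage_url + '/dst_container/dst_obj', data=b'',
                 headers={'X-Auth-Token': token,
                          'X-Copy-From': '/src_container/src_obj'})

    # Equivalent copy using the COPY verb and a Destination header.
    requests.request('COPY', storage_url + '/src_container/src_obj',
                     headers={'X-Auth-Token': token,
                              'Destination': '/dst_container/dst_obj'})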
DocImpact: Introducing server side copy as middleware
Co-Authored-By: Alistair Coles <alistair.coles@hpe.com>
Co-Authored-By: Thiago da Silva <thiago@redhat.com>
Change-Id: Ic96a92e938589a2f6add35a40741fd062f1c29eb
Signed-off-by: Prashanth Pai <ppai@redhat.com>
Signed-off-by: Thiago da Silva <thiago@redhat.com>
Rewrite object versioning as middleware to simplify the PUT method
in the object controller.
The functionality remains basically the same, the only major difference
being that SLO manifest files can now be versioned. DLO manifests are
still not supported as part of this patch.
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
DocImpact
Change-Id: Ie899290b3312e201979eafefb253d1a60b65b837
Signed-off-by: Thiago da Silva <thiago@redhat.com>
Signed-off-by: Prashanth Pai <ppai@redhat.com>
This is a tool to help developers quantify changes to the ring
builder. It takes a scenario (JSON file) describing the builder's
basic parameters (part_power, replicas, etc.) and a number of
"rounds", where each round is a set of operations to perform on the
builder. For each round, the operations are applied, and then the
builder is rebalanced until it reaches a steady state.
The idea is that a developer observes the ring builder behaving
suboptimally, writes a scenario to reproduce the behavior, modifies
the ring builder to fix it, and references the scenario with the
commit so that others can see that things have improved.
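As a rough illustration, a scenario might look like the following Python
dict, which would then be written out as JSON. The exact key names and
the operation format shown here are assumptions for the example, not a
specification; see the tool's own documentation for the real schema.

    import json

    scenario = {
        'part_power': 10,          # builder's basic parameters
        'replicas': 3,
        'random_seed': 42,
        'rounds': [
            # each round is a set of operations applied before the
            # builder is rebalanced to a steady state
            [['add', 'r1z1-10.0.0.1:6000/sda', 100],
             ['add', 'r1z2-10.0.0.2:6000/sda', 100]],
            [['set_weight', 0, 150]],
            [['remove', 1]],
        ],
    }

    with open('scenario.json', 'w') as f:
        json.dump(scenario, f, indent=2)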
I decided to write this after writing my fourth or fifth hacky one-off
script to reproduce some bad behavior in the ring builder.
Change-Id: I114242748368f142304aab90a6d99c1337bced4c
This patch adds the erasure code reconstructor. It follows the
design of the replicator but:
- There is no notion of update() or update_deleted().
- There is a single job processor.
- Jobs are processed partition by partition.
- At the end of processing a rebalanced or handoff partition, the
reconstructor will remove successfully reverted objects if any.
It also contains various ssync changes, such as the addition of a
reconstruct_fa() function, called from ssync_sender, which performs the
actual reconstruction while sending the object to the receiver.
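The fragment rebuild itself is delegated to PyECLib. The sketch below
shows only the underlying library operation (encode a payload, lose a
fragment, reconstruct it); the k/m values and ec_type are illustrative,
and the real reconstruct_fa() wires this into ssync rather than
operating on local byte strings.

    from pyeclib.ec_iface import ECDriver

    # illustrative EC parameters; a real storage policy defines its own
    driver = ECDriver(k=4, m=2, ec_type='liberasurecode_rs_vand')

    data = b'some object body' * 1024
    frags = driver.encode(data)          # k + m fragment archives

    missing_index = 3
    available = frags[:missing_index] + frags[missing_index + 1:]

    # rebuild the missing fragment from the surviving ones;
    # rebuilt[0] carries the payload of the lost fragment
    rebuilt = driver.reconstruct(available, [missing_index])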
Co-Authored-By: Alistair Coles <alistair.coles@hp.com>
Co-Authored-By: Thiago da Silva <thiago@redhat.com>
Co-Authored-By: John Dickinson <me@not.mn>
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Co-Authored-By: Tushar Gohad <tushar.gohad@intel.com>
Co-Authored-By: Samuel Merritt <sam@swiftstack.com>
Co-Authored-By: Christian Schwede <christian.schwede@enovance.com>
Co-Authored-By: Yuan Zhou <yuan.zhou@intel.com>
blueprint ec-reconstructor
Change-Id: I7d15620dc66ee646b223bb9fff700796cd6bef51
Commit 7a192987c0 sets up swift for translation, but the compile_catalog
directory option points to the wrong location to scan for po files.
Change-Id: Id4dd24ddfde735ef8ef064882bea045361b5db90
Closes-Bug: #1367086
To start translation of swift, we need to initially import the
translation file - and place it at the proper place so that
the usual CI scripts can handle it.
The proper place for all Python projects is
$PROJECT/locale/$PROJECT.pot, so move locale/$PROJECT.pot to the new
location and regenerate the file.
Update setup.cfg with the new paths.
Further imports will be done by the OpenStack Proposal bot.
Change-Id: Ide4da91f2af71db529f4a06d6b1e30ba79883506
Partial-Bug: #608725
Closes-Bug: #1082805
This daemon will take objects that are in the wrong storage policy and
move them to the right one, or take delete requests that went to the
wrong storage policy and apply them to the right one. It operates on a
queue similar to the object-expirer's queue.
Discovering that an object is in the wrong policy will be done in
subsequent commits by the container replicator; this is the daemon that
handles such objects once they are found.
Like the object expirer, you only need to run one of these per cluster;
see etc/container-reconciler.conf.
DocImpact
Implements: blueprint storage-policies
Change-Id: I5ea62eb77ddcbc7cfebf903429f2ee4c098771c9
The profile middleware provides a tool to profile Swift
code on the fly and collect statistical data for performance
analysis. A simple native Web UI is also provided to help
query and visualize the data.
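As a rough sketch of the technique only (not the middleware added here),
a WSGI filter can wrap each request in a cProfile profiler and dump the
collected statistics for later analysis; the class name and log_dir path
below are hypothetical.

    import cProfile
    import time

    class ProfilingFilter(object):
        """Minimal illustrative WSGI profiler, not the real middleware."""

        def __init__(self, app, log_dir='/tmp/swift-profile'):
            self.app = app
            self.log_dir = log_dir   # assumed to exist already

        def __call__(self, env, start_response):
            profiler = cProfile.Profile()
            profiler.enable()
            try:
                # profiles the call into the application
                return self.app(env, start_response)
            finally:
                profiler.disable()
                # a real implementation would aggregate these for the UI
                profiler.dump_stats('%s/%d.prof' % (
                    self.log_dir, int(time.time() * 1e6)))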
Change-Id: I6a1554b2f8dc22e9c8cd20cff6743513eb9acc05
Implements: blueprint profiling-middleware
This is a very simple Swift tool to retrieve information
about an account that is located on a storage node.
One can call the tool with a given account db file
as it is stored on the storage node system.
It will then return various information about that account.
Change-Id: Ibfeee790adc000fc177b4b3c03d22ff785fda325
This is a very simple Swift tool to retrieve information
about a container that is located on a storage node.
One can call the tool with a given container db file
as it is stored on the storage node system.
It will then return various information about that container.
Change-Id: Ifebaed6c51a9ed5fbc0e7572bb43ef05d7dd254b
Just use import to make scripts available in bin/ instead of
creating them during setup.py install.
Change-Id: I7318bbb77f6564ed58736887e711e1c497873471
Add some tests for essential methods in swift-ring-builder.
Tests for removing or changing device settings are executed
with different search values to cover many possible command
line arguments.
Currently tested methods:
- create ring
- add device
- remove device
- set weight
- set info
- set min_part_hours
- set replicas
Tests use swift.common.ring.RingBuilder to verify actions.
Output from print statements is not captured and tested, because this
requires redirecting sys.stdout during tests and that might have side
effects for testing tools.
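For example, a test can drive swift.common.ring.RingBuilder directly to
check the outcome of the corresponding CLI commands; the device values
below are illustrative only.

    from swift.common.ring import RingBuilder

    # part_power=6, replicas=3, min_part_hours=1 -- illustrative values
    builder = RingBuilder(6, 3, 1)
    for i in range(4):
        builder.add_dev({'id': i, 'region': 1, 'zone': i,
                         'ip': '10.0.0.%d' % i, 'port': 6000,
                         'device': 'sda1', 'weight': 100.0})
    builder.rebalance()

    # the kind of assertion a test might make after a 'set weight' action
    builder.set_dev_weight(0, 50.0)
    assert builder.devs[0]['weight'] == 50.0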
bin/swift-ring-builder has been moved to swift/cli/ringbuilder.py
and slightly modified to work as before (mainly because the global
variables no longer exist, since that part of the code has been moved
inside a main() function).
Change-Id: Ia63f59a8faca1fad990784f27532ca07a2125454
This is for the same reasons that SLO got pulled into middleware, which
include things like automatic retry of GETs on broken connections and
the multi-ring storage policy work.
The proxy will automatically insert the dlo middleware at an
appropriate place in the pipeline the same way it does with the
gatekeeper middleware. Clusters will still support DLOs after upgrade
even with an old config file that doesn't mention dlo at all.
Includes support for reading config values from the proxy server's
config section so that upgraded clusters continue to work as before.
Bonus fix: resolve 'after' vs. 'after_fn' in proxy's required filters
list. Having two was confusing, so I kept the more-general one.
DocImpact
blueprint multi-ring-large-objects
Change-Id: Ib3b3830c246816dd549fc74be98b4bc651e7bace
Also fix a minor bug in zone filtering when the zone is set to 0.
Moved bin/swift-recon to swift/cli/recon.py, which makes
it possible to import it without using some scary hacks.
bin/swift-recon is now created by setup.py install.
Closes-Bug: #1261692
Change-Id: Id0729991c8ece73604467480dbf93fec7d8eb196
Summary of the new configuration option:
The cluster operators add the container_sync middleware to their
proxy pipeline, create a container-sync-realms.conf for their
cluster, and copy it out to all their proxy and container servers.
This file specifies the available container sync "realms".
A container sync realm is a group of clusters with a shared key that
have agreed to provide container syncing to one another.
The end user can then set the X-Container-Sync-To value on a
container to //realm/cluster/account/container instead of the
previously required URL.
The allowed hosts list is not used with this configuration; instead,
every container sync request sent is signed using the realm
key and user key.
This offers better security, as source hosts can be faked much more
easily than per-request signatures. Replaying signed requests,
assuming it could easily be done, shouldn't be an issue, as the
X-Timestamp is part of the signature and a replay would just
short-circuit as already current or as superseded.
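Conceptually, each request is signed with an HMAC over the request
specifics plus the timestamp. The sketch below is a generic illustration
of that idea only; the exact fields, their ordering and encoding are
assumptions here, not the scheme implemented by the container sync code.

    import hashlib
    import hmac

    def sign(method, path, x_timestamp, nonce, realm_key, user_key):
        # illustrative only: the real signature covers the fields the
        # container sync code defines, in its own order and encoding
        message = '\n'.join([method, path, x_timestamp, nonce, user_key])
        return hmac.new(realm_key.encode('utf8'), message.encode('utf8'),
                        hashlib.sha1).hexdigest()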
This also makes configuration easier for the end user, especially
in difficult networking situations where a different host might
need to be used for the container sync daemon since it is connecting
from within a cluster. With this new configuration option, the end
user just specifies the realm and cluster names and that is resolved
to the proper endpoint configured by the operator. If the operator
changes their configuration (key or endpoint), the end user does not
need to change theirs.
DocImpact
Change-Id: Ie1704990b66d0434e4991e26ed1da8b08cb05a37
Middleware or core features may need to store metadata
against accounts or containers. This patch adds a
generic mechanism for system metadata to be persisted
in backend databases, without polluting the user
metadata namespace, by using the reserved header
namespace x-<server_type>-sysmeta-*.
Firstly, backend servers now persist system metadata
headers alongside user metadata and other system state.
For accounts and containers, system metadata in PUT
and POST requests is treated in a similar way to user
metadata. System metadata is not yet supported for
object requests.
Secondly, changes in the proxy controllers ensure that
headers in the system metadata namespace will pass through
in requests to backend servers.
Thirdly, system metadata returned from backend servers
in GET or HEAD responses is added to the cached info
dict, which middleware can access.
Finally, a gatekeeper middleware module is provided
which filters all system metadata headers from requests
and responses by removing headers with names starting with
x-account-sysmeta- or x-container-sysmeta-. The gatekeeper
also removes headers starting with x-object-sysmeta- in
anticipation of future support for system metadata being
set for objects. This prevents clients from writing or
reading system metadata.
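A minimal sketch of that kind of filtering (not the gatekeeper code
itself) on a plain header mapping:

    SYSMETA_PREFIXES = ('x-account-sysmeta-', 'x-container-sysmeta-',
                        'x-object-sysmeta-')

    def strip_sysmeta(headers):
        """Drop any system metadata headers from a header dict."""
        return {name: value for name, value in headers.items()
                if not name.lower().startswith(SYSMETA_PREFIXES)}

    # e.g. a client-supplied sysmeta header is removed on the way in,
    # while ordinary user metadata passes through untouched
    assert 'X-Container-Sysmeta-Foo' not in strip_sysmeta(
        {'X-Container-Sysmeta-Foo': 'bar', 'X-Container-Meta-Foo': 'bar'})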
The required_filters list in swift/proxy/server.py is
modified to include the gatekeeper middleware so that
if the gatekeeper has not been configured in the
pipeline then it will be automatically inserted close
to the start of the pipeline.
blueprint cluster-federation
Change-Id: I80b8b14243cc59505f8c584920f8f527646b5f45
This will allow functional tests to be used against it, and can then
be used for the storage-policy work as an example diskfile
implementation associated with a storage policy.
Change-Id: I47a88e70cee99225779baaed379b0c5d4c73611a
pbr is the libification of what was openstack.common.setup. It provides
the build information in a declarative form, instead of as executable
python code, which works around the chicken-and-egg problem of needing
setup libraries present to run setup, but needing to run setup to tell
if you need setup libraries.
One of the features that comes along with this is versioning based on
git tags. If the current revision is a signed git tag, then that is the
version of the package. If it is not, the version is equal to the most
recent git tag, plus a commit count, plus a git sha (similar to git
describe, but scrubbed for compliance with Python version rules).
pbr updates are also part of the upcoming automation around ensuring
global requirements stay in sync.
Closes-Bug: #1179007
Change-Id: Ia473960be7e8aa44f09d48cea72ed3c8845f82fa
The Jenkins coverage jobs expect there to be a .coverage file, so deleting
it is a bad idea. Also, coverage erase will do that for us.
While we're in there, update tox.ini and setup.cfg to the latest.
Change-Id: Icd0a8fc66a5146e0d94f62a9f65a4536981d2916
* Adds tox config
- based on the config from python-quantumclient and updated for
test, pep8 and coverage execution as per nova's run_tests.sh.
* Adds nosetests defaults in setup.cfg
* Adds runtime dependencies in tools/pip-requires
- dependencies were gathered by referencing the packages used in
creation of a Swift All In One. Versions were determined by
checking the swift-core/trunk ppa or, failing that, the version
available in lucid.
* Adds test dependencies in tools/test-requires
* Updates swift/common/middleware/formpost.py for pep8 compliance
* Adds instructions for executing the tests with Tox to the
developer_guidelines
* Adds instructions for installing openstack.nose_plugin to
developer_saio
* Fixes bug 909177
Change-Id: I5407924d2181e9ab335aaf76bf30c8d40deccbb4