These files are imported (and very lightly edited) from the old
ocata user-guide. That guide has a few other swift-related docs that
seemed more duplicative of what we already have, but these seem to
fill existing gaps in our docs.
Change-Id: Ib00bf6992327f15f271120dc5dbc86a4a235baec
The object server can be configured to leave a certain amount of disk
space free; the default is 1%. This is useful for avoiding 100%-full
filesystems, as those can get Swift into a state where the filesystem
is too full to write tombstones, so you can't delete objects to free
up space.
When a cluster has accounts/containers and objects on the same disks,
you can wind up with a 100%-full disk since account and container
servers don't respect fallocate_reserve. This commit makes account and
container servers respect fallocate_reserve so that disks shared
between account/container and object rings won't get 100% full.
When a disk's free space falls below the configured reserve, account
and container PUT, POST, and REPLICATE requests will fail with a 507
status code. These are the operations that can significantly increase
the disk space used by a given database.
I called the parameter "fallocate_reserve" for consistency with the
object server. No actual fallocate() call happens under Swift's
control in the account or container servers (sqlite3 might make such a
call, but it's out of our hands).
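For illustration, a minimal sketch of setting the reserve (shown for
object-server.conf; the same stanza now applies equally to
account-server.conf and container-server.conf, and the value shown is
just the default):
[DEFAULT]
fallocate_reserve = 1%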
Change-Id: I083442eef14bf83c0ea717b1decb3e6b56dbf1d0
Add a new middleware that can be used to fetch an encryption root
secret from a KMIP service. The middleware uses a PyKMIP client
to interact with a KMIP endpoint. The middleware is configured with
a unique identifier for the key to be fetched and options required
for the PyKMIP client.
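For illustration, a hypothetical proxy-server.conf stanza (option
names and values here are examples, not a definitive reference):
[filter:kmip_keymaster]
use = egg:swift#kmip_keymaster
key_id = <unique id of a key stored in the KMIP service>
host = kmip.example.com
port = 5696
certfile = /etc/swift/kmip_client.crt
keyfile = /etc/swift/kmip_client.key
ca_certs = /etc/swift/kmip_ca.crt
username = swift
password = swift_password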
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Change-Id: Ib0943fb934b347060fc66c091673a33bcfac0a6d
The use of a separate keymaster config file was previously only
described in the context of the kms_keymaster middleware. This patch
adds a section to the simple keymaster middleware docs.
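For example, a minimal sketch of pointing the simple keymaster at a
separate config file (paths and the secret value are illustrative):
[filter:keymaster]
use = egg:swift#keymaster
keymaster_config_path = /etc/swift/keymaster.conf
where /etc/swift/keymaster.conf contains:
[keymaster]
encryption_root_secret = <base64-encoded secret>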
Change-Id: Ifa3ad9d6e892b81c52df1f6666a9881042ac60bd
Provides a simple, experimental CLI tool to generate a
composite ring from a list of component builder files.
For example:
swift-ring-composer <composite-file> compose \
<builder-file> <builder-file> --output <ring-file>
Commands available:
- compose: compose a list of builder files into a composite ring
- show: show the metadata for a composite ring
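For example, to inspect an existing composite ring (invocation
sketched by analogy with the compose example above):
swift-ring-composer <composite-file> show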
Co-Authored-By: Kota Tsuyuzaki <tsuyuzaki.kota@lab.ntt.co.jp>
Co-Authored-By: Matthew Oliver <matt@oliver.net.au>
Change-Id: I25a79e71c13af352e19e4358f60545265b51584f
The object updater now supports two configuration settings:
"concurrency" and "updater_workers". The latter controls how many
worker processes are spawned, while the former controls how many
concurrent container updates are performed by each worker
process. This should speed the processing of async_pendings.
There is a change to the semantics of the configuration
options. Previously, "concurrency" controlled the number of worker
processes spawned, and "updater_workers" did not exist. I switched the
meanings for consistency with other configuration options. In the
object reconstructor, object replicator, object server, object
expirer, container replicator, container server, account replicator,
account server, and account reaper, "concurrency" refers to the number
of concurrent tasks performed within one process (for reference, the
container updater and object auditor use "concurrency" to mean number
of processes).
On upgrade, a node configured with concurrency=N will still handle
async updates N-at-a-time, but will do so using only one process
instead of N.
UpgradeImpact:
If you have a config file like this:
[object-updater]
concurrency = <N>
and you want to take advantage of faster updates, then do this:
[object-updater]
concurrency = 8 # the default; you can omit this line
updater_workers = <N>
If you want updates to be processed exactly as before, do this:
[object-updater]
concurrency = 1
updater_workers = <N>
Change-Id: I17e18088e61f664e1b9942d66423666d0cae1689
This patch adds a read_only middleware to swift. It gives the ability
to make an entire cluster or individual accounts read only.
When a cluster or an account is in read-only mode, requests that
would result in writes to the cluster are not allowed.
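For illustration, a minimal sketch of enabling cluster-wide read-only
mode in proxy-server.conf (the pipeline placement shown is an
assumption):
[pipeline:main]
pipeline = catch_errors cache tempauth read_only proxy-server
[filter:read_only]
use = egg:swift#read_only
read_only = true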
DocImpact
Change-Id: I7e0743aecd60b171bbcefcc8b6e1f3fd4cef2478
Previously, operators had to add these headers to their
object-server.conf when enabling the swift3 middleware. Since s3api
is now imported into swift, we should go ahead and add these headers
by default too.
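For reference, the swift3 docs previously asked operators to extend
allowed_headers in object-server.conf along these lines (the exact
header list here is illustrative, not authoritative):
[app:object-server]
allowed_headers = Content-Disposition, Content-Encoding, X-Delete-At, X-Object-Manifest, X-Static-Large-Object, Cache-Control, Content-Language, Expires, X-Robots-Tag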
Change-Id: Ib82e175096716e42aecdab48f01f079e09da6a1d
Signed-off-by: Thiago da Silva <thiago@redhat.com>
Move the doc for manage-shard-ranges to the manage_shard_ranges.py module
and include it in overview_container_sharding.rst. This makes the doc for
manage-shard-ranges more obvious when viewing the code.
Change-Id: I27ca9b59897c5256dd5e2c3d4e26ff9e762b4a81
Co-Authored-By: Matthew Oliver <matt@oliver.net.au>
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Co-Authored-By: John Dickinson <me@not.mn>
Change-Id: I0693e54c1d7f3b77f53c3df5c616a16f74723b97
This attempts to import the openstack/swift3 package into the swift
upstream repository and namespace. This is mostly a straightforward
port, except for the following items.
1. Rename swift3 namespace to swift.common.middleware.s3api
1.1 Also rename some conflicting class names (e.g. Request/Response)
2. Port unit tests to the test/unit/s3api dir so they can run on the gate.
3. Port functional tests to test/functional/s3api and set up in-process testing.
4. Port docs to the doc dir, then address the namespace change.
5. Use get_logger() instead of a global logger instance
6. Avoid a global conf instance
Plus, fix various minor issues along the way (e.g. packages,
dependencies, deprecated things).
The details and patch references for the work on feature/s3api are
listed at https://trello.com/b/ZloaZ23t/s3api (completed board).
Note that, because this is just a port, no new features have been
developed since the last swift3 release. In future work, Swift
upstream may continue to work on the remaining items for further
improvements and the best possible compatibility with Amazon S3.
Please read the new docs for your deployment and keep track of what
may change in future releases.
Change-Id: Ib803ea89cfee9a53c429606149159dd136c036fd
Co-Authored-By: Thiago da Silva <thiago@redhat.com>
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Add a multiprocess mode to the object replicator. Setting the
"replicator_workers" setting to a positive value N will result in the
replicator using up to N worker processes to perform replication
tasks.
At most one worker per disk will be spawned, so one can set
replicator_workers=99999999 to always get one worker per disk
regardless of the number of disks in each node. This is the same
behavior that the object reconstructor has.
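For example, to opt in to four replication worker processes (the
value is illustrative; leaving the option unset keeps the old
single-process behavior):
[object-replicator]
replicator_workers = 4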
Worker process logs will have a bit of information prepended so
operators can tell which messages came from which worker. It looks
like this:
[worker 1/2 pid=16529] 154/154 (100.00%) partitions replicated in 1.02s (150.87/sec, 0s remaining)
The prefix is "[worker M/N pid=P] ", where M is the worker's index, N
is the total number of workers, and P is the process ID. Every message
from the replicator's logger will have the prefix; this includes
messages from down in diskfile, but does not include things printed to
stdout or stderr.
Drive-by fix: don't dump recon stats when replicating only certain
policies. When running the object replicator with replicator_workers >
0 and "--policies=X,Y,Z", the replicator would update recon stats
after running. Since it only ran on a subset of objects, it should not
update recon, much like it doesn't update recon when run with
--devices or --partitions.
Change-Id: I6802a9ad9f1f9b9dafb99d8b095af0fdbf174dc5
The auth_uri option in the keystone_authtoken group is deprecated [1].
Use the www_authenticate_uri option from the same group instead.
[1] https://review.openstack.org/#/c/508522/
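For example, in the proxy server's authtoken section (values are
illustrative):
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
www_authenticate_uri = http://keystone.example.com:5000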
Change-Id: I43bbc8b8c986e54a9a0829a0631d78d4077306f8
* Cleaned up the SDK/library links
* Added a few projects
* Fixed some existing links
* Removed some very old, unmaintained projects
Change-Id: I3effd920e978eb7af39ab27b4877a7bfc8c64b8b
TrivialFix
[1] is the installation guide for OpenStack components, so we should
link to it in the docs.
[1] https://docs.openstack.org/latest/install/
Change-Id: I3c6fe7327f5552cc2b8f0f0e42b41f8e989a0a7e
Curly quotes (Chinese punctuation) are usually entered by a Chinese
input method. When read in an English context, they cause confusion.
Change-Id: Ibd50299ee287c56ec4759ea8ff53d47d006144f8
The object updater has five different stats, but its logging only told
you two of them (successes and failures), and it only told you after
finishing all the async_pendings for a device. If you have a cluster
that's been sick and has millions upon millions of async_pendings
lying around, then your object-updaters are frustratingly
silent. I've seen one cluster with around 8 million async_pendings per
disk where the object-updaters only emitted stats every 12 hours.
Yes, if you have StatsD logging set up properly, you can go look at
your graphs and get real-time feedback on what it's doing. If you
don't have that, all you get is a frustrating silence.
Now, the object updater tells you all of its stats (successes,
failures, quarantines due to bad pickles, unlinks, and errors), and it
tells you incremental progress every five minutes. The logging at the
end of a pass remains and has been expanded to also include all stats.
Also included is a small change to what counts as an error: unmounted
drives no longer do. The goal is that only abnormal things count as
errors, like permission problems, malformed filenames, and so
on. These are things that should never happen, but if they do, may
require operator intervention. Drives fail, so logging an error upon
encountering an unmounted drive is not useful.
Change-Id: Idbddd507f0b633d14dffb7a9834fce93a10359ab