# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import unittest
import json
import mock
import os
import signal
import string
import sys
import time
import xattr
from shutil import rmtree
from tempfile import mkdtemp
import textwrap
from os.path import dirname, basename

from test import BaseTestCase
from test.debug_logger import debug_logger
from test.unit import (
    DEFAULT_TEST_EC_TYPE, make_timestamp_iter, patch_policies,
    skip_if_no_xattrs)
from test.unit.obj.common import write_diskfile
from swift.obj import auditor, replicator
from swift.obj.watchers.dark_data import DarkDataWatcher
from swift.obj.diskfile import (
    DiskFile, write_metadata, invalidate_hash, get_data_dir,
    DiskFileManager, ECDiskFileManager, AuditLocation, clear_auditor_status,
    get_auditor_status, HASH_FILE, HASH_INVALIDATIONS_FILE)
from swift.common.exceptions import ClientException
from swift.common.utils import (
    mkdirs, normalize_timestamp, Timestamp, readconf, md5, PrefixLoggerAdapter)
from swift.common.storage_policy import (
    ECStoragePolicy, StoragePolicy, POLICIES, EC_POLICY)


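# Storage policies patched into the tests below via @patch_policies: two
# replication policies plus a small (ec_ndata=2, ec_nparity=1) EC policy.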
_mocked_policies = [
    StoragePolicy(0, 'zero', False),
    StoragePolicy(1, 'one', True),
    ECStoragePolicy(2, 'two', ec_type=DEFAULT_TEST_EC_TYPE,
                    ec_ndata=2, ec_nparity=1, ec_segment_size=4096),
]


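# Test helper: wraps a callable so that its first invocation succeeds and
# every later invocation raises the given exception.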
def works_only_once(callable_thing, exception):
    called = [False]

    def only_once(*a, **kw):
        if called[0]:
            raise exception
        else:
            called[0] = True
            return callable_thing(*a, **kw)

    return only_once


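# Minimal ring stand-ins that avoid loading a real ring: get_nodes() always
# reports a single node (FakeRing1) or two nodes (FakeRing2).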
class FakeRing1(object):

    def __init__(self, swift_dir, ring_name=None):
        return

    def get_nodes(self, *args, **kwargs):
        x = 1
        node1 = {'ip': '10.0.0.%s' % x,
                 'replication_ip': '10.0.0.%s' % x,
                 'port': 6200 + x,
                 'replication_port': 6200 + x,
                 'device': 'sda',
                 'zone': x % 3,
                 'region': x % 2,
                 'id': x,
                 'handoff_index': 1}
        return (1, [node1])


class FakeRing2(object):

    def __init__(self, swift_dir, ring_name=None):
        return

    def get_nodes(self, *args, **kwargs):
        nodes = []
        for x in [1, 2]:
            nodes.append({'ip': '10.0.0.%s' % x,
                          'replication_ip': '10.0.0.%s' % x,
                          'port': 6200 + x,
                          'replication_port': 6200 + x,
                          'device': 'sda',
                          'zone': x % 3,
                          'region': x % 2,
                          'id': x,
                          'handoff_index': 1})
        return (1, nodes)


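# Shared fixture: builds a small two-device (sda/sdb) object tree with
# partition directories for policies 0, 1 and 2 on sda, plus diskfile
# managers and sample diskfiles reused by the test cases below.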
class TestAuditorBase(BaseTestCase):

    def setUp(self):
        skip_if_no_xattrs()
        self.testdir = os.path.join(mkdtemp(), 'tmp_test_object_auditor')
        self.devices = os.path.join(self.testdir, 'node')
        self.rcache = os.path.join(self.testdir, 'object.recon')
        self.logger = debug_logger()
        rmtree(self.testdir, ignore_errors=1)
        mkdirs(os.path.join(self.devices, 'sda'))
        os.mkdir(os.path.join(self.devices, 'sdb'))

        # policy 0
        self.objects = os.path.join(self.devices, 'sda',
                                    get_data_dir(POLICIES[0]))
        self.objects_2 = os.path.join(self.devices, 'sdb',
                                      get_data_dir(POLICIES[0]))
        os.mkdir(self.objects)
        # policy 1
        self.objects_p1 = os.path.join(self.devices, 'sda',
                                       get_data_dir(POLICIES[1]))
        self.objects_2_p1 = os.path.join(self.devices, 'sdb',
                                         get_data_dir(POLICIES[1]))
        os.mkdir(self.objects_p1)
        # policy 2
        self.objects_p2 = os.path.join(self.devices, 'sda',
                                       get_data_dir(POLICIES[2]))
        self.objects_2_p2 = os.path.join(self.devices, 'sdb',
                                         get_data_dir(POLICIES[2]))
        os.mkdir(self.objects_p2)

        self.parts = {}
        self.parts_p1 = {}
        self.parts_p2 = {}
        for part in ['0', '1', '2', '3']:
            self.parts[part] = os.path.join(self.objects, part)
            self.parts_p1[part] = os.path.join(self.objects_p1, part)
            self.parts_p2[part] = os.path.join(self.objects_p2, part)
            os.mkdir(os.path.join(self.objects, part))
            os.mkdir(os.path.join(self.objects_p1, part))
            os.mkdir(os.path.join(self.objects_p2, part))

        self.conf = dict(
            devices=self.devices,
            mount_check='false',
            object_size_stats='10,100,1024,10240')
        self.df_mgr = DiskFileManager(self.conf, self.logger)
        self.ec_df_mgr = ECDiskFileManager(self.conf, self.logger)

        # diskfiles for policy 0, 1, 2
        self.disk_file = self.df_mgr.get_diskfile('sda', '0', 'a', 'c', 'o',
                                                  policy=POLICIES[0])
        self.disk_file_p1 = self.df_mgr.get_diskfile('sda', '0', 'a', 'c2',
                                                     'o', policy=POLICIES[1])
        self.disk_file_ec = self.ec_df_mgr.get_diskfile(
            'sda', '0', 'a', 'c_ec', 'o', policy=POLICIES[2], frag_index=1)

    def tearDown(self):
        rmtree(os.path.dirname(self.testdir), ignore_errors=1)


@patch_policies(_mocked_policies)
class TestAuditor(TestAuditorBase):

    def test_worker_conf_parms(self):
        def check_common_defaults():
            self.assertEqual(auditor_worker.max_bytes_per_second, 10000000)
            self.assertEqual(auditor_worker.log_time, 3600)

        # test default values
        conf = dict(
            devices=self.devices,
            mount_check='false',
            object_size_stats='10,100,1024,10240')
        auditor_worker = auditor.AuditorWorker(conf, self.logger,
                                               self.rcache, self.devices)
        check_common_defaults()
        for policy in POLICIES:
            mgr = auditor_worker.diskfile_router[policy]
            self.assertEqual(mgr.disk_chunk_size, 65536)
        self.assertEqual(auditor_worker.max_files_per_second, 20)
        self.assertEqual(auditor_worker.zero_byte_only_at_fps, 0)

        # test specified audit value overrides
        conf.update({'disk_chunk_size': 4096})
        auditor_worker = auditor.AuditorWorker(conf, self.logger,
                                               self.rcache, self.devices,
                                               zero_byte_only_at_fps=50)
        check_common_defaults()
        for policy in POLICIES:
            mgr = auditor_worker.diskfile_router[policy]
            self.assertEqual(mgr.disk_chunk_size, 4096)
        self.assertEqual(auditor_worker.max_files_per_second, 50)
        self.assertEqual(auditor_worker.zero_byte_only_at_fps, 50)

    def test_object_audit_extra_data(self):
        def run_tests(disk_file):
            auditor_worker = auditor.AuditorWorker(self.conf, self.logger,
                                                   self.rcache, self.devices)
            data = b'0' * 1024
            if disk_file.policy.policy_type == EC_POLICY:
                data = disk_file.policy.pyeclib_driver.encode(data)[0]
            etag = md5(usedforsecurity=False)
            with disk_file.create() as writer:
                writer.write(data)
                etag.update(data)
                etag = etag.hexdigest()
                timestamp = str(normalize_timestamp(time.time()))
                metadata = {
                    'ETag': etag,
                    'X-Timestamp': timestamp,
                    'Content-Length': str(os.fstat(writer._fd).st_size),
                }
                if disk_file.policy.policy_type == EC_POLICY:
                    metadata.update({
                        'X-Object-Sysmeta-Ec-Frag-Index': '1',
                        'X-Object-Sysmeta-Ec-Etag': 'fake-etag',
                    })
                writer.put(metadata)
                writer.commit(Timestamp(timestamp))
                pre_quarantines = auditor_worker.quarantines

                auditor_worker.object_audit(
                    AuditLocation(disk_file._datadir, 'sda', '0',
                                  policy=disk_file.policy))
                self.assertEqual(auditor_worker.quarantines, pre_quarantines)

                os.write(writer._fd, b'extra_data')

                auditor_worker.object_audit(
                    AuditLocation(disk_file._datadir, 'sda', '0',
                                  policy=disk_file.policy))
                self.assertEqual(auditor_worker.quarantines,
                                 pre_quarantines + 1)
        run_tests(self.disk_file)
        run_tests(self.disk_file_p1)
        run_tests(self.disk_file_ec)

    def test_object_audit_adds_metadata_checksums(self):
        disk_file = self.df_mgr.get_diskfile('sda', '0', 'a', 'c', 'o-md',
                                             policy=POLICIES.legacy)

        # simulate a PUT
        now = time.time()
        data = b'boots and cats and ' * 1024
        hasher = md5(usedforsecurity=False)
        with disk_file.create() as writer:
            writer.write(data)
            hasher.update(data)
            etag = hasher.hexdigest()
            metadata = {
                'ETag': etag,
                'X-Timestamp': str(normalize_timestamp(now)),
                'Content-Length': len(data),
                'Content-Type': 'the old type',
            }
            writer.put(metadata)
            writer.commit(Timestamp(now))

        # simulate a subsequent POST
        post_metadata = metadata.copy()
        post_metadata['Content-Type'] = 'the new type'
        post_metadata['X-Object-Meta-Biff'] = 'buff'
        post_metadata['X-Timestamp'] = str(normalize_timestamp(now + 1))
        disk_file.write_metadata(post_metadata)

        file_paths = [os.path.join(disk_file._datadir, fname)
                      for fname in os.listdir(disk_file._datadir)
                      if fname not in ('.', '..')]
        file_paths.sort()

        # sanity check: make sure we have a .data and a .meta file
        self.assertEqual(len(file_paths), 2)
        self.assertTrue(file_paths[0].endswith(".data"))
        self.assertTrue(file_paths[1].endswith(".meta"))

        # Go remove the xattr "user.swift.metadata_checksum" as if this
        # object were written before Swift supported metadata checksums.
        for file_path in file_paths:
            xattr.removexattr(file_path, "user.swift.metadata_checksum")

        # Run the auditor...
        auditor_worker = auditor.AuditorWorker(self.conf, self.logger,
                                               self.rcache, self.devices)
        auditor_worker.object_audit(
            AuditLocation(disk_file._datadir, 'sda', '0',
                          policy=disk_file.policy))
        self.assertEqual(auditor_worker.quarantines, 0)  # sanity

        # ...and the checksums are back
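        # The on-disk metadata can span several numbered xattr keys
        # (user.swift.metadata, user.swift.metadata1, ...), so reassemble
        # them before comparing against the stored checksum.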
        for file_path in file_paths:
            metadata = xattr.getxattr(file_path, "user.swift.metadata")
            i = 1
            while True:
                try:
                    metadata += xattr.getxattr(
                        file_path, "user.swift.metadata%d" % i)
                    i += 1
                except (IOError, OSError):
                    break

            checksum = xattr.getxattr(
                file_path, "user.swift.metadata_checksum")

            self.assertEqual(
                checksum,
                (md5(metadata, usedforsecurity=False).hexdigest()
                 .encode('ascii')))

    def test_object_audit_diff_data(self):
        auditor_worker = auditor.AuditorWorker(self.conf, self.logger,
                                               self.rcache, self.devices)
        data = b'0' * 1024
        etag = md5(usedforsecurity=False)
        timestamp = str(normalize_timestamp(time.time()))
        with self.disk_file.create() as writer:
            writer.write(data)
            etag.update(data)
            etag = etag.hexdigest()
            metadata = {
                'ETag': etag,
                'X-Timestamp': timestamp,
                'Content-Length': str(os.fstat(writer._fd).st_size),
            }
            writer.put(metadata)
            writer.commit(Timestamp(timestamp))
            pre_quarantines = auditor_worker.quarantines

        # remake so it will have metadata
        self.disk_file = self.df_mgr.get_diskfile('sda', '0', 'a', 'c', 'o',
                                                  policy=POLICIES.legacy)

        auditor_worker.object_audit(
            AuditLocation(self.disk_file._datadir, 'sda', '0',
                          policy=POLICIES.legacy))
        self.assertEqual(auditor_worker.quarantines, pre_quarantines)
        etag = md5(b'1' + b'0' * 1023, usedforsecurity=False).hexdigest()
        metadata['ETag'] = etag

        with self.disk_file.create() as writer:
            writer.write(data)
            writer.put(metadata)
            writer.commit(Timestamp(timestamp))

        auditor_worker.object_audit(
            AuditLocation(self.disk_file._datadir, 'sda', '0',
                          policy=POLICIES.legacy))
        self.assertEqual(auditor_worker.quarantines, pre_quarantines + 1)

    def test_object_audit_checks_EC_fragments(self):
        disk_file = self.disk_file_ec

        def do_test(data):
            # create diskfile and set ETag and content-length to match
            # the data
            etag = md5(data, usedforsecurity=False).hexdigest()
            timestamp = str(normalize_timestamp(time.time()))
            with disk_file.create() as writer:
                writer.write(data)
                metadata = {
                    'ETag': etag,
                    'X-Timestamp': timestamp,
                    'Content-Length': len(data),
                    'X-Object-Sysmeta-Ec-Frag-Index': '1',
                    'X-Object-Sysmeta-Ec-Etag': 'fake-etag',
                }
                writer.put(metadata)
                writer.commit(Timestamp(timestamp))

            self.logger.clear()
            auditor_worker = auditor.AuditorWorker(self.conf, self.logger,
                                                   self.rcache, self.devices)
            self.assertEqual(0, auditor_worker.quarantines)  # sanity check
            auditor_worker.object_audit(
                AuditLocation(disk_file._datadir, 'sda', '0',
                              policy=disk_file.policy))
            return auditor_worker

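        # The EC .data payloads built below are concatenations of fragment
        # archives, each starting with its own fragment metadata header;
        # that header is what the audit validates, hence the "Invalid EC
        # metadata at offset ..." errors checked for.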
        # two good frags in an EC archive
        frag_0 = disk_file.policy.pyeclib_driver.encode(
            b'x' * disk_file.policy.ec_segment_size)[0]
        frag_1 = disk_file.policy.pyeclib_driver.encode(
            b'y' * disk_file.policy.ec_segment_size)[0]
        data = frag_0 + frag_1
        auditor_worker = do_test(data)
        self.assertEqual(0, auditor_worker.quarantines)
        self.assertFalse(auditor_worker.logger.get_lines_for_level('error'))

        # corrupt second frag headers
        corrupt_frag_1 = b'blah' * 16 + frag_1[64:]
        data = frag_0 + corrupt_frag_1
        auditor_worker = do_test(data)
        self.assertEqual(1, auditor_worker.quarantines)
        log_lines = auditor_worker.logger.get_lines_for_level('error')
        self.assertIn('failed audit and was quarantined: '
                      'Invalid EC metadata at offset 0x%x' %
                      len(frag_0),
                      log_lines[0])

        # dangling extra corrupt frag data
        data = frag_0 + frag_1 + b'wtf' * 100
        auditor_worker = do_test(data)
        self.assertEqual(1, auditor_worker.quarantines)
        log_lines = auditor_worker.logger.get_lines_for_level('error')
        self.assertIn('failed audit and was quarantined: '
                      'Invalid EC metadata at offset 0x%x' %
                      len(frag_0 + frag_1),
                      log_lines[0])

        # simulate bug https://bugs.launchpad.net/bugs/1631144 by writing
        # start of an ssync subrequest into the diskfile
        data = (
            b'PUT /a/c/o\r\n' +
            b'Content-Length: 999\r\n' +
            b'Content-Type: image/jpeg\r\n' +
            b'X-Object-Sysmeta-Ec-Content-Length: 1024\r\n' +
            b'X-Object-Sysmeta-Ec-Etag: 1234bff7eb767cc6d19627c6b6f9edef\r\n' +
            b'X-Object-Sysmeta-Ec-Frag-Index: 1\r\n' +
            b'X-Object-Sysmeta-Ec-Scheme: ' +
            DEFAULT_TEST_EC_TYPE.encode('ascii') + b'\r\n' +
            b'X-Object-Sysmeta-Ec-Segment-Size: 1048576\r\n' +
            b'X-Timestamp: 1471512345.17333\r\n\r\n'
        )
        data += frag_0[:disk_file.policy.fragment_size - len(data)]
        auditor_worker = do_test(data)
        self.assertEqual(1, auditor_worker.quarantines)
        log_lines = auditor_worker.logger.get_lines_for_level('error')
        self.assertIn('failed audit and was quarantined: '
                      'Invalid EC metadata at offset 0x0',
                      log_lines[0])

    def test_object_audit_no_meta(self):
        timestamp = str(normalize_timestamp(time.time()))
        path = os.path.join(self.disk_file._datadir, timestamp + '.data')
        mkdirs(self.disk_file._datadir)
        fp = open(path, 'wb')
        fp.write(b'0' * 1024)
        fp.close()
        invalidate_hash(os.path.dirname(self.disk_file._datadir))
        auditor_worker = auditor.AuditorWorker(self.conf, self.logger,
                                               self.rcache, self.devices)
        pre_quarantines = auditor_worker.quarantines
        auditor_worker.object_audit(
            AuditLocation(self.disk_file._datadir, 'sda', '0',
                          policy=POLICIES.legacy))
        self.assertEqual(auditor_worker.quarantines, pre_quarantines + 1)
|
2010-12-28 14:54:00 -08:00
|
|
|
|
2013-09-11 22:42:19 -07:00
|
|
|
def test_object_audit_will_not_swallow_errors_in_tests(self):
|
|
|
|
timestamp = str(normalize_timestamp(time.time()))
|
DiskFile API, with reference implementation
Refactor on-disk knowledge out of the object server by pushing the
async update pickle creation to the new DiskFileManager class (name is
not the best, so suggestions welcome), along with the REPLICATOR
method logic. We also move the mount checking and thread pool storage
to the new ondisk.Devices object, which then also becomes the new home
of the audit_location_generator method.
For the object server, a new setup() method is now called at the end
of the controller's construction, and the _diskfile() method has been
renamed to get_diskfile(), to allow implementation specific behavior.
We then hide the need for the REST API layer to know how and where
quarantining needs to be performed. There are now two places it is
checked internally, on open() where we verify the content-length,
name, and x-timestamp metadata, and in the reader on close where the
etag metadata is checked if the entire file was read.
We add a reader class to allow implementations to isolate the WSGI
handling code for that specific environment (it is used no-where else
in the REST APIs). This simplifies the caller's code to just use a
"with" statement once open to avoid multiple points where close needs
to be called.
For a full historical comparison, including the usage patterns see:
https://gist.github.com/portante/5488238
(as of master, 2b639f5, Merge
"Fix 500 from account-quota This Commit
middleware")
--------------------------------+------------------------------------
DiskFileManager(conf)
Methods:
.pickle_async_update()
.get_diskfile()
.get_hashes()
Attributes:
.devices
.logger
.disk_chunk_size
.keep_cache_size
.bytes_per_sync
DiskFile(a,c,o,keep_data_fp=) DiskFile(a,c,o)
Methods: Methods:
*.__iter__()
.close(verify_file=)
.is_deleted()
.is_expired()
.quarantine()
.get_data_file_size()
.open()
.read_metadata()
.create() .create()
.write_metadata()
.delete() .delete()
Attributes: Attributes:
.quarantined_dir
.keep_cache
.metadata
*DiskFileReader()
Methods:
.__iter__()
.close()
Attributes:
+.was_quarantined
DiskWriter() DiskFileWriter()
Methods: Methods:
.write() .write()
.put() .put()
* Note that the DiskFile class * Note that the DiskReader() object
implements all the methods returned by the
necessary for a WSGI app DiskFileOpened.reader() method
iterator implements all the methods
necessary for a WSGI app iterator
+ Note that if the auditor is
refactored to not use the DiskFile
class, see
https://review.openstack.org/44787
then we don't need the
was_quarantined attribute
A reference "in-memory" object server implementation of a backend
DiskFile class in swift/obj/mem_server.py and
swift/obj/mem_diskfile.py.
One can also reference
https://github.com/portante/gluster-swift/commits/diskfile for the
proposed integration with the gluster-swift code based on these
changes.
Change-Id: I44e153fdb405a5743e9c05349008f94136764916
Signed-off-by: Peter Portante <peter.portante@redhat.com>
2013-09-12 19:51:18 -04:00
|
|
|
path = os.path.join(self.disk_file._datadir, timestamp + '.data')
|
|
|
|
mkdirs(self.disk_file._datadir)
|
2013-09-11 22:42:19 -07:00
|
|
|
with open(path, 'w') as f:
|
|
|
|
write_metadata(f, {'name': '/a/c/o'})
|
2014-02-24 11:24:56 +00:00
|
|
|
auditor_worker = auditor.AuditorWorker(self.conf, self.logger,
|
|
|
|
self.rcache, self.devices)
|
2013-09-11 22:42:19 -07:00
|
|
|
|
|
|
|
def blowup(*args):
|
|
|
|
raise NameError('tpyo')
|
Alternate DiskFile constructor for efficient auditing.
Before, to audit an object, the auditor:
- calls listdir(object-hash-dir)
- picks out the .data file from the listing
- pulls out all N of its user.swift.metadata* xattrs
- unpickles them
- pulls out the value for 'name'
- splits the name into a/c/o
- then instantiates and opens a DiskFile(a, c, o),
which does the following
- joins a/c/o back into a name
- hashes the name
- calls listdir(object-hash-dir) (AGAIN)
- picks out the .data file (and maybe .meta) from the listing (AGAIN)
- pulls out all N of its user.swift.metadata* xattrs (AGAIN)
- unpickles them (AGAIN)
- starts reading object's contents off disk
Now, the auditor simply locates the hash dir on the filesystem (saving
one listdir) and then hands it off to
DiskFileManager.get_diskfile_from_audit_location, which then
instantiates a DiskFile in a way that lazy-loads the name later
(saving one xattr reading).
As part of this, DiskFile.open() will now quarantine a hash
"directory" that is actually a file. Before, the audit location
generator would skip those, but now they make it all the way into
DiskFile(). It's better to quarantine them anyway, as they're not
doing any good the way they are.
Also, the was_quarantined attribute on DiskFileReader has been
removed. You can now pass a quarantine_hook callable to
DiskFile.reader() that gets called if the file was quarantined; the
default hook logs the quarantine but otherwise does nothing.
Change-Id: I04fc14569982a17fcc89e00832725ae71009335a
2013-10-28 14:57:18 -07:00
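As a rough sketch of the audit path described above (assuming the
DiskFileManager.get_diskfile_from_audit_location() name exercised by
the tests below, and an auditor-style _quarantine_hook keyword whose
exact spelling may differ):

    from swift.common.exceptions import DiskFileNotExist, DiskFileQuarantined
    from swift.obj.diskfile import DiskFileManager

    def audit_one(conf, logger, location):
        # location is an AuditLocation yielded by the audit location
        # generator; the object name is lazy-loaded from the .data file's
        # xattrs only when it is actually needed.
        mgr = DiskFileManager(conf, logger)
        df = mgr.get_diskfile_from_audit_location(location)
        try:
            with df.open():
                reader = df.reader(
                    _quarantine_hook=lambda msg: logger.warning(
                        'Quarantined %s: %s', location, msg))
                for _chunk in reader:
                    pass    # reading to the end lets close() verify the etag
        except (DiskFileNotExist, DiskFileQuarantined) as err:
            logger.debug('Skipping %s: %s', location, err)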
|
|
|
with mock.patch.object(DiskFileManager,
|
|
|
|
'get_diskfile_from_audit_location', blowup):
|
2013-09-13 13:55:10 -06:00
|
|
|
self.assertRaises(NameError, auditor_worker.object_audit,
|
2015-03-17 08:32:57 +00:00
|
|
|
AuditLocation(os.path.dirname(path), 'sda', '0',
|
|
|
|
policy=POLICIES.legacy))
|
2013-09-11 22:42:19 -07:00
|
|
|
|
|
|
|
def test_failsafe_object_audit_will_swallow_errors_in_tests(self):
|
|
|
|
timestamp = str(normalize_timestamp(time.time()))
|
|
|
|
path = os.path.join(self.disk_file._datadir, timestamp + '.data')
|
|
|
|
mkdirs(self.disk_file._datadir)
|
2013-09-11 22:42:19 -07:00
|
|
|
with open(path, 'w') as f:
|
|
|
|
write_metadata(f, {'name': '/a/c/o'})
|
2014-02-24 11:24:56 +00:00
|
|
|
auditor_worker = auditor.AuditorWorker(self.conf, self.logger,
|
|
|
|
self.rcache, self.devices)
|
2013-09-11 22:42:19 -07:00
|
|
|
|
|
|
|
def blowup(*args):
|
|
|
|
raise NameError('tpyo')
|
2015-03-17 08:32:57 +00:00
|
|
|
with mock.patch('swift.obj.diskfile.DiskFileManager.diskfile_cls',
|
|
|
|
blowup):
|
|
|
|
auditor_worker.failsafe_object_audit(
|
2015-03-17 08:32:57 +00:00
|
|
|
AuditLocation(os.path.dirname(path), 'sda', '0',
|
|
|
|
policy=POLICIES.legacy))
|
2015-08-06 00:55:36 +05:30
|
|
|
self.assertEqual(auditor_worker.errors, 1)
|
2013-09-11 22:42:19 -07:00
|
|
|
|
2016-03-15 17:09:21 -07:00
|
|
|
def test_audit_location_gets_quarantined(self):
|
|
|
|
auditor_worker = auditor.AuditorWorker(self.conf, self.logger,
|
|
|
|
self.rcache, self.devices)
|
|
|
|
|
|
|
|
location = AuditLocation(self.disk_file._datadir, 'sda', '0',
|
|
|
|
policy=self.disk_file.policy)
|
|
|
|
|
|
|
|
# instead of a datadir, we'll make a file!
|
|
|
|
mkdirs(os.path.dirname(self.disk_file._datadir))
|
|
|
|
open(self.disk_file._datadir, 'w')
|
|
|
|
|
|
|
|
# after we turn the crank ...
|
|
|
|
auditor_worker.object_audit(location)
|
|
|
|
|
|
|
|
# ... it should get quarantined
|
|
|
|
self.assertFalse(os.path.exists(self.disk_file._datadir))
|
|
|
|
self.assertEqual(1, auditor_worker.quarantines)
|
|
|
|
|
|
|
|
def test_rsync_tempfile_timeout_auto_option(self):
|
|
|
|
# if we don't have access to the replicator config section we'll use
|
|
|
|
# our default
|
|
|
|
auditor_worker = auditor.AuditorWorker(self.conf, self.logger,
|
|
|
|
self.rcache, self.devices)
|
|
|
|
self.assertEqual(auditor_worker.rsync_tempfile_timeout, 86400)
|
|
|
|
# if the rsync_tempfile_timeout option is set explicitly we use that
|
|
|
|
self.conf['rsync_tempfile_timeout'] = '1800'
|
|
|
|
auditor_worker = auditor.AuditorWorker(self.conf, self.logger,
|
|
|
|
self.rcache, self.devices)
|
|
|
|
self.assertEqual(auditor_worker.rsync_tempfile_timeout, 1800)
|
|
|
|
# if we have a real config we can be a little smarter
|
|
|
|
config_path = os.path.join(self.testdir, 'objserver.conf')
|
|
|
|
stub_config = """
|
|
|
|
[object-auditor]
|
|
|
|
rsync_tempfile_timeout = auto
|
|
|
|
"""
|
|
|
|
with open(config_path, 'w') as f:
|
|
|
|
f.write(textwrap.dedent(stub_config))
|
|
|
|
# the Daemon loader will hand the object-auditor config to the
|
|
|
|
# auditor who will build the workers from it
|
|
|
|
conf = readconf(config_path, 'object-auditor')
|
|
|
|
auditor_worker = auditor.AuditorWorker(conf, self.logger,
|
|
|
|
self.rcache, self.devices)
|
|
|
|
# if there is no object-replicator section we still have to fall back
|
|
|
|
# to default because we can't parse the config for that section!
|
|
|
|
self.assertEqual(auditor_worker.rsync_tempfile_timeout, 86400)
|
|
|
|
stub_config = """
|
|
|
|
[object-replicator]
|
|
|
|
[object-auditor]
|
|
|
|
rsync_tempfile_timeout = auto
|
|
|
|
"""
|
2016-07-25 20:10:44 +05:30
|
|
|
with open(config_path, 'w') as f:
|
2016-03-15 17:09:21 -07:00
|
|
|
f.write(textwrap.dedent(stub_config))
|
|
|
|
conf = readconf(config_path, 'object-auditor')
|
|
|
|
auditor_worker = auditor.AuditorWorker(conf, self.logger,
|
|
|
|
self.rcache, self.devices)
|
|
|
|
# if the object-replicator section will parse but does not override
|
|
|
|
# the default rsync_timeout we assume the default rsync_timeout value
|
|
|
|
# and add 15mins
|
|
|
|
self.assertEqual(auditor_worker.rsync_tempfile_timeout,
|
|
|
|
replicator.DEFAULT_RSYNC_TIMEOUT + 900)
|
|
|
|
stub_config = """
|
|
|
|
[DEFAULT]
|
|
|
|
reclaim_age = 1209600
|
|
|
|
[object-replicator]
|
|
|
|
rsync_timeout = 3600
|
|
|
|
[object-auditor]
|
|
|
|
rsync_tempfile_timeout = auto
|
|
|
|
"""
|
2016-07-25 20:10:44 +05:30
|
|
|
with open(config_path, 'w') as f:
|
2016-03-15 17:09:21 -07:00
|
|
|
f.write(textwrap.dedent(stub_config))
|
|
|
|
conf = readconf(config_path, 'object-auditor')
|
|
|
|
auditor_worker = auditor.AuditorWorker(conf, self.logger,
|
|
|
|
self.rcache, self.devices)
|
|
|
|
# if there is an object-replicator section with a rsync_timeout
|
|
|
|
# configured we'll use that value (3600) + 900
|
|
|
|
self.assertEqual(auditor_worker.rsync_tempfile_timeout, 3600 + 900)
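The 'auto' resolution exercised in the test above can be summarized in
a short sketch (a paraphrase of the behaviour asserted here, not the
auditor's own code; DEFAULT_RSYNC_TIMEOUT is the constant the test
references from swift.obj.replicator):

    from swift.obj.replicator import DEFAULT_RSYNC_TIMEOUT

    def resolve_rsync_tempfile_timeout(auditor_conf, replicator_conf=None):
        value = auditor_conf.get('rsync_tempfile_timeout', '86400')
        if value != 'auto':
            return int(value)
        if replicator_conf is None:
            # no parseable [object-replicator] section -> one-day default
            return 86400
        # the replicator's rsync_timeout (or its default) plus 15 minutes
        return int(replicator_conf.get('rsync_timeout',
                                       DEFAULT_RSYNC_TIMEOUT)) + 900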
|
|
|
|
|
|
|
|
def test_inprogress_rsync_tempfiles_get_cleaned_up(self):
|
|
|
|
auditor_worker = auditor.AuditorWorker(self.conf, self.logger,
|
|
|
|
self.rcache, self.devices)
|
|
|
|
|
|
|
|
location = AuditLocation(self.disk_file._datadir, 'sda', '0',
|
|
|
|
policy=self.disk_file.policy)
|
|
|
|
|
2016-07-26 12:36:50 +02:00
|
|
|
data = b'VERIFY'
|
2020-09-11 16:28:11 -04:00
|
|
|
etag = md5(usedforsecurity=False)
|
2016-03-15 17:09:21 -07:00
|
|
|
timestamp = str(normalize_timestamp(time.time()))
|
|
|
|
with self.disk_file.create() as writer:
|
|
|
|
writer.write(data)
|
|
|
|
etag.update(data)
|
|
|
|
metadata = {
|
|
|
|
'ETag': etag.hexdigest(),
|
|
|
|
'X-Timestamp': timestamp,
|
|
|
|
'Content-Length': str(os.fstat(writer._fd).st_size),
|
|
|
|
}
|
|
|
|
writer.put(metadata)
|
|
|
|
writer.commit(Timestamp(timestamp))
|
|
|
|
|
|
|
|
datafilename = None
|
|
|
|
datadir_files = os.listdir(self.disk_file._datadir)
|
|
|
|
for filename in datadir_files:
|
|
|
|
if filename.endswith('.data'):
|
|
|
|
datafilename = filename
|
|
|
|
break
|
|
|
|
else:
|
|
|
|
self.fail('Did not find .data file in %r: %r' %
|
|
|
|
(self.disk_file._datadir, datadir_files))
|
|
|
|
rsynctempfile_path = os.path.join(self.disk_file._datadir,
|
|
|
|
'.%s.9ILVBL' % datafilename)
|
|
|
|
open(rsynctempfile_path, 'w')
|
|
|
|
# sanity check we have an extra file
|
|
|
|
rsync_files = os.listdir(self.disk_file._datadir)
|
|
|
|
self.assertEqual(len(datadir_files) + 1, len(rsync_files))
|
|
|
|
|
|
|
|
# and after we turn the crank ...
|
|
|
|
auditor_worker.object_audit(location)
|
|
|
|
|
|
|
|
# ... we've still got the rsync file
|
|
|
|
self.assertEqual(rsync_files, os.listdir(self.disk_file._datadir))
|
|
|
|
|
|
|
|
# and we'll keep it - depending on the rsync_tempfile_timeout
|
|
|
|
self.assertEqual(auditor_worker.rsync_tempfile_timeout, 86400)
|
|
|
|
self.conf['rsync_tempfile_timeout'] = '3600'
|
|
|
|
auditor_worker = auditor.AuditorWorker(self.conf, self.logger,
|
|
|
|
self.rcache, self.devices)
|
|
|
|
self.assertEqual(auditor_worker.rsync_tempfile_timeout, 3600)
|
|
|
|
now = time.time() + 1900
|
|
|
|
with mock.patch('swift.obj.auditor.time.time',
|
|
|
|
return_value=now):
|
|
|
|
auditor_worker.object_audit(location)
|
|
|
|
self.assertEqual(rsync_files, os.listdir(self.disk_file._datadir))
|
|
|
|
|
|
|
|
# but *tomorrow* when we run
|
|
|
|
tomorrow = time.time() + 86400
|
|
|
|
with mock.patch('swift.obj.auditor.time.time',
|
|
|
|
return_value=tomorrow):
|
|
|
|
auditor_worker.object_audit(location)
|
|
|
|
|
|
|
|
# ... we'll totally clean that stuff up!
|
|
|
|
self.assertEqual(datadir_files, os.listdir(self.disk_file._datadir))
|
|
|
|
|
|
|
|
# but if we have some random crazy file in there
|
|
|
|
random_crazy_file_path = os.path.join(self.disk_file._datadir,
|
|
|
|
'.random.crazy.file')
|
|
|
|
open(random_crazy_file_path, 'w')
|
|
|
|
|
|
|
|
tomorrow = time.time() + 86400
|
|
|
|
with mock.patch('swift.obj.auditor.time.time',
|
|
|
|
return_value=tomorrow):
|
|
|
|
auditor_worker.object_audit(location)
|
|
|
|
|
|
|
|
# that's someone else's problem
|
|
|
|
self.assertIn(os.path.basename(random_crazy_file_path),
|
|
|
|
os.listdir(self.disk_file._datadir))
|
|
|
|
|
2013-06-25 15:16:35 -04:00
|
|
|
def test_generic_exception_handling(self):
|
2014-02-24 11:24:56 +00:00
|
|
|
auditor_worker = auditor.AuditorWorker(self.conf, self.logger,
|
|
|
|
self.rcache, self.devices)
|
2014-06-13 10:33:03 +00:00
|
|
|
# pretend that we logged (and reset counters) just now
|
|
|
|
auditor_worker.last_logged = time.time()
|
2013-06-25 15:16:35 -04:00
|
|
|
timestamp = str(normalize_timestamp(time.time()))
|
2013-09-13 13:55:10 -06:00
|
|
|
pre_errors = auditor_worker.errors
|
2016-07-26 12:36:50 +02:00
|
|
|
data = b'0' * 1024
|
2020-09-11 16:28:11 -04:00
|
|
|
etag = md5(usedforsecurity=False)
|
2013-09-03 10:26:39 -04:00
|
|
|
with self.disk_file.create() as writer:
|
2013-06-25 15:16:35 -04:00
|
|
|
writer.write(data)
|
|
|
|
etag.update(data)
|
|
|
|
etag = etag.hexdigest()
|
|
|
|
metadata = {
|
|
|
|
'ETag': etag,
|
|
|
|
'X-Timestamp': timestamp,
|
|
|
|
'Content-Length': str(os.fstat(writer._fd).st_size),
|
2013-06-25 15:16:35 -04:00
|
|
|
}
|
|
|
|
writer.put(metadata)
|
2016-01-12 14:18:30 -08:00
|
|
|
writer.commit(Timestamp(timestamp))
|
2015-03-17 08:32:57 +00:00
|
|
|
with mock.patch('swift.obj.diskfile.DiskFileManager.diskfile_cls',
|
|
|
|
lambda *_: 1 / 0):
|
2013-09-13 13:55:10 -06:00
|
|
|
auditor_worker.audit_all_objects()
|
2015-08-06 00:55:36 +05:30
|
|
|
self.assertEqual(auditor_worker.errors, pre_errors + 1)
|
2010-12-28 14:54:00 -08:00
|
|
|
|
2010-12-17 00:27:08 -08:00
|
|
|
def test_object_run_once_pass(self):
|
2014-02-24 11:24:56 +00:00
|
|
|
auditor_worker = auditor.AuditorWorker(self.conf, self.logger,
|
|
|
|
self.rcache, self.devices)
|
2013-09-13 13:55:10 -06:00
|
|
|
auditor_worker.log_time = 0
|
2010-12-17 00:27:08 -08:00
|
|
|
timestamp = str(normalize_timestamp(time.time()))
|
2013-09-13 13:55:10 -06:00
|
|
|
pre_quarantines = auditor_worker.quarantines
|
2016-07-26 12:36:50 +02:00
|
|
|
data = b'0' * 1024
|
2014-03-17 18:38:21 -07:00
|
|
|
|
|
|
|
def write_file(df):
|
|
|
|
with df.create() as writer:
|
|
|
|
writer.write(data)
|
|
|
|
metadata = {
|
2020-09-11 16:28:11 -04:00
|
|
|
'ETag': md5(data, usedforsecurity=False).hexdigest(),
|
2014-03-17 18:38:21 -07:00
|
|
|
'X-Timestamp': timestamp,
|
|
|
|
'Content-Length': str(os.fstat(writer._fd).st_size),
|
|
|
|
}
|
|
|
|
writer.put(metadata)
|
2016-01-12 14:18:30 -08:00
|
|
|
writer.commit(Timestamp(timestamp))
|
2014-03-17 18:38:21 -07:00
|
|
|
|
|
|
|
# policy 0
|
|
|
|
write_file(self.disk_file)
|
|
|
|
# policy 1
|
|
|
|
write_file(self.disk_file_p1)
|
2016-01-12 14:18:30 -08:00
|
|
|
# policy 2
|
|
|
|
write_file(self.disk_file_ec)
|
2014-03-17 18:38:21 -07:00
|
|
|
|
2013-09-13 13:55:10 -06:00
|
|
|
auditor_worker.audit_all_objects()
|
2015-08-06 00:55:36 +05:30
|
|
|
self.assertEqual(auditor_worker.quarantines, pre_quarantines)
|
2014-03-17 18:38:21 -07:00
|
|
|
# 1 object per policy falls into 1024 bucket
|
2016-01-12 14:18:30 -08:00
|
|
|
self.assertEqual(auditor_worker.stats_buckets[1024], 3)
|
2015-08-06 00:55:36 +05:30
|
|
|
self.assertEqual(auditor_worker.stats_buckets[10240], 0)
|
2010-12-17 00:27:08 -08:00
|
|
|
|
2014-04-12 16:39:29 -07:00
|
|
|
# pick up some additional code coverage, large file
|
2016-07-26 12:36:50 +02:00
|
|
|
data = b'0' * 1024 * 1024
|
2016-01-12 14:18:30 -08:00
|
|
|
for df in (self.disk_file, self.disk_file_ec):
|
|
|
|
with df.create() as writer:
|
|
|
|
writer.write(data)
|
|
|
|
metadata = {
|
2020-09-11 16:28:11 -04:00
|
|
|
'ETag': md5(data, usedforsecurity=False).hexdigest(),
|
2016-01-12 14:18:30 -08:00
|
|
|
'X-Timestamp': timestamp,
|
|
|
|
'Content-Length': str(os.fstat(writer._fd).st_size),
|
|
|
|
}
|
|
|
|
writer.put(metadata)
|
|
|
|
writer.commit(Timestamp(timestamp))
|
2014-04-12 16:39:29 -07:00
|
|
|
auditor_worker.audit_all_objects(device_dirs=['sda', 'sdb'])
|
2015-08-06 00:55:36 +05:30
|
|
|
self.assertEqual(auditor_worker.quarantines, pre_quarantines)
|
2014-03-17 18:38:21 -07:00
|
|
|
# still have the 1024 byte object left in policy-1 (plus the
|
2016-01-12 14:18:30 -08:00
|
|
|
# stats from the original 3)
|
|
|
|
self.assertEqual(auditor_worker.stats_buckets[1024], 4)
|
2015-08-06 00:55:36 +05:30
|
|
|
self.assertEqual(auditor_worker.stats_buckets[10240], 0)
|
2014-03-17 18:38:21 -07:00
|
|
|
# and then policy-0 disk_file was re-written as a larger object
|
2016-01-12 14:18:30 -08:00
|
|
|
self.assertEqual(auditor_worker.stats_buckets['OVER'], 2)
|
2014-04-12 16:39:29 -07:00
|
|
|
|
|
|
|
# pick up even more additional code coverage, misc paths
|
|
|
|
auditor_worker.log_time = -1
|
|
|
|
auditor_worker.stats_sizes = []
|
|
|
|
auditor_worker.audit_all_objects(device_dirs=['sda', 'sdb'])
|
2015-08-06 00:55:36 +05:30
|
|
|
self.assertEqual(auditor_worker.quarantines, pre_quarantines)
|
2016-01-12 14:18:30 -08:00
|
|
|
self.assertEqual(auditor_worker.stats_buckets[1024], 4)
|
2015-08-06 00:55:36 +05:30
|
|
|
self.assertEqual(auditor_worker.stats_buckets[10240], 0)
|
2016-01-12 14:18:30 -08:00
|
|
|
self.assertEqual(auditor_worker.stats_buckets['OVER'], 2)
|
2014-04-12 16:39:29 -07:00
|
|
|
|
2014-03-26 16:32:07 +00:00
|
|
|
def test_object_run_logging(self):
|
2017-09-01 14:15:45 -07:00
|
|
|
auditor_worker = auditor.AuditorWorker(self.conf, self.logger,
|
2014-03-26 16:32:07 +00:00
|
|
|
self.rcache, self.devices)
|
|
|
|
auditor_worker.audit_all_objects(device_dirs=['sda'])
|
2017-09-01 14:15:45 -07:00
|
|
|
log_lines = self.logger.get_lines_for_level('info')
|
2016-12-08 15:44:48 +07:00
|
|
|
self.assertGreater(len(log_lines), 0)
|
2019-04-15 15:12:30 +08:00
|
|
|
self.assertIn('ALL - parallel, sda', log_lines[0])
|
2014-03-26 16:32:07 +00:00
|
|
|
|
2017-09-01 14:15:45 -07:00
|
|
|
self.logger.clear()
|
|
|
|
auditor_worker = auditor.AuditorWorker(self.conf, self.logger,
|
2014-03-26 16:32:07 +00:00
|
|
|
self.rcache, self.devices,
|
|
|
|
zero_byte_only_at_fps=50)
|
|
|
|
auditor_worker.audit_all_objects(device_dirs=['sda'])
|
2017-09-01 14:15:45 -07:00
|
|
|
log_lines = self.logger.get_lines_for_level('info')
|
2016-12-08 15:44:48 +07:00
|
|
|
self.assertGreater(len(log_lines), 0)
|
2019-04-15 15:12:30 +08:00
|
|
|
self.assertIn('ZBF - sda', log_lines[0])
|
2014-03-26 16:32:07 +00:00
|
|
|
|
2017-07-17 14:29:53 +01:00
|
|
|
def test_object_run_recon_cache(self):
|
|
|
|
ts = Timestamp(time.time())
|
2019-02-25 13:42:37 -08:00
|
|
|
data = b'test_data'
|
2017-07-17 14:29:53 +01:00
|
|
|
|
|
|
|
with self.disk_file.create() as writer:
|
|
|
|
writer.write(data)
|
|
|
|
metadata = {
|
2020-09-11 16:28:11 -04:00
|
|
|
'ETag': md5(data, usedforsecurity=False).hexdigest(),
|
2017-07-17 14:29:53 +01:00
|
|
|
'X-Timestamp': ts.normal,
|
|
|
|
'Content-Length': str(os.fstat(writer._fd).st_size),
|
|
|
|
}
|
|
|
|
writer.put(metadata)
|
|
|
|
writer.commit(ts)
|
|
|
|
|
|
|
|
# all devices
|
|
|
|
auditor_worker = auditor.AuditorWorker(self.conf, self.logger,
|
|
|
|
self.rcache, self.devices)
|
|
|
|
auditor_worker.audit_all_objects()
|
|
|
|
with open(self.rcache) as fd:
|
|
|
|
actual_rcache = json.load(fd)
|
|
|
|
expected = {'object_auditor_stats_ALL':
|
|
|
|
{'passes': 1, 'errors': 0, 'audit_time': mock.ANY,
|
|
|
|
'start_time': mock.ANY, 'quarantined': 0,
|
|
|
|
'bytes_processed': 9}}
|
|
|
|
with open(self.rcache) as fd:
|
|
|
|
actual_rcache = json.load(fd)
|
|
|
|
self.assertEqual(expected, actual_rcache)
|
|
|
|
|
|
|
|
auditor_worker = auditor.AuditorWorker(self.conf, self.logger,
|
|
|
|
self.rcache, self.devices,
|
|
|
|
zero_byte_only_at_fps=50)
|
|
|
|
auditor_worker.audit_all_objects()
|
|
|
|
self.assertEqual(expected, actual_rcache)
|
|
|
|
with open(self.rcache) as fd:
|
|
|
|
actual_rcache = json.load(fd)
|
|
|
|
expected.update({
|
|
|
|
'object_auditor_stats_ZBF':
|
|
|
|
{'passes': 1, 'errors': 0, 'audit_time': mock.ANY,
|
|
|
|
'start_time': mock.ANY, 'quarantined': 0,
|
|
|
|
'bytes_processed': 0}})
|
|
|
|
self.assertEqual(expected, actual_rcache)
|
|
|
|
|
|
|
|
# specific devices
|
|
|
|
os.unlink(self.rcache)
|
|
|
|
auditor_worker = auditor.AuditorWorker(self.conf, self.logger,
|
|
|
|
self.rcache, self.devices)
|
|
|
|
auditor_worker.audit_all_objects(device_dirs=['sda'])
|
|
|
|
with open(self.rcache) as fd:
|
|
|
|
actual_rcache = json.load(fd)
|
|
|
|
expected = {'object_auditor_stats_ALL':
|
|
|
|
{'sda': {'passes': 1, 'errors': 0, 'audit_time': mock.ANY,
|
|
|
|
'start_time': mock.ANY, 'quarantined': 0,
|
|
|
|
'bytes_processed': 9}}}
|
|
|
|
with open(self.rcache) as fd:
|
|
|
|
actual_rcache = json.load(fd)
|
|
|
|
self.assertEqual(expected, actual_rcache)
|
|
|
|
|
|
|
|
auditor_worker = auditor.AuditorWorker(self.conf, self.logger,
|
|
|
|
self.rcache, self.devices,
|
|
|
|
zero_byte_only_at_fps=50)
|
|
|
|
auditor_worker.audit_all_objects(device_dirs=['sda'])
|
|
|
|
self.assertEqual(expected, actual_rcache)
|
|
|
|
with open(self.rcache) as fd:
|
|
|
|
actual_rcache = json.load(fd)
|
|
|
|
expected.update({
|
|
|
|
'object_auditor_stats_ZBF':
|
|
|
|
{'sda': {'passes': 1, 'errors': 0, 'audit_time': mock.ANY,
|
|
|
|
'start_time': mock.ANY, 'quarantined': 0,
|
|
|
|
'bytes_processed': 0}}})
|
|
|
|
self.assertEqual(expected, actual_rcache)
|
|
|
|
|
2010-12-28 14:54:00 -08:00
|
|
|
def test_object_run_once_no_sda(self):
|
2014-02-24 11:24:56 +00:00
|
|
|
auditor_worker = auditor.AuditorWorker(self.conf, self.logger,
|
|
|
|
self.rcache, self.devices)
|
2010-12-17 00:27:08 -08:00
|
|
|
timestamp = str(normalize_timestamp(time.time()))
|
2013-09-13 13:55:10 -06:00
|
|
|
pre_quarantines = auditor_worker.quarantines
|
2014-06-13 10:33:03 +00:00
|
|
|
# pretend that we logged (and reset counters) just now
|
|
|
|
auditor_worker.last_logged = time.time()
|
2016-07-26 12:36:50 +02:00
|
|
|
data = b'0' * 1024
|
2020-09-11 16:28:11 -04:00
|
|
|
etag = md5(usedforsecurity=False)
|
2013-09-03 10:26:39 -04:00
|
|
|
with self.disk_file.create() as writer:
|
2013-04-18 20:42:36 -04:00
|
|
|
writer.write(data)
|
2010-12-17 00:27:08 -08:00
|
|
|
etag.update(data)
|
|
|
|
etag = etag.hexdigest()
|
|
|
|
metadata = {
|
|
|
|
'ETag': etag,
|
|
|
|
'X-Timestamp': timestamp,
|
|
|
|
'Content-Length': str(os.fstat(writer._fd).st_size),
|
2010-12-17 00:27:08 -08:00
|
|
|
}
|
2013-04-18 20:42:36 -04:00
|
|
|
writer.put(metadata)
|
2016-07-26 12:36:50 +02:00
|
|
|
os.write(writer._fd, b'extra_data')
|
2016-01-12 14:18:30 -08:00
|
|
|
writer.commit(Timestamp(timestamp))
|
2013-09-13 13:55:10 -06:00
|
|
|
auditor_worker.audit_all_objects()
|
2015-08-06 00:55:36 +05:30
|
|
|
self.assertEqual(auditor_worker.quarantines, pre_quarantines + 1)
|
2010-12-17 00:27:08 -08:00
|
|
|
|
2010-12-28 14:54:00 -08:00
|
|
|
def test_object_run_once_multi_devices(self):
|
2014-02-24 11:24:56 +00:00
|
|
|
auditor_worker = auditor.AuditorWorker(self.conf, self.logger,
|
|
|
|
self.rcache, self.devices)
|
2014-06-13 10:33:03 +00:00
|
|
|
# pretend that we logged (and reset counters) just now
|
|
|
|
auditor_worker.last_logged = time.time()
|
2010-12-28 14:54:00 -08:00
|
|
|
timestamp = str(normalize_timestamp(time.time()))
|
2013-09-13 13:55:10 -06:00
|
|
|
pre_quarantines = auditor_worker.quarantines
|
2016-07-26 12:36:50 +02:00
|
|
|
data = b'0' * 10
|
2020-09-11 16:28:11 -04:00
|
|
|
etag = md5(usedforsecurity=False)
|
2013-09-03 10:26:39 -04:00
|
|
|
with self.disk_file.create() as writer:
|
2013-04-18 20:42:36 -04:00
|
|
|
writer.write(data)
|
2010-12-28 14:54:00 -08:00
|
|
|
etag.update(data)
|
|
|
|
etag = etag.hexdigest()
|
|
|
|
metadata = {
|
|
|
|
'ETag': etag,
|
|
|
|
'X-Timestamp': timestamp,
|
|
|
|
'Content-Length': str(os.fstat(writer._fd).st_size),
|
2010-12-28 14:54:00 -08:00
|
|
|
}
|
2013-04-18 20:42:36 -04:00
|
|
|
writer.put(metadata)
|
2016-01-12 14:18:30 -08:00
|
|
|
writer.commit(Timestamp(timestamp))
|
2013-09-13 13:55:10 -06:00
|
|
|
auditor_worker.audit_all_objects()
|
2015-03-17 08:32:57 +00:00
|
|
|
self.disk_file = self.df_mgr.get_diskfile('sda', '0', 'a', 'c', 'ob',
|
|
|
|
policy=POLICIES.legacy)
|
2016-07-26 12:36:50 +02:00
|
|
|
data = b'1' * 10
|
2020-09-11 16:28:11 -04:00
|
|
|
etag = md5(usedforsecurity=False)
|
2013-09-03 10:26:39 -04:00
|
|
|
with self.disk_file.create() as writer:
|
2013-04-18 20:42:36 -04:00
|
|
|
writer.write(data)
|
2010-12-28 14:54:00 -08:00
|
|
|
etag.update(data)
|
|
|
|
etag = etag.hexdigest()
|
|
|
|
metadata = {
|
|
|
|
'ETag': etag,
|
|
|
|
'X-Timestamp': timestamp,
|
|
|
|
'Content-Length': str(os.fstat(writer._fd).st_size),
|
2010-12-28 14:54:00 -08:00
|
|
|
}
|
2013-04-18 20:42:36 -04:00
|
|
|
writer.put(metadata)
|
2016-01-12 14:18:30 -08:00
|
|
|
writer.commit(Timestamp(timestamp))
|
2016-07-26 12:36:50 +02:00
|
|
|
os.write(writer._fd, b'extra_data')
|
2013-09-13 13:55:10 -06:00
|
|
|
auditor_worker.audit_all_objects()
|
2015-08-06 00:55:36 +05:30
|
|
|
self.assertEqual(auditor_worker.quarantines, pre_quarantines + 1)
|
2010-12-28 14:54:00 -08:00
|
|
|
|
2011-02-14 20:25:40 +00:00
|
|
|
def test_object_run_fast_track_non_zero(self):
|
|
|
|
self.auditor = auditor.ObjectAuditor(self.conf)
|
|
|
|
self.auditor.log_time = 0
|
2016-07-26 12:36:50 +02:00
|
|
|
data = b'0' * 1024
|
2020-09-11 16:28:11 -04:00
|
|
|
etag = md5(usedforsecurity=False)
|
2013-09-03 10:26:39 -04:00
|
|
|
with self.disk_file.create() as writer:
|
2013-04-18 20:42:36 -04:00
|
|
|
writer.write(data)
|
2011-02-14 20:25:40 +00:00
|
|
|
etag.update(data)
|
|
|
|
etag = etag.hexdigest()
|
2016-01-12 14:18:30 -08:00
|
|
|
timestamp = str(normalize_timestamp(time.time()))
|
2011-02-14 20:25:40 +00:00
|
|
|
metadata = {
|
|
|
|
'ETag': etag,
|
2016-01-12 14:18:30 -08:00
|
|
|
'X-Timestamp': timestamp,
|
|
|
|
'Content-Length': str(os.fstat(writer._fd).st_size),
|
2011-02-14 20:25:40 +00:00
|
|
|
}
|
2013-04-18 20:42:36 -04:00
|
|
|
writer.put(metadata)
|
2016-01-12 14:18:30 -08:00
|
|
|
writer.commit(Timestamp(timestamp))
|
2020-09-11 16:28:11 -04:00
|
|
|
etag = md5(usedforsecurity=False)
|
2016-07-26 12:36:50 +02:00
|
|
|
etag.update(b'1' + b'0' * 1023)
|
2011-02-14 20:25:40 +00:00
|
|
|
etag = etag.hexdigest()
|
|
|
|
metadata['ETag'] = etag
|
|
|
|
write_metadata(writer._fd, metadata)
|
2011-02-14 20:25:40 +00:00
|
|
|
|
|
|
|
quarantine_path = os.path.join(self.devices,
|
|
|
|
'sda', 'quarantined', 'objects')
|
2014-02-24 11:24:56 +00:00
|
|
|
kwargs = {'mode': 'once'}
|
|
|
|
kwargs['zero_byte_fps'] = 50
|
|
|
|
self.auditor.run_audit(**kwargs)
|
2011-02-14 20:25:40 +00:00
|
|
|
self.assertFalse(os.path.isdir(quarantine_path))
|
2014-02-24 11:24:56 +00:00
|
|
|
del kwargs['zero_byte_fps']
|
2017-07-13 15:44:56 +02:00
|
|
|
clear_auditor_status(self.devices, 'objects')
|
2014-02-24 11:24:56 +00:00
|
|
|
self.auditor.run_audit(**kwargs)
|
2011-02-14 20:25:40 +00:00
|
|
|
self.assertTrue(os.path.isdir(quarantine_path))
|
|
|
|
|
2015-06-08 19:40:56 +01:00
|
|
|
def setup_bad_zero_byte(self, timestamp=None):
|
|
|
|
if timestamp is None:
|
2017-04-27 14:19:00 -07:00
|
|
|
timestamp = Timestamp.now()
|
2011-02-14 20:25:40 +00:00
|
|
|
self.auditor = auditor.ObjectAuditor(self.conf)
|
|
|
|
self.auditor.log_time = 0
|
2020-09-11 16:28:11 -04:00
|
|
|
etag = md5(usedforsecurity=False)
|
2013-09-03 10:26:39 -04:00
|
|
|
with self.disk_file.create() as writer:
|
2011-02-14 20:25:40 +00:00
|
|
|
etag = etag.hexdigest()
|
|
|
|
metadata = {
|
|
|
|
'ETag': etag,
|
2015-06-08 19:40:56 +01:00
|
|
|
'X-Timestamp': timestamp.internal,
|
2011-02-14 20:25:40 +00:00
|
|
|
'Content-Length': 10,
|
|
|
|
}
|
2013-04-18 20:42:36 -04:00
|
|
|
writer.put(metadata)
|
2016-01-12 14:18:30 -08:00
|
|
|
writer.commit(Timestamp(timestamp))
|
2020-09-11 16:28:11 -04:00
|
|
|
etag = md5(usedforsecurity=False)
|
2011-02-14 20:25:40 +00:00
|
|
|
etag = etag.hexdigest()
|
|
|
|
metadata['ETag'] = etag
|
|
|
|
write_metadata(writer._fd, metadata)
|
2011-02-21 16:37:12 -08:00
|
|
|
|
|
|
|
def test_object_run_fast_track_all(self):
|
|
|
|
self.setup_bad_zero_byte()
|
2014-02-24 11:24:56 +00:00
|
|
|
kwargs = {'mode': 'once'}
|
|
|
|
self.auditor.run_audit(**kwargs)
|
2011-02-14 20:25:40 +00:00
|
|
|
quarantine_path = os.path.join(self.devices,
|
|
|
|
'sda', 'quarantined', 'objects')
|
|
|
|
self.assertTrue(os.path.isdir(quarantine_path))
|
|
|
|
|
2011-02-21 16:37:12 -08:00
|
|
|
def test_object_run_fast_track_zero(self):
|
|
|
|
self.setup_bad_zero_byte()
|
2014-02-24 11:24:56 +00:00
|
|
|
kwargs = {'mode': 'once'}
|
|
|
|
kwargs['zero_byte_fps'] = 50
|
2016-02-15 19:17:01 +00:00
|
|
|
|
|
|
|
called_args = [0]
|
|
|
|
|
|
|
|
def mock_get_auditor_status(path, logger, audit_type):
|
|
|
|
called_args[0] = audit_type
|
|
|
|
return get_auditor_status(path, logger, audit_type)
|
|
|
|
|
|
|
|
with mock.patch('swift.obj.diskfile.get_auditor_status',
|
|
|
|
mock_get_auditor_status):
|
2020-04-03 10:53:34 +02:00
|
|
|
self.auditor.run_audit(**kwargs)
|
2011-02-21 16:37:12 -08:00
|
|
|
quarantine_path = os.path.join(self.devices,
|
|
|
|
'sda', 'quarantined', 'objects')
|
|
|
|
self.assertTrue(os.path.isdir(quarantine_path))
|
2016-02-15 19:17:01 +00:00
|
|
|
self.assertEqual('ZBF', called_args[0])
|
2010-07-12 17:03:45 -05:00
|
|
|
|
2011-08-30 14:29:19 -07:00
|
|
|
def test_object_run_fast_track_zero_check_closed(self):
|
|
|
|
rat = [False]
|
|
|
|
|
|
|
|
class FakeFile(DiskFile):
|
|
|
|
|
|
|
|
def _quarantine(self, data_file, msg):
|
2011-08-30 14:29:19 -07:00
|
|
|
rat[0] = True
|
|
|
|
DiskFile._quarantine(self, data_file, msg)
|
|
|
|
|
2011-08-30 14:29:19 -07:00
|
|
|
self.setup_bad_zero_byte()
|
2015-03-17 08:32:57 +00:00
|
|
|
with mock.patch('swift.obj.diskfile.DiskFileManager.diskfile_cls',
|
|
|
|
FakeFile):
|
2014-02-24 11:24:56 +00:00
|
|
|
kwargs = {'mode': 'once'}
|
|
|
|
kwargs['zero_byte_fps'] = 50
|
|
|
|
self.auditor.run_audit(**kwargs)
|
2011-08-31 07:28:36 -07:00
|
|
|
quarantine_path = os.path.join(self.devices,
|
|
|
|
'sda', 'quarantined', 'objects')
|
|
|
|
self.assertTrue(os.path.isdir(quarantine_path))
|
|
|
|
self.assertTrue(rat[0])
|
2011-08-30 14:29:19 -07:00
|
|
|
|
2014-09-30 15:08:59 -05:00
|
|
|
    @mock.patch.object(auditor.ObjectAuditor, 'run_audit')
    @mock.patch('os.fork', return_value=0)
    def test_with_inaccessible_object_location(self, mock_os_fork,
                                                mock_run_audit):
        # Need to ensure that any failures in run_audit do
        # not prevent sys.exit() from running. Otherwise we get
        # zombie processes.
        e = OSError('permission denied')
        mock_run_audit.side_effect = e
        self.auditor = auditor.ObjectAuditor(self.conf)
        self.assertRaises(SystemExit, self.auditor.fork_child, self)

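    # A minimal sketch (an assumption about the production code, not a copy
    # of it) of the child-side pattern the assertion above relies on: the
    # forked child must reach sys.exit() even if run_audit() raises.
    #
    #   def fork_child(self, **kwargs):      # hypothetical shape
    #       pid = os.fork()
    #       if pid:
    #           return pid                   # parent
    #       try:
    #           self.run_audit(**kwargs)
    #       finally:
    #           sys.exit()                   # child always exits here
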
    def test_with_only_tombstone(self):
        # sanity check that auditor doesn't touch solitary tombstones
        ts_iter = make_timestamp_iter()
        self.setup_bad_zero_byte(timestamp=next(ts_iter))
        self.disk_file.delete(next(ts_iter))
        files = os.listdir(self.disk_file._datadir)
        self.assertEqual(1, len(files))
        self.assertTrue(files[0].endswith('ts'))
        kwargs = {'mode': 'once'}
        self.auditor.run_audit(**kwargs)
        files_after = os.listdir(self.disk_file._datadir)
        self.assertEqual(files, files_after)

    def test_with_tombstone_and_data(self):
        # rsync replication could leave a tombstone and data file in object
        # dir - verify they are both removed during audit
        ts_iter = make_timestamp_iter()
        ts_tomb = next(ts_iter)
        ts_data = next(ts_iter)
        self.setup_bad_zero_byte(timestamp=ts_data)
        tomb_file_path = os.path.join(self.disk_file._datadir,
                                      '%s.ts' % ts_tomb.internal)
        with open(tomb_file_path, 'wb') as fd:
            write_metadata(fd, {'X-Timestamp': ts_tomb.internal})
        files = os.listdir(self.disk_file._datadir)
        self.assertEqual(2, len(files))
        self.assertTrue(os.path.basename(tomb_file_path) in files, files)
        kwargs = {'mode': 'once'}
        self.auditor.run_audit(**kwargs)
        self.assertFalse(os.path.exists(self.disk_file._datadir))

    def _audit_tombstone(self, conf, ts_tomb, zero_byte_fps=0):
        self.auditor = auditor.ObjectAuditor(conf)
        self.auditor.log_time = 0
        # create tombstone and hashes.pkl file, ensuring the tombstone is not
        # reclaimed by mocking time to be the tombstone time
        with mock.patch('time.time', return_value=float(ts_tomb)):
            # this delete will create an invalid hashes entry
            self.disk_file.delete(ts_tomb)
            # this get_hashes call will truncate the invalid hashes entry
            self.disk_file.manager.get_hashes(
                'sda', '0', [], self.disk_file.policy)
        suffix = basename(dirname(self.disk_file._datadir))
        part_dir = dirname(dirname(self.disk_file._datadir))
        # sanity checks...
        self.assertEqual(['%s.ts' % ts_tomb.internal],
                         os.listdir(self.disk_file._datadir))
        self.assertTrue(os.path.exists(os.path.join(part_dir, HASH_FILE)))
        hash_invalid = os.path.join(part_dir, HASH_INVALIDATIONS_FILE)
        self.assertTrue(os.path.exists(hash_invalid))
        with open(hash_invalid, 'rb') as fp:
            self.assertEqual(b'', fp.read().strip(b'\n'))
        # Run auditor
        self.auditor.run_audit(mode='once', zero_byte_fps=zero_byte_fps)
        # sanity check - auditor should not remove tombstone file
        self.assertEqual(['%s.ts' % ts_tomb.internal],
                         os.listdir(self.disk_file._datadir))
        return part_dir, suffix

    def test_non_reclaimable_tombstone(self):
        # audit with a recent tombstone
        ts_tomb = Timestamp(time.time() - 55)
        part_dir, suffix = self._audit_tombstone(self.conf, ts_tomb)
        self.assertTrue(os.path.exists(os.path.join(part_dir, HASH_FILE)))
        hash_invalid = os.path.join(part_dir, HASH_INVALIDATIONS_FILE)
        self.assertTrue(os.path.exists(hash_invalid))
        with open(hash_invalid, 'rb') as fp:
            self.assertEqual(b'', fp.read().strip(b'\n'))

    def test_reclaimable_tombstone(self):
        # audit with a reclaimable tombstone
        ts_tomb = Timestamp(time.time() - 604800)
        part_dir, suffix = self._audit_tombstone(self.conf, ts_tomb)
        self.assertTrue(os.path.exists(os.path.join(part_dir, HASH_FILE)))
        hash_invalid = os.path.join(part_dir, HASH_INVALIDATIONS_FILE)
        self.assertTrue(os.path.exists(hash_invalid))
        with open(hash_invalid, 'rb') as fp:
            hash_val = fp.read()
        self.assertEqual(suffix.encode('ascii'), hash_val.strip(b'\n'))

    def test_non_reclaimable_tombstone_with_custom_reclaim_age(self):
        # audit with a tombstone newer than custom reclaim age
        ts_tomb = Timestamp(time.time() - 604800)
        conf = dict(self.conf)
        conf['reclaim_age'] = 2 * 604800
        part_dir, suffix = self._audit_tombstone(conf, ts_tomb)
        self.assertTrue(os.path.exists(os.path.join(part_dir, HASH_FILE)))
        hash_invalid = os.path.join(part_dir, HASH_INVALIDATIONS_FILE)
        self.assertTrue(os.path.exists(hash_invalid))
        with open(hash_invalid, 'rb') as fp:
            self.assertEqual(b'', fp.read().strip(b'\n'))

    def test_reclaimable_tombstone_with_custom_reclaim_age(self):
        # audit with a tombstone older than custom reclaim age
        ts_tomb = Timestamp(time.time() - 55)
        conf = dict(self.conf)
        conf['reclaim_age'] = 10
        part_dir, suffix = self._audit_tombstone(conf, ts_tomb)
        self.assertTrue(os.path.exists(os.path.join(part_dir, HASH_FILE)))
        hash_invalid = os.path.join(part_dir, HASH_INVALIDATIONS_FILE)
        self.assertTrue(os.path.exists(hash_invalid))
        with open(hash_invalid, 'rb') as fp:
            hash_val = fp.read()
        self.assertEqual(suffix.encode('ascii'), hash_val.strip(b'\n'))

    def test_reclaimable_tombstone_with_zero_byte_fps(self):
        # an audit of a tombstone older than reclaim age by a zero_byte_fps
        # worker does not invalidate the hash
        ts_tomb = Timestamp(time.time() - 604800)
        part_dir, suffix = self._audit_tombstone(
            self.conf, ts_tomb, zero_byte_fps=50)
        self.assertTrue(os.path.exists(os.path.join(part_dir, HASH_FILE)))
        hash_invalid = os.path.join(part_dir, HASH_INVALIDATIONS_FILE)
        self.assertTrue(os.path.exists(hash_invalid))
        with open(hash_invalid, 'rb') as fp:
            self.assertEqual(b'', fp.read().strip(b'\n'))

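    # Arithmetic note (derived from the values used in these tests): the
    # 604800 second offset used for the "reclaimable" tombstones above is
    # exactly 7 * 86400, i.e. the one-week default reclaim_age asserted in
    # test_auditor_reclaim_age below, so those tombstones sit right at the
    # default reclaim boundary.
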
    def _test_expired_object_is_ignored(self, zero_byte_fps):
        # verify that an expired object does not get mistaken for a tombstone
        audit = auditor.ObjectAuditor(self.conf, logger=self.logger)
        audit.log_time = 0
        now = time.time()
        write_diskfile(self.disk_file, Timestamp(now - 20),
                       extra_metadata={'X-Delete-At': now - 10})
        files = os.listdir(self.disk_file._datadir)
        self.assertTrue([f for f in files if f.endswith('.data')])  # sanity
        # diskfile write appends to invalid hashes file
        part_dir = dirname(dirname(self.disk_file._datadir))
        hash_invalid = os.path.join(part_dir, HASH_INVALIDATIONS_FILE)
        with open(hash_invalid, 'rb') as fp:
            self.assertEqual(
                basename(dirname(self.disk_file._datadir)).encode('ascii'),
                fp.read().strip(b'\n'))  # sanity check

        # run the auditor...
        with mock.patch.object(auditor, 'dump_recon_cache'):
            audit.run_audit(mode='once', zero_byte_fps=zero_byte_fps)

        # the auditor doesn't touch anything in the invalidation file
        # (i.e. it neither truncates it nor adds an entry)
        with open(hash_invalid, 'rb') as fp:
            self.assertEqual(
                basename(dirname(self.disk_file._datadir)).encode('ascii'),
                fp.read().strip(b'\n'))  # sanity check

        # this get_hashes call will truncate the invalid hashes entry
        self.disk_file.manager.get_hashes(
            'sda', '0', [], self.disk_file.policy)
        with open(hash_invalid, 'rb') as fp:
            self.assertEqual(b'', fp.read().strip(b'\n'))  # sanity check

        # run the auditor, again...
        with mock.patch.object(auditor, 'dump_recon_cache'):
            audit.run_audit(mode='once', zero_byte_fps=zero_byte_fps)

        # verify nothing changed
        self.assertTrue(os.path.exists(self.disk_file._datadir))
        self.assertEqual(files, os.listdir(self.disk_file._datadir))
        self.assertFalse(audit.logger.get_lines_for_level('error'))
        self.assertFalse(audit.logger.get_lines_for_level('warning'))
        # and there was no hash invalidation
        with open(hash_invalid, 'rb') as fp:
            self.assertEqual(b'', fp.read().strip(b'\n'))

    def test_expired_object_is_ignored(self):
        self._test_expired_object_is_ignored(0)

    def test_expired_object_is_ignored_with_zero_byte_fps(self):
        self._test_expired_object_is_ignored(50)

    def test_auditor_reclaim_age(self):
        # if we don't have access to the replicator config section we'll use
        # diskfile's default
        auditor_worker = auditor.AuditorWorker(self.conf, self.logger,
                                               self.rcache, self.devices)
        router = auditor_worker.diskfile_router
        for policy in POLICIES:
            self.assertEqual(router[policy].reclaim_age, 86400 * 7)

        # if the reclaim_age option is set explicitly we use that
        self.conf['reclaim_age'] = '1800'
        auditor_worker = auditor.AuditorWorker(self.conf, self.logger,
                                               self.rcache, self.devices)
        router = auditor_worker.diskfile_router
        for policy in POLICIES:
            self.assertEqual(router[policy].reclaim_age, 1800)

        # if we have a real config we can be a little smarter
        config_path = os.path.join(self.testdir, 'objserver.conf')

        # if there is no object-replicator section we still have to fall back
        # to default because we can't parse the config for that section!
        stub_config = """
        [object-auditor]
        """
        with open(config_path, 'w') as f:
            f.write(textwrap.dedent(stub_config))
        conf = readconf(config_path, 'object-auditor')
        auditor_worker = auditor.AuditorWorker(conf, self.logger,
                                               self.rcache, self.devices)
        router = auditor_worker.diskfile_router
        for policy in POLICIES:
            self.assertEqual(router[policy].reclaim_age, 86400 * 7)

        # verify reclaim_age is taken from the auditor config value
        stub_config = """
        [object-replicator]
        [object-auditor]
        reclaim_age = 60
        """
        with open(config_path, 'w') as f:
            f.write(textwrap.dedent(stub_config))
        conf = readconf(config_path, 'object-auditor')
        auditor_worker = auditor.AuditorWorker(conf, self.logger,
                                               self.rcache, self.devices)
        router = auditor_worker.diskfile_router
        for policy in POLICIES:
            self.assertEqual(router[policy].reclaim_age, 60)

        # verify reclaim_age falls back to replicator config value
        # if there is no auditor config value
        config_path = os.path.join(self.testdir, 'objserver.conf')
        stub_config = """
        [object-replicator]
        reclaim_age = 60
        [object-auditor]
        """
        with open(config_path, 'w') as f:
            f.write(textwrap.dedent(stub_config))
        conf = readconf(config_path, 'object-auditor')
        auditor_worker = auditor.AuditorWorker(conf, self.logger,
                                               self.rcache, self.devices)
        router = auditor_worker.diskfile_router
        for policy in POLICIES:
            self.assertEqual(router[policy].reclaim_age, 60)

        # we'll prefer our own DEFAULT section to the replicator though
        self.assertEqual(auditor_worker.rsync_tempfile_timeout,
                         replicator.DEFAULT_RSYNC_TIMEOUT + 900)
        stub_config = """
        [DEFAULT]
        reclaim_age = 1209600
        [object-replicator]
        reclaim_age = 1800
        [object-auditor]
        """
        with open(config_path, 'w') as f:
            f.write(textwrap.dedent(stub_config))
        conf = readconf(config_path, 'object-auditor')
        auditor_worker = auditor.AuditorWorker(conf, self.logger,
                                               self.rcache, self.devices)
        router = auditor_worker.diskfile_router
        for policy in POLICIES:
            self.assertEqual(router[policy].reclaim_age, 1209600)

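    # Summary of the reclaim_age precedence exercised by
    # test_auditor_reclaim_age above (a reading of its assertions, not an
    # authoritative statement of the config semantics):
    #
    #   [object-auditor] reclaim_age     -> used when set
    #   [DEFAULT] reclaim_age            -> next, preferred over replicator
    #   [object-replicator] reclaim_age  -> next fallback
    #   built-in default                 -> 86400 * 7 (one week)
    #
    # For example, the test's final stub config resolves to 1209600:
    #
    #   [DEFAULT]
    #   reclaim_age = 1209600
    #   [object-replicator]
    #   reclaim_age = 1800
    #   [object-auditor]
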
    def test_sleeper(self):
        with mock.patch(
                'time.sleep', mock.MagicMock()) as mock_sleep:
            my_auditor = auditor.ObjectAuditor(self.conf)
            my_auditor._sleep()
            mock_sleep.assert_called_with(30)

            my_conf = dict(interval=2)
            my_conf.update(self.conf)
            my_auditor = auditor.ObjectAuditor(my_conf)
            my_auditor._sleep()
            mock_sleep.assert_called_with(2)

            my_auditor = auditor.ObjectAuditor(self.conf)
            my_auditor.interval = 2
            my_auditor._sleep()
            mock_sleep.assert_called_with(2)

    def test_run_parallel_audit(self):

        class StopForever(Exception):
            pass

        class Bogus(Exception):
            pass

        loop_error = Bogus('exception')

        class LetMeOut(BaseException):
            pass

        class ObjectAuditorMock(object):
            check_args = ()
            check_kwargs = {}
            check_device_dir = None
            fork_called = 0
            master = 0
            wait_called = 0

            def mock_run(self, *args, **kwargs):
                self.check_args = args
                self.check_kwargs = kwargs
                if 'zero_byte_fps' in kwargs:
                    self.check_device_dir = kwargs.get('device_dirs')

            def mock_sleep_stop(self):
                raise StopForever('stop')

            def mock_sleep_continue(self):
                return

            def mock_audit_loop_error(self, parent, zbo_fps,
                                      override_devices=None, **kwargs):
                raise loop_error

            def mock_fork(self):
                self.fork_called += 1
                if self.master:
                    return self.fork_called
                else:
                    return 0

            def mock_wait(self):
                self.wait_called += 1
                return (self.wait_called, 0)
            def mock_signal(self, sig, action):
                pass

            def mock_exit(self):
                pass

        for i in string.ascii_letters[2:26]:
            mkdirs(os.path.join(self.devices, 'sd%s' % i))

        my_auditor = auditor.ObjectAuditor(dict(devices=self.devices,
                                                mount_check='false',
                                                zero_byte_files_per_second=89,
                                                concurrency=1))

        mocker = ObjectAuditorMock()
        my_auditor.logger.exception = mock.MagicMock()
        real_audit_loop = my_auditor.audit_loop
        my_auditor.audit_loop = mocker.mock_audit_loop_error
        my_auditor.run_audit = mocker.mock_run
        was_fork = os.fork
        was_wait = os.wait
        was_signal = signal.signal
        was_exit = sys.exit
        os.fork = mocker.mock_fork
        os.wait = mocker.mock_wait
        signal.signal = mocker.mock_signal
        sys.exit = mocker.mock_exit
        try:
            my_auditor._sleep = mocker.mock_sleep_stop
            my_auditor.run_once(zero_byte_fps=50)
            my_auditor.logger.exception.assert_called_once_with(
                'ERROR auditing: %s', loop_error)
            my_auditor.logger.exception.reset_mock()
            self.assertRaises(StopForever, my_auditor.run_forever)
            my_auditor.logger.exception.assert_called_once_with(
                'ERROR auditing: %s', loop_error)
            my_auditor.audit_loop = real_audit_loop

            # sleep between ZBF scanner forks
            self.assertRaises(StopForever, my_auditor.fork_child, True, True)

            mocker.fork_called = 0
            signal.signal = was_signal
            sys.exit = was_exit
            self.assertRaises(StopForever,
                              my_auditor.run_forever, zero_byte_fps=50)
            self.assertEqual(mocker.check_kwargs['zero_byte_fps'], 50)
            self.assertEqual(mocker.fork_called, 0)

            self.assertRaises(SystemExit, my_auditor.run_once)
            self.assertEqual(mocker.fork_called, 1)
            self.assertEqual(mocker.check_kwargs['zero_byte_fps'], 89)
            self.assertEqual(mocker.check_device_dir, [])
            self.assertEqual(mocker.check_args, ())

            device_list = ['sd%s' % i for i in string.ascii_letters[2:10]]
            device_string = ','.join(device_list)
            device_string_bogus = device_string + ',bogus'

            mocker.fork_called = 0
            self.assertRaises(SystemExit, my_auditor.run_once,
                              devices=device_string_bogus)
            self.assertEqual(mocker.fork_called, 1)
            self.assertEqual(mocker.check_kwargs['zero_byte_fps'], 89)
            self.assertEqual(sorted(mocker.check_device_dir), device_list)

            mocker.master = 1

            mocker.fork_called = 0
            self.assertRaises(StopForever, my_auditor.run_forever)
            # Fork or Wait are called greater than or equal to 2 times in the
            # main process. 2 times if zbf run once and 3 times if zbf run
            # again
            self.assertGreaterEqual(mocker.fork_called, 2)
            self.assertGreaterEqual(mocker.wait_called, 2)

            my_auditor._sleep = mocker.mock_sleep_continue
            my_auditor.audit_loop = works_only_once(my_auditor.audit_loop,
                                                    LetMeOut())

            my_auditor.concurrency = 2
            mocker.fork_called = 0
            mocker.wait_called = 0
            self.assertRaises(LetMeOut, my_auditor.run_forever)
            # Fork or Wait are called greater than or equal to
            # no. of devices + (no. of devices)/2 + 1 times in main process
            no_devices = len(os.listdir(self.devices))
            self.assertGreaterEqual(mocker.fork_called, no_devices +
                                    no_devices / 2 + 1)
            self.assertGreaterEqual(mocker.wait_called, no_devices +
                                    no_devices / 2 + 1)

        finally:
            os.fork = was_fork
            os.wait = was_wait

    def test_run_audit_once(self):
        my_auditor = auditor.ObjectAuditor(dict(devices=self.devices,
                                                mount_check='false',
                                                zero_byte_files_per_second=89,
                                                concurrency=1))

        forked_pids = []
        next_zbf_pid = [2]
        next_normal_pid = [1001]
        outstanding_pids = [[]]

        def fake_fork_child(**kwargs):
            if len(forked_pids) > 10:
                # something's gone horribly wrong
                raise BaseException("forking too much")

            # ZBF pids are all smaller than the normal-audit pids; this way
            # we can return them first.
            #
            # Also, ZBF pids are even and normal-audit pids are odd; this is
            # so humans seeing this test fail can better tell what's happening.
            if kwargs.get('zero_byte_fps'):
                pid = next_zbf_pid[0]
                next_zbf_pid[0] += 2
            else:
                pid = next_normal_pid[0]
                next_normal_pid[0] += 2
            outstanding_pids[0].append(pid)
            forked_pids.append(pid)
            return pid

        def fake_os_wait():
            # Smallest pid first; that's ZBF if we have one, else normal
            outstanding_pids[0].sort()
            pid = outstanding_pids[0].pop(0)
            return (pid, 0)  # (pid, status)

        with mock.patch("swift.obj.auditor.os.wait", fake_os_wait), \
                mock.patch.object(my_auditor, 'fork_child', fake_fork_child), \
                mock.patch.object(my_auditor, '_sleep', lambda *a: None):
            my_auditor.run_once()

        self.assertEqual(sorted(forked_pids), [2, 1001])

    def test_run_audit_once_zbfps(self):
        my_auditor = auditor.ObjectAuditor(dict(devices=self.devices,
                                                mount_check='false',
                                                zero_byte_files_per_second=89,
                                                concurrency=1,
                                                recon_cache_path=self.testdir))

        with mock.patch.object(my_auditor, '_sleep', lambda *a: None):
            my_auditor.run_once(zero_byte_fps=50)

        with open(self.rcache) as fd:
            # there are no objects to audit so expect no stats; this assertion
            # may change if https://bugs.launchpad.net/swift/+bug/1704858 is
            # fixed
            self.assertEqual({}, json.load(fd))

        # check recon cache stays clean after a second run
        with mock.patch.object(my_auditor, '_sleep', lambda *a: None):
            my_auditor.run_once(zero_byte_fps=50)

        with open(self.rcache) as fd:
            self.assertEqual({}, json.load(fd))

        ts = Timestamp(time.time())
        with self.disk_file.create() as writer:
            metadata = {
                'ETag': md5(b'', usedforsecurity=False).hexdigest(),
                'X-Timestamp': ts.normal,
                'Content-Length': str(os.fstat(writer._fd).st_size),
            }
            writer.put(metadata)
            writer.commit(ts)

        # now that there is an object to audit, a third run should record
        # ZBF stats in the recon cache
        with mock.patch.object(my_auditor, '_sleep', lambda *a: None):
            my_auditor.run_once(zero_byte_fps=50)
        with open(self.rcache) as fd:
            self.assertEqual({
                'object_auditor_stats_ZBF': {
                    'audit_time': 0,
                    'bytes_processed': 0,
                    'errors': 0,
                    'passes': 1,
                    'quarantined': 0,
                    'start_time': mock.ANY}},
                json.load(fd))

    def test_run_parallel_audit_once(self):
        my_auditor = auditor.ObjectAuditor(
            dict(devices=self.devices, mount_check='false',
                 zero_byte_files_per_second=89, concurrency=2))

        # ZBF pids are smaller than the normal-audit pids; this way we can
        # return them first from our mocked os.wait().
        #
        # Also, ZBF pids are even and normal-audit pids are odd; this is so
        # humans seeing this test fail can better tell what's happening.
        forked_pids = []
        next_zbf_pid = [2]
        next_normal_pid = [1001]
        outstanding_pids = [[]]

        def fake_fork_child(**kwargs):
            if len(forked_pids) > 10:
                # something's gone horribly wrong; try not to hang the test
                # run because of it
                raise BaseException("forking too much")

            if kwargs.get('zero_byte_fps'):
                pid = next_zbf_pid[0]
                next_zbf_pid[0] += 2
            else:
                pid = next_normal_pid[0]
                next_normal_pid[0] += 2
            outstanding_pids[0].append(pid)
            forked_pids.append(pid)
            return pid

        def fake_os_wait():
            if not outstanding_pids[0]:
                raise BaseException("nobody waiting")

            # ZBF auditor finishes first
            outstanding_pids[0].sort()
            pid = outstanding_pids[0].pop(0)
            return (pid, 0)  # (pid, status)

        # make sure we've got enough devs that the ZBF auditor can finish
        # before all the normal auditors have been started
        mkdirs(os.path.join(self.devices, 'sdc'))
        mkdirs(os.path.join(self.devices, 'sdd'))

        with mock.patch("swift.obj.auditor.os.wait", fake_os_wait), \
                mock.patch.object(my_auditor, 'fork_child', fake_fork_child), \
                mock.patch.object(my_auditor, '_sleep', lambda *a: None):
            my_auditor.run_once()

        self.assertEqual(sorted(forked_pids), [2, 1001, 1003, 1005, 1007])

    def test_run_parallel_audit_once_failed_fork(self):
        my_auditor = auditor.ObjectAuditor(
            dict(devices=self.devices, mount_check='false',
                 concurrency=2))

        start_pid = [1001]
        outstanding_pids = []
        failed_once = [False]

        def failing_fork(**kwargs):
            # this fork fails only on the 2nd call
            # it's enough to cause the growth of orphaned child processes
            if len(outstanding_pids) > 0 and not failed_once[0]:
                failed_once[0] = True
                raise OSError
            start_pid[0] += 2
            pid = start_pid[0]
            outstanding_pids.append(pid)
            return pid

        def fake_wait():
            return outstanding_pids.pop(0), 0

        with mock.patch("swift.obj.auditor.os.wait", fake_wait), \
                mock.patch.object(my_auditor, 'fork_child', failing_fork), \
                mock.patch.object(my_auditor, '_sleep', lambda *a: None):
            for i in range(3):
                my_auditor.run_once()

        self.assertEqual(len(outstanding_pids), 0,
                         "orphaned children left {0}, expected 0."
                         .format(outstanding_pids))


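# Illustrative sketch only -- not part of swift and not used by these tests:
# an audit watcher is constructed from (conf, logger) and the auditor calls
# its start(), see_object() and end() hooks with keyword arguments, which is
# the shape exercised by TestAuditWatchers below.
class ExampleSketchWatcher(object):

    def __init__(self, conf, logger, **kwargs):
        self.logger = logger
        self.seen = 0

    def start(self, audit_type, **kwargs):
        # called once at the beginning of an audit pass
        self.seen = 0

    def see_object(self, object_metadata, data_file_path, **kwargs):
        # called for each object the auditor reads
        self.seen += 1

    def end(self, **kwargs):
        # called once at the end of an audit pass
        self.logger.info('example watcher saw %d object(s)', self.seen)

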
@patch_policies(_mocked_policies)
|
|
|
|
class TestAuditWatchers(TestAuditorBase):
|
|
|
|
|
|
|
|
def setUp(self):
|
|
|
|
super(TestAuditWatchers, self).setUp()
|
|
|
|
|
|
|
|
timestamp = Timestamp(time.time())
|
|
|
|
|
2021-04-27 22:15:56 -05:00
|
|
|
disk_file = self.df_mgr.get_diskfile(
|
|
|
|
'sda', '0', 'a', 'c', 'o0', policy=POLICIES.legacy)
|
Let developers/operators add watchers to object audit
Swift operators may find it useful to operate on each object in their
cluster in some way. This commit provides them a way to hook into the
object auditor with a simple, clearly-defined boundary so that they
can iterate over their objects without additional disk IO.
For example, a cluster operator may want to ensure a semantic
consistency with all SLO segments accounted in their manifests,
or locate objects that aren't in container listings. Now that Swift
has encryption support, this could be used to locate unencrypted
objects. The list goes on.
This commit makes the auditor locate, via entry points, the watchers
named in its config file.
A watcher is a class with at least these four methods:
__init__(self, conf, logger, **kwargs)
start(self, audit_type, **kwargs)
see_object(self, object_metadata, data_file_path, **kwargs)
end(self, **kwargs)
The auditor will call watcher.start(audit_type) at the start of an
audit pass, watcher.see_object(...) for each object audited, and
watcher.end() at the end of an audit pass. All method arguments are
passed as keyword args.
This version of the API is implemented on the context of the
auditor itself, without spawning any additional processes.
If the plugins are not working well -- hang, crash, or leak --
it's easier to debug them when there's no additional complication
of processes that run by themselves.
In addition, we include a reference implementation of plugin for
the watcher API, as a help to plugin writers.
Change-Id: I1be1faec53b2cdfaabf927598f1460e23c206b0a
2015-08-13 17:05:25 -05:00
|
|
|
data = b'0' * 1024
|
|
|
|
etag = md5()
|
2021-04-27 22:15:56 -05:00
|
|
|
with disk_file.create() as writer:
|
Let developers/operators add watchers to object audit
Swift operators may find it useful to operate on each object in their
cluster in some way. This commit provides them a way to hook into the
object auditor with a simple, clearly-defined boundary so that they
can iterate over their objects without additional disk IO.
For example, a cluster operator may want to ensure a semantic
consistency with all SLO segments accounted in their manifests,
or locate objects that aren't in container listings. Now that Swift
has encryption support, this could be used to locate unencrypted
objects. The list goes on.
This commit makes the auditor locate, via entry points, the watchers
named in its config file.
A watcher is a class with at least these four methods:
__init__(self, conf, logger, **kwargs)
start(self, audit_type, **kwargs)
see_object(self, object_metadata, data_file_path, **kwargs)
end(self, **kwargs)
The auditor will call watcher.start(audit_type) at the start of an
audit pass, watcher.see_object(...) for each object audited, and
watcher.end() at the end of an audit pass. All method arguments are
passed as keyword args.
This version of the API is implemented on the context of the
auditor itself, without spawning any additional processes.
If the plugins are not working well -- hang, crash, or leak --
it's easier to debug them when there's no additional complication
of processes that run by themselves.
In addition, we include a reference implementation of plugin for
the watcher API, as a help to plugin writers.
Change-Id: I1be1faec53b2cdfaabf927598f1460e23c206b0a
2015-08-13 17:05:25 -05:00
|
|
|
writer.write(data)
|
|
|
|
etag.update(data)
|
|
|
|
metadata = {
|
2021-04-27 22:15:56 -05:00
|
|
|
'ETag': etag.hexdigest(),
|
Let developers/operators add watchers to object audit
Swift operators may find it useful to operate on each object in their
cluster in some way. This commit provides them a way to hook into the
object auditor with a simple, clearly-defined boundary so that they
can iterate over their objects without additional disk IO.
For example, a cluster operator may want to ensure a semantic
consistency with all SLO segments accounted in their manifests,
or locate objects that aren't in container listings. Now that Swift
has encryption support, this could be used to locate unencrypted
objects. The list goes on.
This commit makes the auditor locate, via entry points, the watchers
named in its config file.
A watcher is a class with at least these four methods:
__init__(self, conf, logger, **kwargs)
start(self, audit_type, **kwargs)
see_object(self, object_metadata, data_file_path, **kwargs)
end(self, **kwargs)
The auditor will call watcher.start(audit_type) at the start of an
audit pass, watcher.see_object(...) for each object audited, and
watcher.end() at the end of an audit pass. All method arguments are
passed as keyword args.
This version of the API is implemented on the context of the
auditor itself, without spawning any additional processes.
If the plugins are not working well -- hang, crash, or leak --
it's easier to debug them when there's no additional complication
of processes that run by themselves.
In addition, we include a reference implementation of plugin for
the watcher API, as a help to plugin writers.
Change-Id: I1be1faec53b2cdfaabf927598f1460e23c206b0a
2015-08-13 17:05:25 -05:00
|
|
|
                'X-Timestamp': timestamp.internal,
                'Content-Length': str(len(data)),
                'X-Object-Meta-Flavor': 'banana',
            }
            writer.put(metadata)
            # The commit does nothing; we keep it for code copy-paste with EC.
            writer.commit(timestamp)

        disk_file = self.df_mgr.get_diskfile(
            'sda', '0', 'a', 'c', 'o1', policy=POLICIES.legacy)
        data = b'1' * 2048
        etag = md5()
        with disk_file.create() as writer:
            writer.write(data)
            etag.update(data)
            metadata = {
                'ETag': etag.hexdigest(),
                'X-Timestamp': timestamp.internal,
                'Content-Length': str(len(data)),
                'X-Object-Meta-Flavor': 'orange',
            }
            writer.put(metadata)
            writer.commit(timestamp)

        frag_0 = self.disk_file_ec.policy.pyeclib_driver.encode(
            b'x' * self.disk_file_ec.policy.ec_segment_size)[0]
        etag = md5()
        with self.disk_file_ec.create() as writer:
            writer.write(frag_0)
            etag.update(frag_0)
            metadata = {
                'ETag': etag.hexdigest(),
                'X-Timestamp': timestamp.internal,
                'Content-Length': str(len(frag_0)),
                'X-Object-Meta-Flavor': 'peach',
                'X-Object-Sysmeta-Ec-Frag-Index': '1',
                'X-Object-Sysmeta-Ec-Etag': 'fake-etag',
            }
            writer.put(metadata)
            writer.commit(timestamp)
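
        # At this point setUp has put three objects on disk: /a/c/o0
        # ('banana'), /a/c/o1 ('orange'), and an EC fragment of /a/c_ec/o
        # ('peach').  The watcher tests below rely on exactly these three.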
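
    # For orientation: in a real deployment the watcher class is not mocked
    # in the way these tests do it; the auditor discovers it through a
    # setuptools entry point in the 'swift.object_audit_watcher' group and
    # enables it via the object server config.  A rough, illustrative
    # sketch (the names my_watcher and my_package are hypothetical):
    #
    #   # setup.cfg of the plugin package
    #   [options.entry_points]
    #   swift.object_audit_watcher =
    #       my_watcher = my_package.watchers:MyWatcher
    #
    #   # object server config
    #   [object-auditor]
    #   watchers = my_watcher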

    def test_watchers(self):

        calls = []

        class TestWatcher(object):
            def __init__(self, conf, logger):
                self._started = False
                self._ended = False
                calls.append(["__init__", conf, logger])

                # Make sure the logger is capable of quacking like a logger
                logger.debug("getting started")

            def start(self, audit_type, **other_kwargs):
                if self._started:
                    raise Exception("don't call it twice")
                self._started = True
                calls.append(['start', audit_type])

            def see_object(self, object_metadata,
                           data_file_path, **other_kwargs):
                calls.append(['see_object', object_metadata,
                              data_file_path, other_kwargs])

            def end(self, **other_kwargs):
                if self._ended:
                    raise Exception("don't call it twice")
                self._ended = True
                calls.append(['end'])
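
        # The auditor is expected to drive the watcher through exactly one
        # __init__, one start(), one see_object() per object from setUp
        # (three of them), and one end(), for six recorded calls in total.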
        conf = self.conf.copy()
        conf['watchers'] = 'test_watcher1'
        conf['__file__'] = '/etc/swift/swift.conf'
        ret_config = {'swift#dark_data': {'action': 'log'}}
        with mock.patch('swift.obj.auditor.parse_prefixed_conf',
                        return_value=ret_config), \
                mock.patch('swift.obj.auditor.load_pkg_resource',
                           side_effect=[TestWatcher]) as mock_load, \
                mock.patch('swift.obj.auditor.get_logger',
                           lambda *a, **kw: self.logger):
            my_auditor = auditor.ObjectAuditor(conf)

        self.assertEqual(mock_load.mock_calls, [
            mock.call('swift.object_audit_watcher', 'test_watcher1'),
        ])

        my_auditor.run_audit(mode='once', zero_byte_fps=float("inf"))

        self.assertEqual(len(calls), 6)

        self.assertEqual(calls[0], ["__init__", conf, mock.ANY])
        self.assertIsInstance(calls[0][2], PrefixLoggerAdapter)
        self.assertIs(calls[0][2].logger, self.logger)

        self.assertEqual(calls[1], ["start", "ZBF"])

        self.assertEqual(calls[2][0], "see_object")
        self.assertEqual(calls[3][0], "see_object")

        # The order in which the auditor finds things on the filesystem is
        # irrelevant; what matters is that it finds all the things.
        calls[2:5] = sorted(calls[2:5], key=lambda item: item[1]['name'])

        self._assertDictContainsSubset({'name': '/a/c/o0',
                                        'X-Object-Meta-Flavor': 'banana'},
                                       calls[2][1])
        self.assertIn('node/sda/objects/0/', calls[2][2])  # data_file_path
        self.assertTrue(calls[2][2].endswith('.data'))  # data_file_path
        self.assertEqual({}, calls[2][3])

        self._assertDictContainsSubset({'name': '/a/c/o1',
                                        'X-Object-Meta-Flavor': 'orange'},
                                       calls[3][1])
        self.assertIn('node/sda/objects/0/', calls[3][2])  # data_file_path
        self.assertTrue(calls[3][2].endswith('.data'))  # data_file_path
        self.assertEqual({}, calls[3][3])

        self._assertDictContainsSubset({'name': '/a/c_ec/o',
                                        'X-Object-Meta-Flavor': 'peach'},
                                       calls[4][1])
        self.assertIn('node/sda/objects-2/0/', calls[4][2])  # data_file_path
        self.assertTrue(calls[4][2].endswith('.data'))  # data_file_path
        self.assertEqual({}, calls[4][3])

        self.assertEqual(calls[5], ["end"])

        log_lines = self.logger.get_lines_for_level('debug')
        self.assertIn(
            "[audit-watcher test_watcher1] getting started",
            log_lines)
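
    # The remaining tests exercise the built-in dark-data watcher, which
    # flags objects that no container server lists.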

    def test_builtin_watchers(self):

        # Yep, back-channel signaling in tests: the fake container server
        # below returns an empty listing only for the container named by
        # this sentinel, so its object is the one that can be reported
        # as dark.
        sentinel = 'DARK'

        timestamp = Timestamp(time.time())

        disk_file = self.df_mgr.get_diskfile(
            'sda', '0', 'a', sentinel, 'o2', policy=POLICIES.legacy)
        data = b'2' * 1024
        etag = md5()
        with disk_file.create() as writer:
            writer.write(data)
            etag.update(data)
            metadata = {
                'ETag': etag.hexdigest(),
                'X-Timestamp': timestamp.internal,
                'Content-Length': str(len(data)),
                'X-Object-Meta-Flavor': 'mango',
            }
            writer.put(metadata)
            writer.commit(timestamp)

        def fake_direct_get_container(node, part, account, container,
                                      prefix=None, limit=None):
            self.assertEqual(part, 1)
            self.assertEqual(limit, 1)

            if container == sentinel:
                return {}, []

            # The returned entry is not abbreviated, but is full of nonsense.
            entry = {'bytes': 30968411,
                     'hash': '60303f4122966fe5925f045eb52d1129',
                     'name': '%s' % prefix,
                     'content_type': 'video/mp4',
                     'last_modified': '2017-08-15T03:30:57.693210'}
            return {}, [entry]

        conf = self.conf.copy()
        conf['watchers'] = 'test_watcher1'
        conf['__file__'] = '/etc/swift/swift.conf'
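
        # Outside of tests, this per-watcher configuration is not mocked but
        # read from the object server config.  A rough, illustrative sketch
        # (the exact section naming is an assumption here, not something
        # this test asserts):
        #
        #   [object-auditor]
        #   watchers = test_watcher1
        #
        #   [object-auditor:watcher:test_watcher1]
        #   action = log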

        # With the default watcher config, the DARK object is not older
        # than grace_age, so it is not logged.
        ret_config = {'test_watcher1': {'action': 'log'}}
        with mock.patch('swift.obj.auditor.parse_prefixed_conf',
                        return_value=ret_config), \
                mock.patch('swift.obj.auditor.load_pkg_resource',
                           side_effect=[DarkDataWatcher]):
            my_auditor = auditor.ObjectAuditor(conf, logger=self.logger)

        with mock.patch('swift.obj.watchers.dark_data.Ring', FakeRing1), \
                mock.patch("swift.obj.watchers.dark_data.direct_get_container",
                           fake_direct_get_container):
            my_auditor.run_audit(mode='once')

        log_lines = self.logger.get_lines_for_level('info')
        self.assertIn(
            '[audit-watcher test_watcher1] total unknown 0 ok 4 dark 0',
            log_lines)

        self.logger.clear()

        # With grace_age=0, the DARK object is older than grace_age,
        # so it is logged.
        ret_config = {'test_watcher1': {'action': 'log', 'grace_age': '0'}}
        with mock.patch('swift.obj.auditor.parse_prefixed_conf',
                        return_value=ret_config), \
                mock.patch('swift.obj.auditor.load_pkg_resource',
                           side_effect=[DarkDataWatcher]):
            my_auditor = auditor.ObjectAuditor(conf, logger=self.logger)

        with mock.patch('swift.obj.watchers.dark_data.Ring', FakeRing1), \
                mock.patch("swift.obj.watchers.dark_data.direct_get_container",
                           fake_direct_get_container):
            my_auditor.run_audit(mode='once')

        log_lines = self.logger.get_lines_for_level('info')
        self.assertIn(
            '[audit-watcher test_watcher1] total unknown 0 ok 3 dark 1',
            log_lines)

    def test_dark_data_watcher_init(self):
        conf = {}
        with mock.patch('swift.obj.watchers.dark_data.Ring', FakeRing1):
            watcher = DarkDataWatcher(conf, self.logger)
        self.assertEqual(self.logger, watcher.logger)
        self.assertEqual(604800, watcher.grace_age)
        self.assertEqual('log', watcher.dark_data_policy)

        conf = {'grace_age': 360, 'action': 'delete'}
        with mock.patch('swift.obj.watchers.dark_data.Ring', FakeRing1):
            watcher = DarkDataWatcher(conf, self.logger)
        self.assertEqual(self.logger, watcher.logger)
        self.assertEqual(360, watcher.grace_age)
        self.assertEqual('delete', watcher.dark_data_policy)
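
        # An action value the watcher does not recognize falls back to 'log'.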
        conf = {'grace_age': 0, 'action': 'invalid'}
        with mock.patch('swift.obj.watchers.dark_data.Ring', FakeRing1):
            watcher = DarkDataWatcher(conf, self.logger)
        self.assertEqual(self.logger, watcher.logger)
        self.assertEqual(0, watcher.grace_age)
        self.assertEqual('log', watcher.dark_data_policy)

    def test_dark_data_agreement(self):

        # The dark data watcher only sees an object as dark if all container
        # servers in the ring reply without an error and return an empty
        # listing. So, we have the following permutations for an object:
        #
        # Container Servers         Result
        # CS1        CS2
        # Listed     Listed         Good - the baseline result
        # Listed     Error          Good
        # Listed     Not listed     Good
        # Error      Error          Unknown - the baseline failure
        # Not listed Error          Unknown
        # Not listed Not listed     Dark - the only such result!
        #
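        # A compact way to read the table: any listing makes the object Good,
        # otherwise any error makes it Unknown, and only an all-"Not listed"
        # set of replies makes it Dark.  An illustrative sketch of that rule
        # (not the watcher's actual code):
        #
        #   def classify(replies):
        #       if any(r == 'L' for r in replies):
        #           return 'GOOD'
        #       if any(r == 'E' for r in replies):
        #           return 'UNKNOWN'
        #       return 'DARK'
        #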
        scenario = [
            {'cr': ['L', 'L'], 'res': 'G'},
            {'cr': ['L', 'E'], 'res': 'G'},
            {'cr': ['L', 'N'], 'res': 'G'},
            {'cr': ['E', 'E'], 'res': 'U'},
            {'cr': ['N', 'E'], 'res': 'U'},
            {'cr': ['N', 'N'], 'res': 'D'}]

        conf = self.conf.copy()
        conf['watchers'] = 'test_watcher1'
        conf['__file__'] = '/etc/swift/swift.conf'
        ret_config = {'test_watcher1': {'action': 'log', 'grace_age': '0'}}
        with mock.patch('swift.obj.auditor.parse_prefixed_conf',
                        return_value=ret_config), \
                mock.patch('swift.obj.auditor.load_pkg_resource',
                           side_effect=[DarkDataWatcher]):
            my_auditor = auditor.ObjectAuditor(conf, logger=self.logger)

        for cur in scenario:

            def fake_direct_get_container(node, part, account, container,
                                          prefix=None, limit=None):
                self.assertEqual(part, 1)
                self.assertEqual(limit, 1)
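
                # Pick this container server's scripted reply from the
                # current scenario; node ids in the fake ring index
                # cur['cr'] (1-based, hence the -1).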
                reply_type = cur['cr'][int(node['id']) - 1]

                if reply_type == 'E':
                    raise ClientException("Emulated container server error")

                if reply_type == 'N':
                    return {}, []

                entry = {'bytes': 30968411,
                         'hash': '60303f4122966fe5925f045eb52d1129',
                         'name': '%s' % prefix,
                         'content_type': 'video/mp4',
                         'last_modified': '2017-08-15T03:30:57.693210'}
                return {}, [entry]

            self.logger.clear()

            namespace = 'swift.obj.watchers.dark_data.'
            with mock.patch(namespace + 'Ring', FakeRing2), \
                    mock.patch(namespace + 'direct_get_container',
                               fake_direct_get_container):
                my_auditor.run_audit(mode='once')

            # We inherit a common setUp with 3 objects, so 3 everywhere.
            if cur['res'] == 'U':
                unk_exp, ok_exp, dark_exp = 3, 0, 0
            elif cur['res'] == 'G':
                unk_exp, ok_exp, dark_exp = 0, 3, 0
            else:
                unk_exp, ok_exp, dark_exp = 0, 0, 3

            log_lines = self.logger.get_lines_for_level('info')
            for line in log_lines:

                if not line.startswith('[audit-watcher test_watcher1] total'):
                    continue
                words = line.split()
                if not (words[3] == 'unknown' and
                        words[5] == 'ok' and
                        words[7] == 'dark'):
                    self.fail('Syntax error in %r' % (line,))

                try:
                    unk_cnt = int(words[4])
                    ok_cnt = int(words[6])
                    dark_cnt = int(words[8])
                except ValueError:
                    self.fail('Bad value in %r' % (line,))

            if unk_cnt != unk_exp or ok_cnt != ok_exp or dark_cnt != dark_exp:
                fmt = 'Expected unknown %d ok %d dark %d, got %r, for nodes %r'
                msg = fmt % (unk_exp, ok_exp, dark_exp,
                             ' '.join(words[3:]), cur['cr'])
                self.fail(msg=msg)


if __name__ == '__main__':
    unittest.main()
|