swift/test/unit/obj/test_updater.py


# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import cPickle as pickle
import mock
import os
import unittest
import random
import itertools
from contextlib import closing
from gzip import GzipFile
from tempfile import mkdtemp
from shutil import rmtree
from test.unit import FakeLogger
from time import time
from distutils.dir_util import mkpath
from eventlet import spawn, Timeout, listen
from six.moves import range
from swift.obj import updater as object_updater
from swift.obj.diskfile import (ASYNCDIR_BASE, get_async_dir, DiskFileManager,
                                get_tmp_dir)
from swift.common.ring import RingData
from swift.common import utils
from swift.common.utils import hash_path, normalize_timestamp, mkdirs, \
    write_pickle
from swift.common import swob
from test.unit import debug_logger, patch_policies, mocked_http_conn
from swift.common.storage_policy import StoragePolicy, POLICIES
_mocked_policies = [StoragePolicy(0, 'zero', False),
                    StoragePolicy(1, 'one', True)]


@patch_policies(_mocked_policies)
class TestObjectUpdater(unittest.TestCase):
    def setUp(self):
        utils.HASH_PATH_SUFFIX = 'endcap'
        utils.HASH_PATH_PREFIX = ''
        self.testdir = mkdtemp()
        ring_file = os.path.join(self.testdir, 'container.ring.gz')
        with closing(GzipFile(ring_file, 'wb')) as f:
            pickle.dump(
                RingData([[0, 1, 2, 0, 1, 2],
                          [1, 2, 0, 1, 2, 0],
                          [2, 3, 1, 2, 3, 1]],
                         [{'id': 0, 'ip': '127.0.0.1', 'port': 1,
                           'device': 'sda1', 'zone': 0},
                          {'id': 1, 'ip': '127.0.0.1', 'port': 1,
                           'device': 'sda1', 'zone': 2},
                          {'id': 2, 'ip': '127.0.0.1', 'port': 1,
                           'device': 'sda1', 'zone': 4}], 30),
                f)
        self.devices_dir = os.path.join(self.testdir, 'devices')
        os.mkdir(self.devices_dir)
        self.sda1 = os.path.join(self.devices_dir, 'sda1')
        os.mkdir(self.sda1)
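        # each storage policy gets its own tempdir under the device; create
        # them up front the way the object server would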
        for policy in POLICIES:
            os.mkdir(os.path.join(self.sda1, get_tmp_dir(policy)))
        self.logger = debug_logger()

    def tearDown(self):
        rmtree(self.testdir, ignore_errors=1)

    def test_creation(self):
        cu = object_updater.ObjectUpdater({
            'devices': self.devices_dir,
            'mount_check': 'false',
            'swift_dir': self.testdir,
            'interval': '1',
            'concurrency': '2',
            'node_timeout': '5'})
        self.assert_(hasattr(cu, 'logger'))
        self.assert_(cu.logger is not None)
        self.assertEquals(cu.devices, self.devices_dir)
        self.assertEquals(cu.interval, 1)
        self.assertEquals(cu.concurrency, 2)
        self.assertEquals(cu.node_timeout, 5)
        self.assert_(cu.get_container_ring() is not None)

    @mock.patch('os.listdir')
    def test_listdir_with_exception(self, mock_listdir):
        e = OSError('permission_denied')
        mock_listdir.side_effect = e
        # setup updater
        conf = {
            'devices': self.devices_dir,
            'mount_check': 'false',
            'swift_dir': self.testdir,
        }
        daemon = object_updater.ObjectUpdater(conf)
        daemon.logger = FakeLogger()
        paths = daemon._listdir('foo/bar')
        self.assertEqual([], paths)
        log_lines = daemon.logger.get_lines_for_level('error')
        msg = ('ERROR: Unable to access foo/bar: permission_denied')
        self.assertEqual(log_lines[0], msg)

    @mock.patch('os.listdir', return_value=['foo', 'bar'])
    def test_listdir_without_exception(self, mock_listdir):
        # setup updater
        conf = {
            'devices': self.devices_dir,
            'mount_check': 'false',
            'swift_dir': self.testdir,
        }
        daemon = object_updater.ObjectUpdater(conf)
        daemon.logger = FakeLogger()
        path = daemon._listdir('foo/bar/')
        log_lines = daemon.logger.get_lines_for_level('error')
        self.assertEqual(len(log_lines), 0)
        self.assertEqual(path, ['foo', 'bar'])

    def test_object_sweep(self):
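        # sweep an async dir named for the given policy index and verify
        # either that its updates were processed or that the whole dir was
        # skipped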
        def check_with_idx(index, warn, should_skip):
            if int(index) > 0:
                asyncdir = os.path.join(self.sda1,
                                        ASYNCDIR_BASE + "-" + index)
            else:
                asyncdir = os.path.join(self.sda1, ASYNCDIR_BASE)
            prefix_dir = os.path.join(asyncdir, 'abc')
            mkpath(prefix_dir)
            # A non-directory where directory is expected should just be
            # skipped, but should not stop processing of subsequent
            # directories.
            not_dirs = (
                os.path.join(self.sda1, 'not_a_dir'),
                os.path.join(self.sda1,
                             ASYNCDIR_BASE + '-' + 'twentington'),
                os.path.join(self.sda1,
                             ASYNCDIR_BASE + '-' + str(int(index) + 100)))
            for not_dir in not_dirs:
                with open(not_dir, 'w'):
                    pass
            objects = {
                'a': [1089.3, 18.37, 12.83, 1.3],
                'b': [49.4, 49.3, 49.2, 49.1],
                'c': [109984.123],
            }
            expected = set()
            for o, timestamps in objects.items():
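                # timestamps are listed newest first; only the newest
                # async_pending per object should be processed, and the
                # older ones cleaned up along the way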
                ohash = hash_path('account', 'container', o)
                for t in timestamps:
                    o_path = os.path.join(prefix_dir, ohash + '-' +
                                          normalize_timestamp(t))
                    if t == timestamps[0]:
                        expected.add((o_path, int(index)))
                    write_pickle({}, o_path)

            seen = set()

            class MockObjectUpdater(object_updater.ObjectUpdater):
                def process_object_update(self, update_path, device, policy):
                    seen.add((update_path, int(policy)))
                    os.unlink(update_path)

            cu = MockObjectUpdater({
                'devices': self.devices_dir,
                'mount_check': 'false',
                'swift_dir': self.testdir,
                'interval': '1',
                'concurrency': '1',
                'node_timeout': '5'})
            cu.logger = mock_logger = mock.MagicMock()
            cu.object_sweep(self.sda1)
            self.assertEquals(mock_logger.warn.call_count, warn)
            self.assert_(os.path.exists(os.path.join(self.sda1, 'not_a_dir')))
            if should_skip:
                # if we were supposed to skip over the dir, we didn't process
                # anything at all
                self.assertTrue(os.path.exists(prefix_dir))
                self.assertEqual(set(), seen)
            else:
                self.assert_(not os.path.exists(prefix_dir))
                self.assertEqual(expected, seen)

            # test cleanup: the tempdir gets cleaned up between runs, but this
            # way we can be called multiple times in a single test method
            for not_dir in not_dirs:
                os.unlink(not_dir)

        # first check with valid policies
        for pol in POLICIES:
            check_with_idx(str(pol.idx), 0, should_skip=False)

        # now check with a bogus async dir policy and make sure we get
        # a warning indicating that the '99' policy isn't valid
        check_with_idx('99', 1, should_skip=True)
    @mock.patch.object(object_updater, 'ismount')
    def test_run_once_with_disk_unmounted(self, mock_ismount):
        mock_ismount.return_value = False
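        # mount_check is disabled here, so ismount should never be consulted
        # and the "unmounted" disk is swept anyway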
        cu = object_updater.ObjectUpdater({
            'devices': self.devices_dir,
            'mount_check': 'false',
            'swift_dir': self.testdir,
            'interval': '1',
            'concurrency': '1',
            'node_timeout': '15'})
        cu.run_once()
        async_dir = os.path.join(self.sda1, get_async_dir(POLICIES[0]))
        os.mkdir(async_dir)
        cu.run_once()
        self.assert_(os.path.exists(async_dir))
        # mount_check == False means no call to ismount
        self.assertEqual([], mock_ismount.mock_calls)
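        # now with mount_check enabled the unmounted disk must be skipped
        # and counted as an error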
        cu = object_updater.ObjectUpdater({
            'devices': self.devices_dir,
            'mount_check': 'TrUe',
            'swift_dir': self.testdir,
            'interval': '1',
            'concurrency': '1',
            'node_timeout': '15'}, logger=self.logger)
        odd_dir = os.path.join(async_dir, 'not really supposed '
                                          'to be here')
        os.mkdir(odd_dir)
        cu.run_once()
        self.assert_(os.path.exists(async_dir))
        self.assert_(os.path.exists(odd_dir))  # skipped - not mounted!
        # mount_check == True means ismount was checked
        self.assertEqual([
            mock.call(self.sda1),
        ], mock_ismount.mock_calls)
        self.assertEqual(cu.logger.get_increment_counts(), {'errors': 1})

    @mock.patch.object(object_updater, 'ismount')
    def test_run_once(self, mock_ismount):
        mock_ismount.return_value = True
        cu = object_updater.ObjectUpdater({
            'devices': self.devices_dir,
            'mount_check': 'false',
            'swift_dir': self.testdir,
            'interval': '1',
            'concurrency': '1',
            'node_timeout': '15'}, logger=self.logger)
        cu.run_once()
        async_dir = os.path.join(self.sda1, get_async_dir(POLICIES[0]))
        os.mkdir(async_dir)
        cu.run_once()
        self.assert_(os.path.exists(async_dir))
        # mount_check == False means no call to ismount
        self.assertEqual([], mock_ismount.mock_calls)
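        # now with mount_check enabled and the disk reported as mounted, the
        # sweep should run and remove entries that don't belong in the
        # async dir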
        cu = object_updater.ObjectUpdater({
            'devices': self.devices_dir,
            'mount_check': 'TrUe',
            'swift_dir': self.testdir,
            'interval': '1',
            'concurrency': '1',
            'node_timeout': '15'}, logger=self.logger)
        odd_dir = os.path.join(async_dir, 'not really supposed '
                                          'to be here')
        os.mkdir(odd_dir)
        cu.run_once()
        self.assert_(os.path.exists(async_dir))
        self.assert_(not os.path.exists(odd_dir))
        # mount_check == True means ismount was checked
        self.assertEqual([
            mock.call(self.sda1),
        ], mock_ismount.mock_calls)
        ohash = hash_path('a', 'c', 'o')
        odir = os.path.join(async_dir, ohash[-3:])
        mkdirs(odir)
        older_op_path = os.path.join(
            odir,
            '%s-%s' % (ohash, normalize_timestamp(time() - 1)))
        op_path = os.path.join(
            odir,
            '%s-%s' % (ohash, normalize_timestamp(time())))
        for path in (op_path, older_op_path):
            with open(path, 'wb') as async_pending:
pickle.dump({'op': 'PUT', 'account': 'a',
'container': 'c',
'obj': 'o', 'headers': {
'X-Container-Timestamp':
normalize_timestamp(0)}},
async_pending)
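        # nothing is listening on the container port yet, so this pass
        # fails, but the older duplicate update is still reaped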
cu.run_once()
        self.assertFalse(os.path.exists(older_op_path))
        self.assertTrue(os.path.exists(op_path))
self.assertEqual(cu.logger.get_increment_counts(),
{'failures': 1, 'unlinks': 1})
self.assertEqual(None,
pickle.load(open(op_path)).get('successes'))
bindsock = listen(('127.0.0.1', 0))
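        # fake container server: reply with a canned status code, then
        # verify the request line and collect the headers sent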
def accepter(sock, return_code):
try:
with Timeout(3):
inc = sock.makefile('rb')
out = sock.makefile('wb')
out.write('HTTP/1.1 %d OK\r\nContent-Length: 0\r\n\r\n' %
return_code)
out.flush()
                    self.assertEqual(inc.readline(),
                                     'PUT /sda1/0/a/c/o HTTP/1.1\r\n')
headers = swob.HeaderKeyDict()
line = inc.readline()
while line and line != '\r\n':
headers[line.split(':')[0]] = \
line.split(':')[1].strip()
line = inc.readline()
self.assertTrue('x-container-timestamp' in headers)
self.assertTrue('X-Backend-Storage-Policy-Index' in
headers)
except BaseException as err:
2010-07-12 17:03:45 -05:00
return err
return None
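        # accept one connection per canned status code and hand each off
        # to an accepter greenthread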
def accept(return_codes):
codes = iter(return_codes)
try:
events = []
for x in range(len(return_codes)):
with Timeout(3):
sock, addr = bindsock.accept()
events.append(
spawn(accepter, sock, next(codes)))
for event in events:
err = event.wait()
if err:
raise err
except BaseException as err:
return err
return None
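        # first pass: one replica accepts, two fail; the async_pending
        # should survive with node 0 recorded as a success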
event = spawn(accept, [201, 500, 500])
for dev in cu.get_container_ring().devs:
if dev is not None:
dev['port'] = bindsock.getsockname()[1]
cu.logger._clear()
cu.run_once()
err = event.wait()
if err:
raise err
        self.assertTrue(os.path.exists(op_path))
self.assertEqual(cu.logger.get_increment_counts(),
{'failures': 1})
self.assertEqual([0],
pickle.load(open(op_path)).get('successes'))
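        # retry the two unfinished nodes; the 404 counts as done (no
        # container left to update), leaving a single failure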
event = spawn(accept, [404, 500])
cu.logger._clear()
cu.run_once()
err = event.wait()
if err:
raise err
        self.assertTrue(os.path.exists(op_path))
self.assertEqual(cu.logger.get_increment_counts(),
{'failures': 1})
self.assertEqual([0, 1],
pickle.load(open(op_path)).get('successes'))
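        # the last node accepts, so the async_pending is finally unlinked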
event = spawn(accept, [201])
cu.logger._clear()
cu.run_once()
err = event.wait()
if err:
raise err
        self.assertFalse(os.path.exists(op_path))
self.assertEqual(cu.logger.get_increment_counts(),
{'unlinks': 1, 'successes': 1})

    def test_obj_put_legacy_updates(self):
ts = (normalize_timestamp(t) for t in
itertools.count(int(time())))
policy = POLICIES.get_by_index(0)
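        # legacy async_pendings predate storage policies and live in
        # policy 0's unsuffixed async dir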
# setup updater
conf = {
'devices': self.devices_dir,
'mount_check': 'false',
'swift_dir': self.testdir,
}
async_dir = os.path.join(self.sda1, get_async_dir(policy))
os.mkdir(async_dir)
account, container, obj = 'a', 'c', 'o'
# write an async
for op in ('PUT', 'DELETE'):
self.logger._clear()
daemon = object_updater.ObjectUpdater(conf, logger=self.logger)
dfmanager = DiskFileManager(conf, daemon.logger)
# don't include storage-policy-index in headers_out pickle
headers_out = swob.HeaderKeyDict({
'x-size': 0,
'x-content-type': 'text/plain',
'x-etag': 'd41d8cd98f00b204e9800998ecf8427e',
'x-timestamp': next(ts),
})
data = {'op': op, 'account': account, 'container': container,
'obj': obj, 'headers': headers_out}
dfmanager.pickle_async_update(self.sda1, account, container, obj,
data, next(ts), policy)
            request_log = []

            def capture(*args, **kwargs):
request_log.append((args, kwargs))
# run once
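            # one request per container replica is expected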
fake_status_codes = [200, 200, 200]
with mocked_http_conn(*fake_status_codes, give_connect=capture):
daemon.run_once()
self.assertEqual(len(fake_status_codes), len(request_log))
for request_args, request_kwargs in request_log:
ip, part, method, path, headers, qs, ssl = request_args
self.assertEqual(method, op)
self.assertEqual(headers['X-Backend-Storage-Policy-Index'],
str(int(policy)))
self.assertEqual(daemon.logger.get_increment_counts(),
{'successes': 1, 'unlinks': 1,
'async_pendings': 1})

    def test_obj_put_async_updates(self):
ts = (normalize_timestamp(t) for t in
itertools.count(int(time())))
policy = random.choice(list(POLICIES))
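        # any policy will do; its index is carried in the pickled
        # headers and must be sent back out on the wire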
# setup updater
conf = {
'devices': self.devices_dir,
'mount_check': 'false',
'swift_dir': self.testdir,
}
daemon = object_updater.ObjectUpdater(conf, logger=self.logger)
async_dir = os.path.join(self.sda1, get_async_dir(policy))
os.mkdir(async_dir)
# write an async
dfmanager = DiskFileManager(conf, daemon.logger)
account, container, obj = 'a', 'c', 'o'
op = 'PUT'
headers_out = swob.HeaderKeyDict({
'x-size': 0,
'x-content-type': 'text/plain',
'x-etag': 'd41d8cd98f00b204e9800998ecf8427e',
'x-timestamp': next(ts),
'X-Backend-Storage-Policy-Index': int(policy),
})
data = {'op': op, 'account': account, 'container': container,
'obj': obj, 'headers': headers_out}
dfmanager.pickle_async_update(self.sda1, account, container, obj,
data, next(ts), policy)
        request_log = []

        def capture(*args, **kwargs):
request_log.append((args, kwargs))
# run once
fake_status_codes = [
200, # object update success
200, # object update success
            200, # object update success
]
with mocked_http_conn(*fake_status_codes, give_connect=capture):
daemon.run_once()
self.assertEqual(len(fake_status_codes), len(request_log))
for request_args, request_kwargs in request_log:
ip, part, method, path, headers, qs, ssl = request_args
self.assertEqual(method, 'PUT')
self.assertEqual(headers['X-Backend-Storage-Policy-Index'],
str(int(policy)))
self.assertEqual(daemon.logger.get_increment_counts(),
{'successes': 1, 'unlinks': 1, 'async_pendings': 1})


if __name__ == '__main__':
unittest.main()