Decouple OpenStackCloud from Connection
Revert the openstacksdk subclassing from shade. The idea was to reduce
the workload, but making sure that the Cloud abstraction in openstacksdk
doesn't break shade's contract while we update things is a ton of work to
meet a contract that's not really valuable to people. Instead, we'll put
shade on life support and only accept bugfix patches.

Revert "Make OpenStackCloud a subclass of Connection"
This reverts commit ab3f400064.

Revert "Use openstack.config directly for config"
This reverts commit 2b48637b67.

Revert "Remove the task manager"
This reverts commit 28e95889a0.

Change-Id: I3f5b5fb26af2f6c0bbaade24a04c3d1f274c8cce
parent 6e733e77d5
commit 3b2cad5d31
README.rst (10 lines changed)
@@ -1,6 +1,14 @@
 Introduction
 ============
 
+.. warning::
+
+  shade has been superseded by `openstacksdk`_ and no longer takes new
+  features. The existing code will continue to be maintained indefinitely
+  for bugfixes as necessary, but improvements will be deferred to
+  `openstacksdk`_. Please update your applications to use `openstacksdk`_
+  directly.
+
 shade is a simple client library for interacting with OpenStack clouds. The
 key word here is *simple*. Clouds can do many many many things - but there are
 probably only about 10 of them that most people care about with any
@@ -78,3 +86,5 @@ Links
 * `PyPI <https://pypi.org/project/shade/>`_
 * `Mailing list <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra>`_
 * `Release notes <https://docs.openstack.org/releasenotes/shade>`_
+
+.. _openstacksdk: https://docs.openstack.org/openstacksdk/latest/user/
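The migration the new warning asks for is typically small; a minimal sketch, assuming a cloud named `mycloud` in `clouds.yaml` (the name is illustrative, not from this diff):

```python
import openstack

# openstacksdk equivalent of shade.openstack_cloud()
conn = openstack.connect(cloud='mycloud')
for server in conn.list_servers():
    print(server['name'])
```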
@@ -67,13 +67,13 @@ Returned Resources
 ==================
 
 Complex objects returned to the caller must be a `munch.Munch` type. The
-`openstack._adapter.Adapter` class makes resources into `munch.Munch`.
+`shade._adapter.Adapter` class makes resources into `munch.Munch`.
 
 All objects should be normalized. It is shade's purpose in life to make
 OpenStack consistent for end users, and this means not trusting the clouds
 to return consistent objects. There should be a normalize function in
-`openstack/cloud/_normalize.py` that is applied to objects before returning
-them to the user. See :doc:`../user/model` for further details on object model
+`shade/_normalize.py` that is applied to objects before returning them to
+the user. See :doc:`../user/model` for further details on object model
 requirements.
 
 Fields should not be in the normalization contract if we cannot commit to
@@ -39,6 +39,13 @@ Most of the logging is set up to log to the root `shade` logger. There are
 additional sub-loggers that are used at times, primarily so that a user can
 decide to turn on or off a specific type of logging. They are listed below.
 
+shade.task_manager
+  `shade` uses a Task Manager to perform remote calls. The `shade.task_manager`
+  logger emits messages at the start and end of each Task announcing what
+  it is going to run and then what it ran and how long it took. Logging
+  `shade.task_manager` is a good way to get a trace of external actions shade
+  is taking without full `HTTP Tracing`_.
+
 shade.request_ids
   The `shade.request_ids` logger emits a log line at the end of each HTTP
   interaction with the OpenStack Request ID associated with the interaction.
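A short sketch of turning the documented sub-logger on with the standard library (not part of the diff):

```python
import logging

logging.basicConfig(level=logging.INFO)
# Trace Task start/end timings without enabling full HTTP tracing:
logging.getLogger('shade.task_manager').setLevel(logging.DEBUG)
```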
@@ -1,6 +0,0 @@
----
-upgrade:
-  - |
-    The ``manager`` parameter is no longer meaningful. This should have no
-    impact as the only known consumer of the feature is nodepool which
-    no longer uses shade.
@@ -16,7 +16,7 @@ import logging
 import warnings
 
 import keystoneauth1.exceptions
-from openstack.config import loader
+import os_client_config
 import pbr.version
 import requestsexceptions
 
@@ -36,7 +36,11 @@ if requestsexceptions.SubjectAltNameWarning:
 
 def _get_openstack_config(app_name=None, app_version=None):
     # Protect against older versions of os-client-config that don't expose this
-    return loader.OpenStackConfig(app_name=app_name, app_version=app_version)
+    try:
+        return os_client_config.OpenStackConfig(
+            app_name=app_name, app_version=app_version)
+    except Exception:
+        return os_client_config.OpenStackConfig()
 
 
 def simple_logging(debug=False, http_debug=False):
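For context, this restored os-client-config path is what the top-level factory ends up exercising; a minimal sketch (cloud name is illustrative):

```python
import shade

# Reads clouds.yaml via os_client_config under the hood.
cloud = shade.openstack_cloud(cloud='mycloud')
print(cloud.get_region())
```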
shade/_adapter.py (new file, 164 lines)
@@ -0,0 +1,164 @@
# Copyright (c) 2016 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

''' Wrapper around keystoneauth Session to wrap calls in TaskManager '''

import functools

from keystoneauth1 import adapter
from six.moves import urllib

from shade import _log
from shade import exc
from shade import task_manager


def extract_name(url):
    '''Produce a key name to use in logging/metrics from the URL path.

    We want to be able to log/metric sane general things, so we pull
    the url apart to generate names. The function returns a list because
    there are two different ways in which the elements want to be combined
    below (one for logging, one for statsd)

    Some examples are likely useful:

    /servers -> ['servers']
    /servers/{id} -> ['servers']
    /servers/{id}/os-security-groups -> ['servers', 'os-security-groups']
    /v2.0/networks.json -> ['networks']
    '''

    url_path = urllib.parse.urlparse(url).path.strip()
    # Remove / from the beginning to keep the list indexes of interesting
    # things consistent
    if url_path.startswith('/'):
        url_path = url_path[1:]

    # Special case for neutron, which puts .json on the end of urls
    if url_path.endswith('.json'):
        url_path = url_path[:-len('.json')]

    url_parts = url_path.split('/')
    if url_parts[-1] == 'detail':
        # Special case detail calls
        # GET /servers/detail
        # returns ['servers', 'detail']
        name_parts = url_parts[-2:]
    else:
        # Strip leading version piece so that
        # GET /v2.0/networks
        # returns ['networks']
        if url_parts[0] in ('v1', 'v2', 'v2.0'):
            url_parts = url_parts[1:]
        name_parts = []
        # Pull out every other URL portion - so that
        # GET /servers/{id}/os-security-groups
        # returns ['servers', 'os-security-groups']
        for idx in range(0, len(url_parts)):
            if not idx % 2 and url_parts[idx]:
                name_parts.append(url_parts[idx])

    # Keystone Token fetching is a special case, so we name it "tokens"
    if url_path.endswith('tokens'):
        name_parts = ['tokens']

    # Getting the root of an endpoint is doing version discovery
    if not name_parts:
        name_parts = ['discovery']

    # Strip out anything that's empty or None
    return [part for part in name_parts if part]


class ShadeAdapter(adapter.Adapter):

    def __init__(self, shade_logger, manager, *args, **kwargs):
        super(ShadeAdapter, self).__init__(*args, **kwargs)
        self.shade_logger = shade_logger
        self.manager = manager
        self.request_log = _log.setup_logging('shade.request_ids')

    def _log_request_id(self, response, obj=None):
        # Log the request id and object id in a specific logger. This way
        # someone can turn it on if they're interested in this kind of tracing.
        request_id = response.headers.get('x-openstack-request-id')
        if not request_id:
            return response
        tmpl = "{meth} call to {service} for {url} used request id {req}"
        kwargs = dict(
            meth=response.request.method,
            service=self.service_type,
            url=response.request.url,
            req=request_id)

        if isinstance(obj, dict):
            obj_id = obj.get('id', obj.get('uuid'))
            if obj_id:
                kwargs['obj_id'] = obj_id
                tmpl += " returning object {obj_id}"
        self.request_log.debug(tmpl.format(**kwargs))
        return response

    def _munch_response(self, response, result_key=None, error_message=None):
        exc.raise_from_response(response, error_message=error_message)

        if not response.content:
            # This doesn't have any content
            return self._log_request_id(response)

        # Some REST calls do not return json content. Don't decode it.
        if 'application/json' not in response.headers.get('Content-Type', ''):
            return self._log_request_id(response)

        try:
            result_json = response.json()
            self._log_request_id(response, result_json)
        except Exception:
            return self._log_request_id(response)
        return result_json

    def request(
            self, url, method, run_async=False, error_message=None,
            *args, **kwargs):
        name_parts = extract_name(url)
        name = '.'.join([self.service_type, method] + name_parts)
        class_name = "".join([
            part.lower().capitalize() for part in name.split('.')])

        request_method = functools.partial(
            super(ShadeAdapter, self).request, url, method)

        class RequestTask(task_manager.BaseTask):

            def __init__(self, **kw):
                super(RequestTask, self).__init__(**kw)
                self.name = name
                self.__class__.__name__ = str(class_name)
                self.run_async = run_async

            def main(self, client):
                self.args.setdefault('raise_exc', False)
                return request_method(**self.args)

        response = self.manager.submit_task(RequestTask(**kwargs))
        if run_async:
            return response
        else:
            return self._munch_response(response, error_message=error_message)

    def _version_matches(self, version):
        api_version = self.get_api_major_version()
        if api_version:
            return api_version[0] == version
        return False
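A quick sanity check of the URL-to-name mapping implemented by `extract_name` above, mirroring its docstring examples:

```python
from shade._adapter import extract_name

assert extract_name('/servers') == ['servers']
assert extract_name('/servers/{id}/os-security-groups') == [
    'servers', 'os-security-groups']
assert extract_name('/v2.0/networks.json') == ['networks']
assert extract_name('/') == ['discovery']  # endpoint root == discovery
```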
@@ -13,9 +13,9 @@ import importlib
 import warnings
 
 from keystoneauth1 import plugin
-from openstack.cloud import _utils
 from os_client_config import constructors
 
+from shade import _utils
 from shade import exc
 
 
shade/_normalize.py (new file, 1109 lines)
File diff suppressed because it is too large.
shade/_utils.py (new file, 759 lines)
@@ -0,0 +1,759 @@
# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import contextlib
import fnmatch
import inspect
import jmespath
import munch
import netifaces
import re
import six
import sre_constants
import sys
import time
import uuid

from decorator import decorator

from shade import _log
from shade import exc
from shade import meta

_decorated_methods = []


def _exc_clear():
    """Because sys.exc_clear is gone in py3 and is not in six."""
    if sys.version_info[0] == 2:
        sys.exc_clear()


def _iterate_timeout(timeout, message, wait=2):
    """Iterate and raise an exception on timeout.

    This is a generator that will continually yield and sleep for
    wait seconds, and if the timeout is reached, will raise an exception
    with <message>.

    """
    log = _log.setup_logging('shade.iterate_timeout')

    try:
        # None as a wait winds up flowing well in the per-resource cache
        # flow. We could spread this logic around to all of the calling
        # points, but just having this treat None as "I don't have a value"
        # seems friendlier
        if wait is None:
            wait = 2
        elif wait == 0:
            # wait should be < timeout, unless timeout is None
            wait = 0.1 if timeout is None else min(0.1, timeout)
        wait = float(wait)
    except ValueError:
        raise exc.OpenStackCloudException(
            "Wait value must be an int or float value. {wait} given"
            " instead".format(wait=wait))

    start = time.time()
    count = 0
    while (timeout is None) or (time.time() < start + timeout):
        count += 1
        yield count
        log.debug('Waiting %s seconds', wait)
        time.sleep(wait)
    raise exc.OpenStackCloudTimeout(message)
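A usage sketch for `_iterate_timeout` above; `resource_is_ready` is a hypothetical predicate:

```python
from shade import _utils, exc

try:
    for count in _utils._iterate_timeout(
            60, "Timed out waiting for the resource", wait=2):
        if resource_is_ready():  # hypothetical check
            break
except exc.OpenStackCloudTimeout:
    pass  # 60 seconds elapsed without the predicate becoming true
```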
def _make_unicode(input):
    """Turn an input into unicode unconditionally

    :param input:
       A unicode, string or other object
    """
    try:
        if isinstance(input, unicode):
            return input
        if isinstance(input, str):
            return input.decode('utf-8')
        else:
            # int, for example
            return unicode(input)
    except NameError:
        # python3!
        return str(input)


def _dictify_resource(resource):
    if isinstance(resource, list):
        return [_dictify_resource(r) for r in resource]
    else:
        if hasattr(resource, 'toDict'):
            return resource.toDict()
        else:
            return resource


def _filter_list(data, name_or_id, filters):
    """Filter a list by name/ID and arbitrary meta data.

    :param list data:
        The list of dictionary data to filter. It is expected that
        each dictionary contains an 'id' and 'name'
        key if a value for name_or_id is given.
    :param string name_or_id:
        The name or ID of the entity being filtered. Can be a glob pattern,
        such as 'nb01*'.
    :param filters:
        A dictionary of meta data to use for further filtering. Elements
        of this dictionary may, themselves, be dictionaries. Example::

            {
                'last_name': 'Smith',
                'other': {
                    'gender': 'Female'
                }
            }
        OR
        A string containing a jmespath expression for further filtering.
    """
    # The logger is shade.fnmatch to allow a user/operator to configure logging
    # not to communicate about fnmatch misses (they shouldn't be too spammy,
    # but one never knows)
    log = _log.setup_logging('shade.fnmatch')
    if name_or_id:
        # name_or_id might already be unicode
        name_or_id = _make_unicode(name_or_id)
        identifier_matches = []
        bad_pattern = False
        try:
            fn_reg = re.compile(fnmatch.translate(name_or_id))
        except sre_constants.error:
            # If the fnmatch re doesn't compile, then we don't care,
            # but log it in case the user DID pass a pattern but did
            # it poorly and wants to know what went wrong with their
            # search
            fn_reg = None
        for e in data:
            e_id = _make_unicode(e.get('id', None))
            e_name = _make_unicode(e.get('name', None))

            if ((e_id and e_id == name_or_id) or
                    (e_name and e_name == name_or_id)):
                identifier_matches.append(e)
            else:
                # Only try fnmatch if we don't match exactly
                if not fn_reg:
                    # If we don't have a pattern, skip this, but set the flag
                    # so that we log the bad pattern
                    bad_pattern = True
                    continue
                if ((e_id and fn_reg.match(e_id)) or
                        (e_name and fn_reg.match(e_name))):
                    identifier_matches.append(e)
        if not identifier_matches and bad_pattern:
            log.debug("Bad pattern passed to fnmatch", exc_info=True)
        data = identifier_matches

    if not filters:
        return data

    if isinstance(filters, six.string_types):
        return jmespath.search(filters, data)

    def _dict_filter(f, d):
        if not d:
            return False
        for key in f.keys():
            if isinstance(f[key], dict):
                if not _dict_filter(f[key], d.get(key, None)):
                    return False
            elif d.get(key, None) != f[key]:
                return False
        return True

    filtered = []
    for e in data:
        filtered.append(e)
        for key in filters.keys():
            if isinstance(filters[key], dict):
                if not _dict_filter(filters[key], e.get(key, None)):
                    filtered.pop()
                    break
            elif e.get(key, None) != filters[key]:
                filtered.pop()
                break
    return filtered
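A sketch of the filtering semantics above, on made-up data:

```python
servers = [
    {'id': '1', 'name': 'nb01', 'status': {'state': 'ACTIVE'}},
    {'id': '2', 'name': 'db01', 'status': {'state': 'ERROR'}},
]
# Glob on name/id plus a nested-dict filter:
_filter_list(servers, 'nb*', {'status': {'state': 'ACTIVE'}})
# -> [{'id': '1', 'name': 'nb01', 'status': {'state': 'ACTIVE'}}]
```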
def _get_entity(cloud, resource, name_or_id, filters, **kwargs):
    """Return a single entity from the list returned by a given method.

    :param object cloud:
        The controller class (Example: the main OpenStackCloud object).
    :param string or callable resource:
        The string that identifies the resource to use to lookup the
        get_<resource>_by_id or search_<resource>s methods (Example: network)
        or a callable to invoke.
    :param string name_or_id:
        The name or ID of the entity being filtered or an object or dict.
        If this is an object/dict with an 'id' attr/key, we return it and
        bypass resource lookup.
    :param filters:
        A dictionary of meta data to use for further filtering.
        OR
        A string containing a jmespath expression for further filtering.
        Example:: "[?last_name==`Smith`] | [?other.gender==`Female`]"
    """

    # Sometimes in the control flow of shade, we already have an object
    # fetched. Rather than then needing to pull the name or id out of that
    # object, pass it in here and rely on caching to prevent us from making
    # an additional call, it's simple enough to test to see if we got an
    # object and just short-circuit return it.

    if (hasattr(name_or_id, 'id') or
            (isinstance(name_or_id, dict) and 'id' in name_or_id)):
        return name_or_id

    # If a uuid is passed short-circuit it calling the
    # get_<resource_name>_by_id method
    if getattr(cloud, 'use_direct_get', False) and _is_uuid_like(name_or_id):
        get_resource = getattr(cloud, 'get_%s_by_id' % resource, None)
        if get_resource:
            return get_resource(name_or_id)

    search = resource if callable(resource) else getattr(
        cloud, 'search_%ss' % resource, None)
    if search:
        entities = search(name_or_id, filters, **kwargs)
        if entities:
            if len(entities) > 1:
                raise exc.OpenStackCloudException(
                    "Multiple matches found for %s" % name_or_id)
            return entities[0]
    return None


def normalize_keystone_services(services):
    """Normalize the structure of keystone services

    In keystone v2, there is a field called "service_type". In v3, it's
    "type". Just make the returned dict have both.

    :param list services: A list of keystone service dicts

    :returns: A list of normalized dicts.
    """
    ret = []
    for service in services:
        service_type = service.get('type', service.get('service_type'))
        new_service = {
            'id': service['id'],
            'name': service['name'],
            'description': service.get('description', None),
            'type': service_type,
            'service_type': service_type,
            'enabled': service['enabled']
        }
        ret.append(new_service)
    return meta.obj_list_to_munch(ret)


def localhost_supports_ipv6():
    """Determine whether the local host supports IPv6

    We look for a default route that supports the IPv6 address family,
    and assume that if it is present, this host has globally routable
    IPv6 connectivity.
    """

    try:
        return netifaces.AF_INET6 in netifaces.gateways()['default']
    except AttributeError:
        return False


def normalize_users(users):
    ret = [
        dict(
            id=user.get('id'),
            email=user.get('email'),
            name=user.get('name'),
            username=user.get('username'),
            default_project_id=user.get('default_project_id',
                                        user.get('tenantId')),
            domain_id=user.get('domain_id'),
            enabled=user.get('enabled'),
            description=user.get('description')
        ) for user in users
    ]
    return meta.obj_list_to_munch(ret)


def normalize_domains(domains):
    ret = [
        dict(
            id=domain.get('id'),
            name=domain.get('name'),
            description=domain.get('description'),
            enabled=domain.get('enabled'),
        ) for domain in domains
    ]
    return meta.obj_list_to_munch(ret)


def normalize_groups(domains):
    """Normalize Identity groups."""
    ret = [
        dict(
            id=domain.get('id'),
            name=domain.get('name'),
            description=domain.get('description'),
            domain_id=domain.get('domain_id'),
        ) for domain in domains
    ]
    return meta.obj_list_to_munch(ret)


def normalize_role_assignments(assignments):
    """Put role_assignments into a form that works with search/get interface.

    Role assignments have the structure::

        [
            {
                "role": {
                    "id": "--role-id--"
                },
                "scope": {
                    "domain": {
                        "id": "--domain-id--"
                    }
                },
                "user": {
                    "id": "--user-id--"
                }
            },
        ]

    Which is hard to work with in the rest of our interface. Map this to be::

        [
            {
                "id": "--role-id--",
                "domain": "--domain-id--",
                "user": "--user-id--",
            }
        ]

    Scope can be "domain" or "project" and "user" can also be "group".

    :param list assignments: A list of dictionaries of role assignments.

    :returns: A list of flattened/normalized role assignment dicts.
    """
    new_assignments = []
    for assignment in assignments:
        new_val = munch.Munch({'id': assignment['role']['id']})
        for scope in ('project', 'domain'):
            if scope in assignment['scope']:
                new_val[scope] = assignment['scope'][scope]['id']
        for assignee in ('user', 'group'):
            if assignee in assignment:
                new_val[assignee] = assignment[assignee]['id']
        new_assignments.append(new_val)
    return new_assignments


def normalize_flavor_accesses(flavor_accesses):
    """Normalize Flavor access list."""
    return [munch.Munch(
        dict(
            flavor_id=acl.get('flavor_id'),
            project_id=acl.get('project_id') or acl.get('tenant_id'),
        )
    ) for acl in flavor_accesses
    ]


def valid_kwargs(*valid_args):
    # This decorator checks if arguments passed as **kwargs to a function are
    # present in valid_args.
    #
    # Typically, valid_kwargs is used when we want to distinguish between
    # None and omitted arguments and we still want to validate the argument
    # list.
    #
    # Example usage:
    #
    # @valid_kwargs('opt_arg1', 'opt_arg2')
    # def my_func(self, mandatory_arg1, mandatory_arg2, **kwargs):
    #     ...
    #
    @decorator
    def func_wrapper(func, *args, **kwargs):
        argspec = inspect.getargspec(func)
        for k in kwargs:
            if k not in argspec.args[1:] and k not in valid_args:
                raise TypeError(
                    "{f}() got an unexpected keyword argument "
                    "'{arg}'".format(f=inspect.stack()[1][3], arg=k))
        return func(*args, **kwargs)
    return func_wrapper
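The decorator above rejects unexpected keyword arguments; a sketch:

```python
@valid_kwargs('opt_arg1', 'opt_arg2')
def my_func(mandatory_arg, **kwargs):
    return kwargs

my_func('x', opt_arg1=1)   # fine
my_func('x', bogus=True)   # raises TypeError
```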
def cache_on_arguments(*cache_on_args, **cache_on_kwargs):
    _cache_name = cache_on_kwargs.pop('resource', None)

    def _inner_cache_on_arguments(func):
        def _cache_decorator(obj, *args, **kwargs):
            the_method = obj._get_cache(_cache_name).cache_on_arguments(
                *cache_on_args, **cache_on_kwargs)(
                    func.__get__(obj, type(obj)))
            return the_method(*args, **kwargs)

        def invalidate(obj, *args, **kwargs):
            return obj._get_cache(
                _cache_name).cache_on_arguments()(func).invalidate(
                    *args, **kwargs)

        _cache_decorator.invalidate = invalidate
        _cache_decorator.func = func
        _decorated_methods.append(func.__name__)

        return _cache_decorator
    return _inner_cache_on_arguments


@contextlib.contextmanager
def shade_exceptions(error_message=None):
    """Context manager for dealing with shade exceptions.

    :param string error_message: String to use for the exception message
        content on non-OpenStackCloudExceptions.

    Useful for avoiding wrapping shade OpenStackCloudException exceptions
    within themselves. Code called from within the context may throw such
    exceptions without having to catch and reraise them.

    Non-OpenStackCloudException exceptions thrown within the context will
    be wrapped and the exception message will be appended to the given error
    message.
    """
    try:
        yield
    except exc.OpenStackCloudException:
        raise
    except Exception as e:
        if error_message is None:
            error_message = str(e)
        raise exc.OpenStackCloudException(error_message)
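A usage sketch for the context manager above; `do_widget_things` is hypothetical:

```python
with shade_exceptions("Error listing widgets"):
    # Any non-shade exception raised here is re-raised as
    # OpenStackCloudException("Error listing widgets").
    do_widget_things()
```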
def safe_dict_min(key, data):
    """Safely find the minimum for a given key in a list of dict objects.

    This will find the minimum integer value for specific dictionary key
    across a list of dictionaries. The values for the given key MUST be
    integers, or string representations of an integer.

    The dictionary key does not have to be present in all (or any)
    of the elements/dicts within the data set.

    :param string key: The dictionary key to search for the minimum value.
    :param list data: List of dicts to use for the data set.

    :returns: None if the field was not found in any elements, or
        the minimum value for the field otherwise.
    """
    min_value = None
    for d in data:
        if (key in d) and (d[key] is not None):
            try:
                val = int(d[key])
            except ValueError:
                raise exc.OpenStackCloudException(
                    "Search for minimum value failed. "
                    "Value for {key} is not an integer: {value}".format(
                        key=key, value=d[key])
                )
            if (min_value is None) or (val < min_value):
                min_value = val
    return min_value


def safe_dict_max(key, data):
    """Safely find the maximum for a given key in a list of dict objects.

    This will find the maximum integer value for specific dictionary key
    across a list of dictionaries. The values for the given key MUST be
    integers, or string representations of an integer.

    The dictionary key does not have to be present in all (or any)
    of the elements/dicts within the data set.

    :param string key: The dictionary key to search for the maximum value.
    :param list data: List of dicts to use for the data set.

    :returns: None if the field was not found in any elements, or
        the maximum value for the field otherwise.
    """
    max_value = None
    for d in data:
        if (key in d) and (d[key] is not None):
            try:
                val = int(d[key])
            except ValueError:
                raise exc.OpenStackCloudException(
                    "Search for maximum value failed. "
                    "Value for {key} is not an integer: {value}".format(
                        key=key, value=d[key])
                )
            if (max_value is None) or (val > max_value):
                max_value = val
    return max_value
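The min/max helpers above, on a made-up data set:

```python
data = [{'core': '4'}, {'core': 8}, {'ram': 1024}]
safe_dict_min('core', data)  # -> 4 (string '4' is coerced to int)
safe_dict_max('core', data)  # -> 8
safe_dict_max('disk', data)  # -> None (key absent everywhere)
```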
def _call_client_and_retry(client, url, retry_on=None,
                           call_retries=3, retry_wait=2,
                           **kwargs):
    """Method to provide retry operations.

    Some APIs utilize HTTP errors on certain operations to indicate that
    the resource is presently locked, and as such this mechanism provides
    the ability to retry upon known error codes.

    :param object client: The client method, such as:
        ``self.baremetal_client.post``
    :param string url: The URL to perform the operation upon.
    :param retry_on: A list of error codes that can be retried on.
        The method also supports a single integer to be
        defined.
    :param integer call_retries: The number of times to retry the call upon
        the error code defined by the 'retry_on'
        parameter. Default: 3
    :param integer retry_wait: The time in seconds to wait between retry
        attempts. Default: 2

    :returns: The object returned by the client call.
    """

    # NOTE(TheJulia): This method, as of this note, does not have direct
    # unit tests, although is fairly well tested by the tests checking
    # retry logic in test_baremetal_node.py.
    log = _log.setup_logging('shade.http')

    if isinstance(retry_on, int):
        retry_on = [retry_on]

    count = 0
    while (count < call_retries):
        count += 1
        try:
            ret_val = client(url, **kwargs)
        except exc.OpenStackCloudHTTPError as e:
            if (retry_on is not None and
                    e.response.status_code in retry_on):
                log.debug('Received retryable error %(err)s, waiting '
                          '%(wait)s seconds to retry', {
                              'err': e.response.status_code,
                              'wait': retry_wait
                          })
                time.sleep(retry_wait)
                continue
            else:
                raise
        # Break out of the loop, since the loop should only continue
        # when we encounter a known connection error.
        return ret_val
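A retry sketch for the helper above, mirroring the baremetal usage its comment mentions; the client method and URL are hypothetical:

```python
_call_client_and_retry(
    cloud.baremetal_client.post,        # hypothetical client method
    '/nodes/{uuid}/states/provision',
    retry_on=[409, 503],                # e.g. node locked / service busy
    call_retries=3, retry_wait=2,
    json={'target': 'active'})
```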
def parse_range(value):
    """Parse a numerical range string.

    Break down a range expression into its operator and numerical parts.
    This expression must be a string. Valid values must be an integer string,
    optionally preceded by one of the following operators::

        - "<"  : Less than
        - ">"  : Greater than
        - "<=" : Less than or equal to
        - ">=" : Greater than or equal to

    Some examples of valid values and function return values::

        - "1024"  : returns (None, 1024)
        - "<5"    : returns ("<", 5)
        - ">=100" : returns (">=", 100)

    :param string value: The range expression to be parsed.

    :returns: A tuple with the operator string (or None if no operator
        was given) and the integer value. None is returned if parsing failed.
    """
    if value is None:
        return None

    range_exp = re.match(r'(<|>|<=|>=){0,1}(\d+)$', value)
    if range_exp is None:
        return None

    op = range_exp.group(1)
    num = int(range_exp.group(2))
    return (op, num)


def range_filter(data, key, range_exp):
    """Filter a list by a single range expression.

    :param list data: List of dictionaries to be searched.
    :param string key: Key name to search within the data set.
    :param string range_exp: The expression describing the range of values.

    :returns: A list subset of the original data set.
    :raises: OpenStackCloudException on invalid range expressions.
    """
    filtered = []
    range_exp = str(range_exp).upper()

    if range_exp == "MIN":
        key_min = safe_dict_min(key, data)
        if key_min is None:
            return []
        for d in data:
            if int(d[key]) == key_min:
                filtered.append(d)
        return filtered
    elif range_exp == "MAX":
        key_max = safe_dict_max(key, data)
        if key_max is None:
            return []
        for d in data:
            if int(d[key]) == key_max:
                filtered.append(d)
        return filtered

    # Not looking for a min or max, so a range or exact value must
    # have been supplied.
    val_range = parse_range(range_exp)

    # If parsing the range fails, it must be a bad value.
    if val_range is None:
        raise exc.OpenStackCloudException(
            "Invalid range value: {value}".format(value=range_exp))

    op = val_range[0]
    if op:
        # Range matching
        for d in data:
            d_val = int(d[key])
            if op == '<':
                if d_val < val_range[1]:
                    filtered.append(d)
            elif op == '>':
                if d_val > val_range[1]:
                    filtered.append(d)
            elif op == '<=':
                if d_val <= val_range[1]:
                    filtered.append(d)
            elif op == '>=':
                if d_val >= val_range[1]:
                    filtered.append(d)
        return filtered
    else:
        # Exact number match
        for d in data:
            if int(d[key]) == val_range[1]:
                filtered.append(d)
        return filtered
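The range helpers above, end to end:

```python
parse_range('>=100')  # -> ('>=', 100)

flavors = [{'ram': 512}, {'ram': 1024}, {'ram': 2048}]
range_filter(flavors, 'ram', '>=1024')  # -> the 1024 and 2048 entries
range_filter(flavors, 'ram', 'MIN')     # -> [{'ram': 512}]
```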
def generate_patches_from_kwargs(operation, **kwargs):
    """Given a set of parameters, returns a list with the
    valid patch values.

    :param string operation: The operation to perform.
    :param dict kwargs: Dict of parameters.

    :returns: A list with the right patch values.
    """
    patches = []
    for k, v in kwargs.items():
        patch = {'op': operation,
                 'value': v,
                 'path': '/%s' % k}
        patches.append(patch)
    # Sort by path so the result is deterministic on both py2 and py3
    # (sorting raw dicts raises TypeError on py3).
    return sorted(patches, key=lambda p: p['path'])
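A sketch of the JSON-patch helper above (output order follows the path sort):

```python
generate_patches_from_kwargs('replace', name='node-1', extra='x')
# -> [{'op': 'replace', 'path': '/extra', 'value': 'x'},
#     {'op': 'replace', 'path': '/name', 'value': 'node-1'}]
```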
class FileSegment(object):
    """File-like object to pass to requests."""

    def __init__(self, filename, offset, length):
        self.filename = filename
        self.offset = offset
        self.length = length
        self.pos = 0
        self._file = open(filename, 'rb')
        self.seek(0)

    def tell(self):
        return self._file.tell() - self.offset

    def seek(self, offset, whence=0):
        if whence == 0:
            self._file.seek(self.offset + offset, whence)
        elif whence == 1:
            self._file.seek(offset, whence)
        elif whence == 2:
            self._file.seek(self.offset + self.length - offset, 0)

    def read(self, size=-1):
        remaining = self.length - self.pos
        if remaining <= 0:
            return b''

        to_read = remaining if size < 0 else min(size, remaining)
        chunk = self._file.read(to_read)
        self.pos += len(chunk)

        return chunk

    def reset(self):
        self._file.seek(self.offset, 0)
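A usage sketch for `FileSegment`: read one fixed-size slice of a file and rewind it before a retry (the path and length are hypothetical):

```python
seg = FileSegment('/tmp/image.qcow2', offset=0, length=100)
first_chunk = seg.read()  # bytes 0..99 of the file
seg.reset()               # rewind the slice before retrying an upload
```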
def _format_uuid_string(string):
    return (string.replace('urn:', '')
                  .replace('uuid:', '')
                  .strip('{}')
                  .replace('-', '')
                  .lower())


def _is_uuid_like(val):
    """Returns validation of a value as a UUID.

    :param val: Value to verify
    :type val: string
    :returns: bool

    .. versionchanged:: 1.1.1
       Support non-lowercase UUIDs.
    """
    try:
        return str(uuid.UUID(val)).replace('-', '') == _format_uuid_string(val)
    except (TypeError, ValueError, AttributeError):
        return False
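The UUID check above accepts the common braced and urn spellings:

```python
import uuid

_is_uuid_like(str(uuid.uuid4()))      # -> True
_is_uuid_like('{%s}' % uuid.uuid4())  # -> True (braces are stripped)
_is_uuid_like('not-a-uuid')           # -> False
```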
shade/exc.py (160 lines changed)
@@ -12,4 +12,162 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-from openstack.cloud.exc import *  # noqa
+import sys
+import json
+
+import munch
+from requests import exceptions as _rex
+
+from shade import _log
+
+
+class OpenStackCloudException(Exception):
+
+    log_inner_exceptions = False
+
+    def __init__(self, message, extra_data=None, **kwargs):
+        args = [message]
+        if extra_data:
+            if isinstance(extra_data, munch.Munch):
+                extra_data = extra_data.toDict()
+            args.append("Extra: {0}".format(str(extra_data)))
+        super(OpenStackCloudException, self).__init__(*args, **kwargs)
+        self.extra_data = extra_data
+        # NOTE(mordred) The next two are not used for anything, but
+        # they are public attributes so we keep them around.
+        self.inner_exception = sys.exc_info()
+        self.orig_message = message
+
+    def log_error(self, logger=None):
+        # NOTE(mordred) This method is here for backwards compat. As shade
+        # no longer wraps any exceptions, this doesn't do anything.
+        pass
+
+
+class OpenStackCloudCreateException(OpenStackCloudException):
+
+    def __init__(self, resource, resource_id, extra_data=None, **kwargs):
+        super(OpenStackCloudCreateException, self).__init__(
+            message="Error creating {resource}: {resource_id}".format(
+                resource=resource, resource_id=resource_id),
+            extra_data=extra_data, **kwargs)
+        self.resource_id = resource_id
+
+
+class OpenStackCloudTimeout(OpenStackCloudException):
+    pass
+
+
+class OpenStackCloudUnavailableExtension(OpenStackCloudException):
+    pass
+
+
+class OpenStackCloudUnavailableFeature(OpenStackCloudException):
+    pass
+
+
+class OpenStackCloudHTTPError(OpenStackCloudException, _rex.HTTPError):
+
+    def __init__(self, *args, **kwargs):
+        OpenStackCloudException.__init__(self, *args, **kwargs)
+        _rex.HTTPError.__init__(self, *args, **kwargs)
+
+
+class OpenStackCloudBadRequest(OpenStackCloudHTTPError):
+    """There is something wrong with the request payload.
+
+    Possible reasons can include malformed json or invalid values to
+    parameters such as flavorRef to a server create.
+    """
+
+
+class OpenStackCloudURINotFound(OpenStackCloudHTTPError):
+    pass
+
+# Backwards compat
+OpenStackCloudResourceNotFound = OpenStackCloudURINotFound
+
+
+def _log_response_extras(response):
+    # Sometimes we get weird HTML errors. This is usually from load balancers
+    # or other things. Log them to a special logger so that they can be
+    # toggled independently - and at debug level so that a person logging
+    # shade.* only gets them at debug.
+    if response.headers.get('content-type') != 'text/html':
+        return
+    try:
+        if int(response.headers.get('content-length', 0)) == 0:
+            return
+    except Exception:
+        return
+    logger = _log.setup_logging('shade.http')
+    if response.reason:
+        logger.debug(
+            "Non-standard error '{reason}' returned from {url}:".format(
+                reason=response.reason,
+                url=response.url))
+    else:
+        logger.debug(
+            "Non-standard error returned from {url}:".format(
+                url=response.url))
+    for response_line in response.text.split('\n'):
+        logger.debug(response_line)
+
+
+# Logic shamelessly stolen from requests
+def raise_from_response(response, error_message=None):
+    msg = ''
+    if 400 <= response.status_code < 500:
+        source = "Client"
+    elif 500 <= response.status_code < 600:
+        source = "Server"
+    else:
+        return
+
+    remote_error = "Error for url: {url}".format(url=response.url)
+    try:
+        details = response.json()
+        # Nova returns documents that look like
+        # {statusname: {'message': message, 'code': code}}
+        detail_keys = list(details.keys())
+        if len(detail_keys) == 1:
+            detail_key = detail_keys[0]
+            detail_message = details[detail_key].get('message')
+            if detail_message:
+                remote_error += " {message}".format(message=detail_message)
+    except ValueError:
+        if response.reason:
+            remote_error += " {reason}".format(reason=response.reason)
+    except AttributeError:
+        if response.reason:
+            remote_error += " {reason}".format(reason=response.reason)
+        try:
+            json_resp = json.loads(details[detail_key])
+            fault_string = json_resp.get('faultstring')
+            if fault_string:
+                remote_error += " {fault}".format(fault=fault_string)
+        except Exception:
+            pass
+
+    _log_response_extras(response)
+
+    if error_message:
+        msg = '{error_message}. ({code}) {source} {remote_error}'.format(
+            error_message=error_message,
+            source=source,
+            code=response.status_code,
+            remote_error=remote_error)
+    else:
+        msg = '({code}) {source} {remote_error}'.format(
+            code=response.status_code,
+            source=source,
+            remote_error=remote_error)
+
+    # Special case 404 since we raised a specific one for neutron exceptions
+    # before
+    if response.status_code == 404:
+        raise OpenStackCloudURINotFound(msg, response=response)
+    elif response.status_code == 400:
+        raise OpenStackCloudBadRequest(msg, response=response)
+    if msg:
+        raise OpenStackCloudHTTPError(msg, response=response)
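A sketch of the restored error path: a 404 run through `raise_from_response` surfaces as `OpenStackCloudURINotFound` (the `response` object here stands in for a real `requests.Response`):

```python
from shade import exc

try:
    exc.raise_from_response(response, error_message="Error fetching server")
except exc.OpenStackCloudURINotFound as e:
    print(e)  # "Error fetching server. (404) Client Error for url: ..."
```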
@@ -14,11 +14,10 @@
 
 import functools
 
-from openstack import exceptions
-from openstack.cloud import _utils
-from openstack.config import loader
+import os_client_config
 
 import shade
+from shade import _utils
 
 
 class OpenStackInventory(object):
@@ -32,8 +31,8 @@ class OpenStackInventory(object):
             use_direct_get=False):
         if config_files is None:
             config_files = []
-        config = loader.OpenStackConfig(
-            config_files=loader.CONFIG_FILES + config_files)
+        config = os_client_config.config.OpenStackConfig(
+            config_files=os_client_config.config.CONFIG_FILES + config_files)
         self.extra_config = config.get_extra_config(
             config_key, config_defaults)
 
@@ -48,7 +47,7 @@ class OpenStackInventory(object):
                 shade.OpenStackCloud(
                     cloud_config=config.get_one_cloud(cloud))
             ]
-        except exceptions.ConfigException as e:
+        except os_client_config.exceptions.OpenStackConfigException as e:
             raise shade.OpenStackCloudException(e)
 
         if private:
@@ -11,6 +11,7 @@
# limitations under the License.

import base64
import collections
import copy
import datetime
import functools
@@ -20,28 +21,33 @@ import iso8601
import json
import jsonpatch
import operator
import os_client_config.defaults
import six
import threading
import time
import warnings

import dogpile.cache
import munch
import requestsexceptions
from six.moves import urllib

import keystoneauth1.exceptions
import keystoneauth1.session
import os
from openstack.cloud import _utils
from openstack.config import loader
from openstack import connection
from openstack import utils
import os_client_config

import shade
from shade import _adapter
from shade import exc
from shade._heat import event_utils
from shade._heat import template_utils
from shade import _log
from shade import _legacy_clients
from shade import _normalize
from shade import meta
from shade import task_manager
from shade import _utils

OBJECT_MD5_KEY = 'x-object-meta-x-shade-md5'
OBJECT_SHA256_KEY = 'x-object-meta-x-shade-sha256'
@@ -93,7 +99,7 @@ def _no_pending_stacks(stacks):


class OpenStackCloud(
        connection.Connection,
        _normalize.Normalizer,
        _legacy_clients.LegacyClientFactoryMixin):
    """Represent a connection to an OpenStack Cloud.

@@ -115,7 +121,7 @@
        string. Optional, defaults to None.
    :param app_version: Version of the application to be appended to the
        user-agent string. Optional, defaults to None.
    :param CloudRegion cloud_config: Cloud config object from openstack.config
    :param CloudConfig cloud_config: Cloud config object from os-client-config
        In the future, this will be the only way
        to pass in cloud configuration, but is
        being phased in currently.
@@ -130,28 +136,167 @@
            app_version=None,
            use_direct_get=False,
            **kwargs):
        super(OpenStackCloud, self).__init__(
            config=cloud_config,
            strict=strict,
            app_name=app_name,
            app_version=app_version,
            use_direct_get=use_direct_get,
            **kwargs)

        # Logging in shade is based on 'shade' not 'openstack'
        self.log = _log.setup_logging('shade')

        # shade has this as cloud_config, but sdk has config
        self.cloud_config = self.config
        if not cloud_config:
            config = os_client_config.OpenStackConfig(
                app_name=app_name, app_version=app_version)

        # Backwards compat for get_extra behavior
        self._extra_config = self.config.get_client_config(
            cloud_config = config.get_one_cloud(**kwargs)

        self.name = cloud_config.name
        self.auth = cloud_config.get_auth_args()
        self.region_name = cloud_config.region_name
        self.default_interface = cloud_config.get_interface()
        self.private = cloud_config.config.get('private', False)
        self.api_timeout = cloud_config.config['api_timeout']
        self.image_api_use_tasks = cloud_config.config['image_api_use_tasks']
        self.secgroup_source = cloud_config.config['secgroup_source']
        self.force_ipv4 = cloud_config.force_ipv4
        self.strict_mode = strict
        # TODO(mordred) When os-client-config adds a "get_client_settings()"
        # method to CloudConfig - remove this.
        self._extra_config = cloud_config._openstack_config.get_extra_config(
            'shade', {
                'get_flavor_extra_specs': True,
            })

        # Place to store legacy client objects
        if manager is not None:
            self.manager = manager
        else:
            self.manager = task_manager.TaskManager(
                name=':'.join([self.name, self.region_name]), client=self)

        self._external_ipv4_names = cloud_config.get_external_ipv4_networks()
        self._internal_ipv4_names = cloud_config.get_internal_ipv4_networks()
        self._external_ipv6_names = cloud_config.get_external_ipv6_networks()
        self._internal_ipv6_names = cloud_config.get_internal_ipv6_networks()
        self._nat_destination = cloud_config.get_nat_destination()
        self._default_network = cloud_config.get_default_network()

        self._floating_ip_source = cloud_config.config.get(
            'floating_ip_source')
        if self._floating_ip_source:
            if self._floating_ip_source.lower() == 'none':
                self._floating_ip_source = None
            else:
                self._floating_ip_source = self._floating_ip_source.lower()

        self._use_external_network = cloud_config.config.get(
            'use_external_network', True)
        self._use_internal_network = cloud_config.config.get(
            'use_internal_network', True)

        # Work around older TaskManager objects that don't have submit_task
        if not hasattr(self.manager, 'submit_task'):
            self.manager.submit_task = self.manager.submitTask

        (self.verify, self.cert) = cloud_config.get_requests_verify_args()
        # Turn off urllib3 warnings about insecure certs if we have
        # explicitly configured requests to tell it we do not want
        # cert verification
        if not self.verify:
            self.log.debug(
                "Turning off Insecure SSL warnings since verify=False")
            category = requestsexceptions.InsecureRequestWarning
            if category:
                # InsecureRequestWarning references a Warning class or is None
                warnings.filterwarnings('ignore', category=category)

        self._disable_warnings = {}
        self.use_direct_get = use_direct_get

        self._servers = None
        self._servers_time = 0
        self._servers_lock = threading.Lock()

        self._ports = None
        self._ports_time = 0
        self._ports_lock = threading.Lock()

        self._floating_ips = None
        self._floating_ips_time = 0
        self._floating_ips_lock = threading.Lock()

        self._floating_network_by_router = None
        self._floating_network_by_router_run = False
        self._floating_network_by_router_lock = threading.Lock()

        self._networks_lock = threading.Lock()
        self._reset_network_caches()

        cache_expiration_time = int(cloud_config.get_cache_expiration_time())
        cache_class = cloud_config.get_cache_class()
        cache_arguments = cloud_config.get_cache_arguments()

        self._resource_caches = {}

        if cache_class != 'dogpile.cache.null':
            self.cache_enabled = True
            self._cache = self._make_cache(
                cache_class, cache_expiration_time, cache_arguments)
            expirations = cloud_config.get_cache_expiration()
            for expire_key in expirations.keys():
                # Only build caches for things we have list operations for
                if getattr(
                        self, 'list_{0}'.format(expire_key), None):
                    self._resource_caches[expire_key] = self._make_cache(
                        cache_class, expirations[expire_key], cache_arguments)

            self._SERVER_AGE = DEFAULT_SERVER_AGE
            self._PORT_AGE = DEFAULT_PORT_AGE
            self._FLOAT_AGE = DEFAULT_FLOAT_AGE
        else:
            self.cache_enabled = False

            def _fake_invalidate(unused):
                pass

            class _FakeCache(object):
                def invalidate(self):
                    pass

            # Don't cache list_servers if we're not caching things.
            # Replace this with a more specific cache configuration
            # soon.
            self._SERVER_AGE = 0
            self._PORT_AGE = 0
            self._FLOAT_AGE = 0
            self._cache = _FakeCache()
            # Undecorate cache decorated methods. Otherwise the call stacks
            # wind up being stupidly long and hard to debug
            for method in _utils._decorated_methods:
                meth_obj = getattr(self, method, None)
                if not meth_obj:
                    continue
                if (hasattr(meth_obj, 'invalidate')
                        and hasattr(meth_obj, 'func')):
                    new_func = functools.partial(meth_obj.func, self)
                    new_func.invalidate = _fake_invalidate
                    setattr(self, method, new_func)

        # If server expiration time is set explicitly, use that. Otherwise
        # fall back to whatever it was before
        self._SERVER_AGE = cloud_config.get_cache_resource_expiration(
            'server', self._SERVER_AGE)
        self._PORT_AGE = cloud_config.get_cache_resource_expiration(
            'port', self._PORT_AGE)
        self._FLOAT_AGE = cloud_config.get_cache_resource_expiration(
            'floating_ip', self._FLOAT_AGE)

        self._container_cache = dict()
        self._file_hash_cache = dict()

        self._keystone_session = None

        self._legacy_clients = {}
        self._raw_clients = {}

        self._local_ipv6 = (
            _utils.localhost_supports_ipv6() if not self.force_ipv4 else False)

        self.cloud_config = cloud_config
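The restored constructor above routes every REST call through a per-cloud `TaskManager` named `<cloud>:<region>`; a minimal sketch (`my_cloud_config` is a hypothetical `CloudConfig`):

```python
cloud = shade.OpenStackCloud(cloud_config=my_cloud_config)
print(cloud.manager.name)  # e.g. 'mycloud:RegionOne'
```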
def connect_as(self, **kwargs):
|
||||
"""Make a new OpenStackCloud object with new auth context.
|
||||
@ -179,7 +324,7 @@ class OpenStackCloud(
|
||||
if self.cloud_config._openstack_config:
|
||||
config = self.cloud_config._openstack_config
|
||||
else:
|
||||
config = loader.OpenStackConfig(
|
||||
config = os_client_config.OpenStackConfig(
|
||||
app_name=self.cloud_config._app_name,
|
||||
app_version=self.cloud_config._app_version,
|
||||
load_yaml_config=False)
|
||||
@ -322,6 +467,98 @@ class OpenStackCloud(
|
||||
return int(version[0])
|
||||
return version
|
||||
|
||||
def _get_versioned_client(
|
||||
self, service_type, min_version=None, max_version=None):
|
||||
config_version = self.cloud_config.get_api_version(service_type)
|
||||
config_major = self._get_major_version_id(config_version)
|
||||
max_major = self._get_major_version_id(max_version)
|
||||
min_major = self._get_major_version_id(min_version)
|
        # NOTE(mordred) The shade logic for versions is slightly different
        # than the ksa Adapter constructor logic. shade knows the versions
        # it knows, and uses them when it detects them. However, if a user
        # requests a version, and it's not found, and a different one shade
        # does know about is found, that's a warning in shade.
        if config_version:
            if min_major and config_major < min_major:
                raise exc.OpenStackCloudException(
                    "Version {config_version} requested for {service_type}"
                    " but shade understands a minimum of {min_version}".format(
                        config_version=config_version,
                        service_type=service_type,
                        min_version=min_version))
            elif max_major and config_major > max_major:
                raise exc.OpenStackCloudException(
                    "Version {config_version} requested for {service_type}"
                    " but shade understands a maximum of {max_version}".format(
                        config_version=config_version,
                        service_type=service_type,
                        max_version=max_version))
            request_min_version = config_version
            request_max_version = '{version}.latest'.format(
                version=config_major)
            adapter = _adapter.ShadeAdapter(
                session=self.keystone_session,
                manager=self.manager,
                service_type=self.cloud_config.get_service_type(service_type),
                service_name=self.cloud_config.get_service_name(service_type),
                interface=self.cloud_config.get_interface(service_type),
                endpoint_override=self.cloud_config.get_endpoint(service_type),
                region_name=self.cloud_config.region,
                min_version=request_min_version,
                max_version=request_max_version,
                shade_logger=self.log)
            if adapter.get_endpoint():
                return adapter

        adapter = _adapter.ShadeAdapter(
            session=self.keystone_session,
            manager=self.manager,
            service_type=self.cloud_config.get_service_type(service_type),
            service_name=self.cloud_config.get_service_name(service_type),
            interface=self.cloud_config.get_interface(service_type),
            endpoint_override=self.cloud_config.get_endpoint(service_type),
            region_name=self.cloud_config.region,
            min_version=min_version,
            max_version=max_version,
            shade_logger=self.log)

        # data.api_version can be None if no version was detected, such
        # as with neutron
        api_version = adapter.get_api_major_version(
            endpoint_override=self.cloud_config.get_endpoint(service_type))
        api_major = self._get_major_version_id(api_version)

        # If we detect a different version than was configured, warn the user.
        # shade still knows what to do - but if the user gave us an explicit
        # version and we couldn't find it, they may want to investigate.
        if api_version and (api_major != config_major):
            warning_msg = (
                '{service_type} is configured for {config_version}'
                ' but only {api_version} is available. shade is happy'
                ' with this version, but if you were trying to force an'
                ' override, that did not happen. You may want to check'
                ' your cloud, or remove the version specification from'
                ' your config.'.format(
                    service_type=service_type,
                    config_version=config_version,
                    api_version='.'.join([str(f) for f in api_version])))
            self.log.debug(warning_msg)
            warnings.warn(warning_msg)
        return adapter

    def _get_raw_client(
            self, service_type, api_version=None, endpoint_override=None):
        return _adapter.ShadeAdapter(
            session=self.keystone_session,
            manager=self.manager,
            service_type=self.cloud_config.get_service_type(service_type),
            service_name=self.cloud_config.get_service_name(service_type),
            interface=self.cloud_config.get_interface(service_type),
            endpoint_override=self.cloud_config.get_endpoint(
                service_type) or endpoint_override,
            region_name=self.cloud_config.region,
            shade_logger=self.log)

    def _is_client_version(self, client, version):
        client_name = '_{client}_client'.format(client=client)
        client = getattr(self, client_name)
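
The negotiation restored above is two-pass: when the user pinned a version, shade first builds an adapter constrained to exactly that major version and returns it if an endpoint matches; otherwise it retries with the full range shade supports and warns when the detected major differs from the configured one. A minimal standalone sketch of that flow, where ``probe`` is a hypothetical stand-in for constructing a ShadeAdapter and asking it for an endpoint::

    import warnings

    def negotiate_major(probe, config_major, known_min, known_max):
        # probe(lo, hi) stands in for building a ShadeAdapter with that
        # min/max range and returning the detected major version, or
        # None when no endpoint in the range is available.
        if config_major is not None:
            if config_major < known_min or config_major > known_max:
                raise ValueError(
                    'version %s outside supported range %s..%s'
                    % (config_major, known_min, known_max))
            # First pass: pin to the requested major version.
            found = probe(config_major, config_major)
            if found is not None:
                return found
        # Second pass: accept anything in the supported range...
        found = probe(known_min, known_max)
        if config_major is not None and found != config_major:
            # ...but warn that the explicit override was not honored.
            warnings.warn('configured for %s but only %s is available'
                          % (config_major, found))
        return found
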
@ -451,7 +688,16 @@ class OpenStackCloud(

    @property
    def keystone_session(self):
        return self.session
        if self._keystone_session is None:
            try:
                self._keystone_session = self.cloud_config.get_session()
                if hasattr(self._keystone_session, 'additional_user_agent'):
                    self._keystone_session.additional_user_agent.append(
                        ('shade', shade.__version__))
            except Exception as e:
                raise exc.OpenStackCloudException(
                    "Error authenticating to keystone: %s" % str(e))
        return self._keystone_session

    @property
    def _keystone_catalog(self):

@ -530,7 +776,7 @@ class OpenStackCloud(
    def _get_current_location(self, project_id=None, zone=None):
        return munch.Munch(
            cloud=self.name,
            region_name=self.config.region_name,
            region_name=self.region_name,
            zone=zone,
            project=self._get_project_info(project_id),
        )

@ -644,6 +890,46 @@ class OpenStackCloud(
        """
        return meta.get_and_munchify(key, data)

    @_utils.cache_on_arguments()
    def list_projects(self, domain_id=None, name_or_id=None, filters=None):
        """List projects.

        With no parameters, returns a full listing of all visible projects.

        :param domain_id: domain ID to scope the searched projects.
        :param name_or_id: project name or ID.
        :param filters: a dict containing additional filters to use
            OR
            a string containing a jmespath expression for further filtering.
            Example:: "[?last_name==`Smith`] | [?other.gender==`Female`]"

        :returns: a list of ``munch.Munch`` containing the projects

        :raises: ``OpenStackCloudException``: if something goes wrong during
            the OpenStack API call.
        """
        kwargs = dict(
            filters=filters,
            domain_id=domain_id)
        if self._is_client_version('identity', 3):
            kwargs['obj_name'] = 'project'

        pushdown, filters = _normalize._split_filters(**kwargs)

        try:
            if self._is_client_version('identity', 3):
                key = 'projects'
            else:
                key = 'tenants'
            data = self._identity_client.get(
                '/{endpoint}'.format(endpoint=key), params=pushdown)
            projects = self._normalize_projects(
                self._get_and_munchify(key, data))
        except Exception as e:
            self.log.debug("Failed to list projects", exc_info=True)
            raise exc.OpenStackCloudException(str(e))
        return _utils._filter_list(projects, name_or_id, filters)

    def search_projects(self, name_or_id=None, filters=None, domain_id=None):
        '''Backwards compatibility method for search_projects
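
The ``filters`` parameter restored above does double duty; a hedged usage sketch (the cloud name and project names are made up)::

    import shade

    cloud = shade.openstack_cloud(cloud='mycloud')

    # A dict matches fields on the normalized project objects.
    enabled = cloud.list_projects(filters={'enabled': True})

    # A string is treated as a jmespath expression over the list.
    demos = cloud.list_projects(filters="[?starts_with(name, `demo`)]")
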
@ -1154,11 +1440,7 @@ class OpenStackCloud(
        return self.name

    def get_region(self):
        return self.config.region_name

    @property
    def region_name(self):
        return self.config.region_name
        return self.region_name

    def get_flavor_name(self, flavor_id):
        flavor = self.get_flavor(flavor_id, get_extra=False)

@ -1201,7 +1483,7 @@ class OpenStackCloud(
                " {error}".format(
                    service=service_key,
                    cloud=self.name,
                    region=self.config.region_name,
                    region=self.region_name,
                    error=str(e)))
        return endpoint

@ -1692,11 +1974,29 @@ class OpenStackCloud(
        """
        if get_extra is None:
            get_extra = self._extra_config['get_flavor_extra_specs']
        data = self._compute_client.get(
            '/flavors/detail', params=dict(is_public='None'),
            error_message="Error fetching flavor list")
        flavors = self._normalize_flavors(
            self._get_and_munchify('flavors', data))

        # This method is already cache-decorated. We don't want to call the
        # decorated inner-method, we want to call the method it is decorating.
        return connection.Connection.list_flavors.func(
            self, get_extra=get_extra)
        for flavor in flavors:
            if not flavor.extra_specs and get_extra:
                endpoint = "/flavors/{id}/os-extra_specs".format(
                    id=flavor.id)
                try:
                    data = self._compute_client.get(
                        endpoint,
                        error_message="Error fetching flavor extra specs")
                    flavor.extra_specs = self._get_and_munchify(
                        'extra_specs', data)
                except exc.OpenStackCloudHTTPError as e:
                    flavor.extra_specs = {}
                    self.log.debug(
                        'Fetching extra specs for flavor failed:'
                        ' %(msg)s', {'msg': str(e)})

        return flavors
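
The restored loop enriches each flavor with extra specs on a best-effort basis: an HTTP failure on one flavor degrades to empty specs rather than failing the whole listing. The pattern in isolation, with a hypothetical ``fetch`` standing in for the compute client call::

    import logging

    log = logging.getLogger('shade')

    def add_extra_specs(flavors, fetch):
        for flavor in flavors:
            if not flavor.get('extra_specs'):
                try:
                    flavor['extra_specs'] = fetch(flavor['id'])
                except IOError as e:
                    # Degrade to empty specs instead of failing the list.
                    flavor['extra_specs'] = {}
                    log.debug('extra specs fetch failed: %s', e)
        return flavors
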
    @_utils.cache_on_arguments(should_cache_fn=_no_pending_stacks)
    def list_stacks(self):

@ -1920,7 +2220,7 @@ class OpenStackCloud(
                     filters=None):
        error_msg = "Error fetching server list on {cloud}:{region}:".format(
            cloud=self.name,
            region=self.config.region_name)
            region=self.region_name)
        params = filters or {}
        if all_projects:
            params['all_tenants'] = True

@ -2749,10 +3049,32 @@ class OpenStackCloud(
            specs.
        :returns: A flavor ``munch.Munch``.
        """
        data = self._compute_client.get(
            '/flavors/{id}'.format(id=id),
            error_message="Error getting flavor with ID {id}".format(id=id)
        )
        flavor = self._normalize_flavor(
            self._get_and_munchify('flavor', data))

        if get_extra is None:
            get_extra = self._extra_config['get_flavor_extra_specs']
        return super(OpenStackCloud, self).get_flavor_by_id(
            id, get_extra=get_extra)

        if not flavor.extra_specs and get_extra:
            endpoint = "/flavors/{id}/os-extra_specs".format(
                id=flavor.id)
            try:
                data = self._compute_client.get(
                    endpoint,
                    error_message="Error fetching flavor extra specs")
                flavor.extra_specs = self._get_and_munchify(
                    'extra_specs', data)
            except exc.OpenStackCloudHTTPError as e:
                flavor.extra_specs = {}
                self.log.debug(
                    'Fetching extra specs for flavor failed:'
                    ' %(msg)s', {'msg': str(e)})

        return flavor

    def get_security_group(self, name_or_id, filters=None):
        """Get a security group by name or ID.

@ -4268,7 +4590,7 @@ class OpenStackCloud(

    def wait_for_image(self, image, timeout=3600):
        image_id = image['id']
        for count in utils.iterate_timeout(
        for count in _utils._iterate_timeout(
                timeout, "Timeout waiting for image to snapshot"):
            self.list_images.invalidate(self)
            image = self.get_image(image_id)

@ -4307,7 +4629,7 @@ class OpenStackCloud(
                self.delete_object(container=container, name=objname)

        if wait:
            for count in utils.iterate_timeout(
            for count in _utils._iterate_timeout(
                    timeout,
                    "Timeout waiting for the image to be deleted."):
                self._get_cache(None).invalidate()
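
From here down the change is mechanical: every ``openstack.utils.iterate_timeout`` call becomes ``shade._utils._iterate_timeout``, so shade no longer imports the helper from openstacksdk. A sketch of what such a generator conventionally looks like, reconstructed from the call sites above rather than from shade's actual ``_utils`` source::

    import time

    def iterate_timeout(timeout, message, wait=2):
        # Yield an attempt counter until ``timeout`` seconds elapse,
        # sleeping ``wait`` seconds between polls. shade raises its own
        # timeout exception here; RuntimeError is a stand-in.
        start = time.time()
        count = 0
        while timeout is None or (time.time() - start) < timeout:
            count += 1
            yield count
            time.sleep(wait)
        raise RuntimeError(message)
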
@ -4537,7 +4859,7 @@ class OpenStackCloud(
        if not wait:
            return self.get_image(response['image_id'])
        try:
            for count in utils.iterate_timeout(
            for count in _utils._iterate_timeout(
                    timeout,
                    "Timeout waiting for the image to finish."):
                image_obj = self.get_image(response['image_id'])

@ -4631,7 +4953,7 @@ class OpenStackCloud(
        if not wait:
            return image
        try:
            for count in utils.iterate_timeout(
            for count in _utils._iterate_timeout(
                    timeout,
                    "Timeout waiting for the image to finish."):
                image_obj = self.get_image(image.id)

@ -4671,7 +4993,7 @@ class OpenStackCloud(
        if wait:
            start = time.time()
            image_id = None
            for count in utils.iterate_timeout(
            for count in _utils._iterate_timeout(
                    timeout,
                    "Timeout waiting for the image to import."):
                try:

@ -4834,7 +5156,7 @@ class OpenStackCloud(

        if wait:
            vol_id = volume['id']
            for count in utils.iterate_timeout(
            for count in _utils._iterate_timeout(
                    timeout,
                    "Timeout waiting for the volume to be available."):
                volume = self.get_volume(vol_id)

@ -4921,7 +5243,7 @@ class OpenStackCloud(

        self.list_volumes.invalidate(self)
        if wait:
            for count in utils.iterate_timeout(
            for count in _utils._iterate_timeout(
                    timeout,
                    "Timeout waiting for the volume to be deleted."):

@ -5009,7 +5331,7 @@ class OpenStackCloud(
                    volume=volume['id'], server=server['id'])))

        if wait:
            for count in utils.iterate_timeout(
            for count in _utils._iterate_timeout(
                    timeout,
                    "Timeout waiting for volume %s to detach." % volume['id']):
                try:

@ -5077,7 +5399,7 @@ class OpenStackCloud(
                    server_id=server['id']))

        if wait:
            for count in utils.iterate_timeout(
            for count in _utils._iterate_timeout(
                    timeout,
                    "Timeout waiting for volume %s to attach." % volume['id']):
                try:

@ -5152,7 +5474,7 @@ class OpenStackCloud(
        snapshot = self._get_and_munchify('snapshot', data)
        if wait:
            snapshot_id = snapshot['id']
            for count in utils.iterate_timeout(
            for count in _utils._iterate_timeout(
                    timeout,
                    "Timeout waiting for the volume snapshot to be available."
                    ):

@ -5248,7 +5570,7 @@ class OpenStackCloud(
            backup_id = backup['id']
            msg = ("Timeout waiting for the volume backup {} to be "
                   "available".format(backup_id))
            for _ in utils.iterate_timeout(timeout, msg):
            for _ in _utils._iterate_timeout(timeout, msg):
                backup = self.get_volume_backup(backup_id)

                if backup['status'] == 'available':

@ -5339,7 +5661,7 @@ class OpenStackCloud(
            error_message=msg)
        if wait:
            msg = "Timeout waiting for the volume backup to be deleted."
            for count in utils.iterate_timeout(timeout, msg):
            for count in _utils._iterate_timeout(timeout, msg):
                if not self.get_volume_backup(volume_backup['id']):
                    break

@ -5369,7 +5691,7 @@ class OpenStackCloud(
            error_message="Error in deleting volume snapshot")

        if wait:
            for count in utils.iterate_timeout(
            for count in _utils._iterate_timeout(
                    timeout,
                    "Timeout waiting for the volume snapshot to be deleted."):
                if not self.get_volume_snapshot(volumesnapshot['id']):

@ -5670,7 +5992,7 @@ class OpenStackCloud(
        # if we've provided a port as a parameter
        if wait:
            try:
                for count in utils.iterate_timeout(
                for count in _utils._iterate_timeout(
                        timeout,
                        "Timeout waiting for the floating IP"
                        " to be ACTIVE",

@ -5876,7 +6198,7 @@ class OpenStackCloud(
        if wait:
            # Wait for the address to be assigned to the server
            server_id = server['id']
            for _ in utils.iterate_timeout(
            for _ in _utils._iterate_timeout(
                    timeout,
                    "Timeout waiting for the floating IP to be attached.",
                    wait=self._SERVER_AGE):

@ -5908,7 +6230,7 @@ class OpenStackCloud(
            timeout = self._PORT_AGE * 2
        else:
            timeout = None
        for count in utils.iterate_timeout(
        for count in _utils._iterate_timeout(
                timeout,
                "Timeout waiting for port to show up in list",
                wait=self._PORT_AGE):

@ -6339,7 +6661,7 @@ class OpenStackCloud(
                    'Volume {boot_volume} is not a valid volume'
                    ' in {cloud}:{region}'.format(
                        boot_volume=boot_volume,
                        cloud=self.name, region=self.config.region_name))
                        cloud=self.name, region=self.region_name))
            block_mapping = {
                'boot_index': '0',
                'delete_on_termination': terminate_volume,

@ -6360,7 +6682,7 @@ class OpenStackCloud(
                    'Image {image} is not a valid image in'
                    ' {cloud}:{region}'.format(
                        image=image,
                        cloud=self.name, region=self.config.region_name))
                        cloud=self.name, region=self.region_name))

            block_mapping = {
                'boot_index': '0',

@ -6390,7 +6712,7 @@ class OpenStackCloud(
                    'Volume {volume} is not a valid volume'
                    ' in {cloud}:{region}'.format(
                        volume=volume,
                        cloud=self.name, region=self.config.region_name))
                        cloud=self.name, region=self.region_name))
            block_mapping = {
                'boot_index': '-1',
                'delete_on_termination': False,

@ -6582,7 +6904,7 @@ class OpenStackCloud(
                    'Network {network} is not a valid network in'
                    ' {cloud}:{region}'.format(
                        network=network,
                        cloud=self.name, region=self.config.region_name))
                        cloud=self.name, region=self.region_name))
            nics.append({'net-id': network_obj['id']})

        kwargs['nics'] = nics

@ -6694,7 +7016,7 @@ class OpenStackCloud(
        start_time = time.time()

        # There is no point in iterating faster than the list_servers cache
        for count in utils.iterate_timeout(
        for count in _utils._iterate_timeout(
                timeout,
                timeout_message,
                # if _SERVER_AGE is 0 we still want to wait a bit

@ -6784,7 +7106,7 @@ class OpenStackCloud(
            self._normalize_server(server), bare=bare, detailed=detailed)

        admin_pass = server.get('adminPass') or admin_pass
        for count in utils.iterate_timeout(
        for count in _utils._iterate_timeout(
                timeout,
                "Timeout waiting for server {0} to "
                "rebuild.".format(server_id),

@ -6940,7 +7262,7 @@ class OpenStackCloud(
                and self.get_volumes(server)):
            reset_volume_cache = True

        for count in utils.iterate_timeout(
        for count in _utils._iterate_timeout(
                timeout,
                "Timed out waiting for server to get deleted.",
                # if _SERVER_AGE is 0 we still want to wait a bit
@ -7267,6 +7589,118 @@ class OpenStackCloud(
            endpoint, filename, headers,
            file_size, segment_size, use_slo)

    def _upload_object(self, endpoint, filename, headers):
        return self._object_store_client.put(
            endpoint, headers=headers, data=open(filename, 'rb'))

    def _get_file_segments(self, endpoint, filename, file_size, segment_size):
        # Use an ordered dict here so that testing can replicate things
        segments = collections.OrderedDict()
        for (index, offset) in enumerate(range(0, file_size, segment_size)):
            remaining = file_size - (index * segment_size)
            segment = _utils.FileSegment(
                filename, offset,
                segment_size if segment_size < remaining else remaining)
            name = '{endpoint}/{index:0>6}'.format(
                endpoint=endpoint, index=index)
            segments[name] = segment
        return segments
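
``_get_file_segments`` slices the source file into fixed-size chunks whose zero-padded names sort in upload order. The naming and sizing math, worked for a 4200-byte file with 1000-byte segments::

    file_size, segment_size = 4200, 1000
    for index, offset in enumerate(range(0, file_size, segment_size)):
        remaining = file_size - index * segment_size
        length = segment_size if segment_size < remaining else remaining
        print('container/object/{index:0>6} offset={o} length={l}'.format(
            index=index, o=offset, l=length))
    # container/object/000000 offset=0 length=1000
    # ...
    # container/object/000004 offset=4000 length=200
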
    def _object_name_from_url(self, url):
        '''Get container_name/object_name from the full URL called.

        Remove the Swift endpoint from the front of the URL, and remove
        the leading / that will be left behind.'''
        endpoint = self._object_store_client.get_endpoint()
        object_name = url.replace(endpoint, '')
        if object_name.startswith('/'):
            object_name = object_name[1:]
        return object_name

    def _add_etag_to_manifest(self, segment_results, manifest):
        for result in segment_results:
            if 'Etag' not in result.headers:
                continue
            name = self._object_name_from_url(result.url)
            for entry in manifest:
                if entry['path'] == '/{name}'.format(name=name):
                    entry['etag'] = result.headers['Etag']

    def _upload_large_object(
            self, endpoint, filename,
            headers, file_size, segment_size, use_slo):
        # If the object is big, we need to break it up into segments that
        # are no larger than segment_size, upload each of them individually
        # and then upload a manifest object. The segments can be uploaded in
        # parallel, so we'll use the async feature of the TaskManager.

        segment_futures = []
        segment_results = []
        retry_results = []
        retry_futures = []
        manifest = []

        # Get an OrderedDict with keys being the swift location for the
        # segment, the value a FileSegment file-like object that is a
        # slice of the data for the segment.
        segments = self._get_file_segments(
            endpoint, filename, file_size, segment_size)

        # Schedule the segments for upload
        for name, segment in segments.items():
            # Async call to put - schedules execution and returns a future
            segment_future = self._object_store_client.put(
                name, headers=headers, data=segment, run_async=True)
            segment_futures.append(segment_future)
            # TODO(mordred) Collect etags from results to add to this manifest
            # dict. Then sort the list of dicts by path.
            manifest.append(dict(
                path='/{name}'.format(name=name),
                size_bytes=segment.length))

        # Try once and collect failed results to retry
        segment_results, retry_results = task_manager.wait_for_futures(
            segment_futures, raise_on_error=False)

        self._add_etag_to_manifest(segment_results, manifest)

        for result in retry_results:
            # Grab the FileSegment for the failed upload so we can retry
            name = self._object_name_from_url(result.url)
            segment = segments[name]
            segment.seek(0)
            # Async call to put - schedules execution and returns a future
            segment_future = self._object_store_client.put(
                name, headers=headers, data=segment, run_async=True)
            # TODO(mordred) Collect etags from results to add to this manifest
            # dict. Then sort the list of dicts by path.
            retry_futures.append(segment_future)

        # If any segments fail the second time, just throw the error
        segment_results, retry_results = task_manager.wait_for_futures(
            retry_futures, raise_on_error=True)

        self._add_etag_to_manifest(segment_results, manifest)

        if use_slo:
            return self._finish_large_object_slo(endpoint, headers, manifest)
        else:
            return self._finish_large_object_dlo(endpoint, headers)

    def _finish_large_object_slo(self, endpoint, headers, manifest):
        # TODO(mordred) send an etag of the manifest, which is the md5sum
        # of the concatenation of the etags of the results
        headers = headers.copy()
        return self._object_store_client.put(
            endpoint,
            params={'multipart-manifest': 'put'},
            headers=headers, data=json.dumps(manifest))

    def _finish_large_object_dlo(self, endpoint, headers):
        headers = headers.copy()
        headers['X-Object-Manifest'] = endpoint
        return self._object_store_client.put(endpoint, headers=headers)
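
The two finishers differ only in how the manifest is expressed: SLO uploads a JSON body under ``multipart-manifest=put`` naming each segment explicitly, while DLO uploads an empty object whose ``X-Object-Manifest`` header names the shared segment prefix. The SLO manifest shape (values made up)::

    import json

    manifest = [
        {'path': '/container/object/000000',
         'size_bytes': 1000,
         'etag': 'd41d8cd98f00b204e9800998ecf8427e'},
        {'path': '/container/object/000001',
         'size_bytes': 200,
         'etag': '693e9af84d3dfcc71e640e005bdc5e2e'},
    ]
    print(json.dumps(manifest))
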
    def update_object(self, container, name, metadata=None, **headers):
        """Update the metadata of an object

@ -8693,7 +9127,7 @@ class OpenStackCloud(
        with _utils.shade_exceptions("Error inspecting machine"):
            machine = self.node_set_provision_state(machine['uuid'], 'inspect')
            if wait:
                for count in utils.iterate_timeout(
                for count in _utils._iterate_timeout(
                        timeout,
                        "Timeout waiting for node transition to "
                        "target state of 'inspect'"):

@ -8812,7 +9246,7 @@ class OpenStackCloud(
        with _utils.shade_exceptions(
                "Error transitioning node to available state"):
            if wait:
                for count in utils.iterate_timeout(
                for count in _utils._iterate_timeout(
                        timeout,
                        "Timeout waiting for node transition to "
                        "available state"):

@ -8848,7 +9282,7 @@ class OpenStackCloud(
            # Note(TheJulia): We need to wait for the lock to clear
            # before we attempt to set the machine into provide state
            # which allows for the transition to available.
            for count in utils.iterate_timeout(
            for count in _utils._iterate_timeout(
                    lock_timeout,
                    "Timeout waiting for reservation to clear "
                    "before setting provide state"):

@ -8947,7 +9381,7 @@ class OpenStackCloud(
            microversion=version)

        if wait:
            for count in utils.iterate_timeout(
            for count in _utils._iterate_timeout(
                    timeout,
                    "Timeout waiting for machine to be deleted"):
                if not self.get_machine(uuid):

@ -9188,7 +9622,7 @@ class OpenStackCloud(
            error_message=msg,
            microversion=version)
        if wait:
            for count in utils.iterate_timeout(
            for count in _utils._iterate_timeout(
                    timeout,
                    "Timeout waiting for node transition to "
                    "target state of '%s'" % state):

@ -9412,7 +9846,7 @@ class OpenStackCloud(
        else:
            msg = 'Waiting for lock to be released for node {node}'.format(
                node=node['uuid'])
            for count in utils.iterate_timeout(timeout, msg, 2):
            for count in _utils._iterate_timeout(timeout, msg, 2):
                current_node = self.get_machine(node['uuid'])
                if current_node['reservation'] is None:
                    return

@ -10560,7 +10994,7 @@ class OpenStackCloud(
        self._identity_client.put(url, error_message=error_msg)

        if wait:
            for count in utils.iterate_timeout(
            for count in _utils._iterate_timeout(
                    timeout,
                    "Timeout waiting for role to be granted"):
                if self.list_role_assignments(filters=filters):

@ -10639,7 +11073,7 @@ class OpenStackCloud(
        self._identity_client.delete(url, error_message=error_msg)

        if wait:
            for count in utils.iterate_timeout(
            for count in _utils._iterate_timeout(
                    timeout,
                    "Timeout waiting for role to be revoked"):
                if not self.list_role_assignments(filters=filters):
334
shade/task_manager.py
Normal file
@ -0,0 +1,334 @@
# Copyright (C) 2011-2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.

import abc
import concurrent.futures
import sys
import threading
import time
import types

import keystoneauth1.exceptions
import six

from shade import _log
from shade import exc
from shade import meta


def _is_listlike(obj):
    # NOTE(Shrews): Since the client API might decide to subclass one
    # of these result types, we use isinstance() here instead of type().
    return (
        isinstance(obj, list) or
        isinstance(obj, types.GeneratorType))


def _is_objlike(obj):
    # NOTE(Shrews): Since the client API might decide to subclass one
    # of these result types, we use isinstance() here instead of type().
    return (
        not isinstance(obj, bool) and
        not isinstance(obj, int) and
        not isinstance(obj, float) and
        not isinstance(obj, six.string_types) and
        not isinstance(obj, set) and
        not isinstance(obj, tuple))
@six.add_metaclass(abc.ABCMeta)
class BaseTask(object):
    """Represent a task to be performed on an OpenStack Cloud.

    Some consumers need to inject things like rate-limiting or auditing
    around each external REST interaction. Task provides an interface
    to encapsulate each such interaction. Also, although shade itself
    operates normally in a single-threaded direct action manner, consuming
    programs may provide a multi-threaded TaskManager themselves. For that
    reason, Task uses threading events to ensure appropriate wait conditions.
    These should be a no-op in single-threaded applications.

    A consumer is expected to overload the main method.

    :param dict kw: Any args that are expected to be passed to something in
                    the main payload at execution time.
    """

    def __init__(self, **kw):
        self._exception = None
        self._traceback = None
        self._result = None
        self._response = None
        self._finished = threading.Event()
        self.run_async = False
        self.args = kw
        self.name = type(self).__name__

    @abc.abstractmethod
    def main(self, client):
        """ Override this method with the actual workload to be performed """

    def done(self, result):
        self._result = result
        self._finished.set()

    def exception(self, e, tb):
        self._exception = e
        self._traceback = tb
        self._finished.set()

    def wait(self, raw=False):
        self._finished.wait()

        if self._exception:
            six.reraise(type(self._exception), self._exception,
                        self._traceback)

        return self._result

    def run(self, client):
        self._client = client
        try:
            # Retry one time if we get a retriable connection failure
            try:
                # Keep time for connection retrying logging
                start = time.time()
                self.done(self.main(client))
            except keystoneauth1.exceptions.RetriableConnectionFailure as e:
                end = time.time()
                dt = end - start
                if client.region_name:
                    client.log.debug(str(e))
                    client.log.debug(
                        "Connection failure on %(cloud)s:%(region)s"
                        " for %(name)s after %(secs)s seconds, retrying",
                        {'cloud': client.name,
                         'region': client.region_name,
                         'secs': dt,
                         'name': self.name})
                else:
                    client.log.debug(
                        "Connection failure on %(cloud)s for %(name)s after"
                        " %(secs)s seconds, retrying",
                        {'cloud': client.name, 'name': self.name, 'secs': dt})
                self.done(self.main(client))
            except Exception:
                raise
        except Exception as e:
            self.exception(e, sys.exc_info()[2])


class Task(BaseTask):
    """ Shade specific additions to the BaseTask Interface. """

    def wait(self, raw=False):
        super(Task, self).wait()

        if raw:
            # Do NOT convert the result.
            return self._result

        if _is_listlike(self._result):
            return meta.obj_list_to_munch(self._result)
        elif _is_objlike(self._result):
            return meta.obj_to_munch(self._result)
        else:
            return self._result
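
A consumer-defined task overloads ``main``; for example (illustrative only, the nova client attribute is a stand-in)::

    class ServerList(Task):
        def main(self, client):
            # ``client`` is whatever object was handed to run(); shade
            # passes itself, external consumers pass their own client.
            return client.nova_client.servers.list(**self.args)

    # manager.submit_task(ServerList(detailed=True)) then returns the
    # munchified server list via Task.wait().
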
class RequestTask(BaseTask):
    """ Extensions to the Shade Tasks to handle raw requests """

    # It's totally legit for calls to not return things
    result_key = None

    # keystoneauth1 throws keystoneauth1.exceptions.http.HttpError on !200
    def done(self, result):
        self._response = result

        try:
            result_json = self._response.json()
        except Exception as e:
            result_json = self._response.text
            self._client.log.debug(
                'Could not decode json in response: %(e)s', {'e': str(e)})
            self._client.log.debug(result_json)

        if self.result_key:
            self._result = result_json[self.result_key]
        else:
            self._result = result_json

        self._request_id = self._response.headers.get('x-openstack-request-id')
        self._finished.set()

    def wait(self, raw=False):
        super(RequestTask, self).wait()

        if raw:
            # Do NOT convert the result.
            return self._result

        if _is_listlike(self._result):
            return meta.obj_list_to_munch(
                self._result, request_id=self._request_id)
        elif _is_objlike(self._result):
            return meta.obj_to_munch(self._result, request_id=self._request_id)
        return self._result


def _result_filter_cb(result):
    return result
def generate_task_class(method, name, result_filter_cb):
    if name is None:
        if callable(method):
            name = method.__name__
        else:
            name = method

    class RunTask(Task):
        def __init__(self, **kw):
            super(RunTask, self).__init__(**kw)
            self.name = name
            self._method = method

        def wait(self, raw=False):
            super(RunTask, self).wait()

            if raw:
                # Do NOT convert the result.
                return self._result
            return result_filter_cb(self._result)

        def main(self, client):
            if callable(self._method):
                return method(**self.args)
            else:
                meth = getattr(client, self._method)
                return meth(**self.args)
    return RunTask
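
What the factory produces, exercised against a throwaway client (the client class and its method are made up; this assumes the module context above)::

    class FakeClient(object):
        def double(self, x):
            return 2 * x

    TaskClass = generate_task_class('double', None, _result_filter_cb)
    task = TaskClass(x=21)
    task.run(FakeClient())   # resolves 'double' on the client by name
    print(task.wait())       # 42
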
class TaskManager(object):
    log = _log.setup_logging('shade.task_manager')

    def __init__(
            self, client, name, result_filter_cb=None, workers=5, **kwargs):
        self.name = name
        self._client = client
        self._executor = concurrent.futures.ThreadPoolExecutor(
            max_workers=workers)
        if not result_filter_cb:
            self._result_filter_cb = _result_filter_cb
        else:
            self._result_filter_cb = result_filter_cb

    def set_client(self, client):
        self._client = client

    def stop(self):
        """ This is a direct action passthrough TaskManager """
        self._executor.shutdown(wait=True)

    def run(self):
        """ This is a direct action passthrough TaskManager """
        pass

    def submit_task(self, task, raw=False):
        """Submit and execute the given task.

        :param task: The task to execute.
        :param bool raw: If True, return the raw result as received from the
            underlying client call.
        """
        return self.run_task(task=task, raw=raw)

    def _run_task_async(self, task, raw=False):
        self.log.debug(
            "Manager %s submitting task %s", self.name, task.name)
        return self._executor.submit(self._run_task, task, raw=raw)

    def run_task(self, task, raw=False):
        if hasattr(task, 'run_async') and task.run_async:
            return self._run_task_async(task, raw=raw)
        else:
            return self._run_task(task, raw=raw)

    def _run_task(self, task, raw=False):
        self.log.debug(
            "Manager %s running task %s", self.name, task.name)
        start = time.time()
        task.run(self._client)
        end = time.time()
        dt = end - start
        self.log.debug(
            "Manager %s ran task %s in %ss", self.name, task.name, dt)

        self.post_run_task(dt, task)

        return task.wait(raw)

    def post_run_task(self, elapsed_time, task):
        pass

    # Backwards compatibility
    submitTask = submit_task

    def submit_function(
            self, method, name=None, result_filter_cb=None, **kwargs):
        """ Allows submitting an arbitrary method for work.

        :param method: Method to run in the TaskManager. Can be either the
            name of a method to find on self.client, or a callable.
        """
        if not result_filter_cb:
            result_filter_cb = self._result_filter_cb

        task_class = generate_task_class(method, name, result_filter_cb)

        return self.submit_task(task_class(**kwargs))
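
The empty ``post_run_task`` hook is the seam consumers like nodepool use to inject rate limiting around external calls; a hedged sketch of such a subclass (the 0.5-second budget is arbitrary)::

    class RateLimitingTaskManager(TaskManager):
        rate = 0.5  # minimum seconds between external REST calls

        def post_run_task(self, elapsed_time, task):
            # Sleep off whatever part of the budget the task left unused.
            if elapsed_time < self.rate:
                time.sleep(self.rate - elapsed_time)
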
def wait_for_futures(futures, raise_on_error=True, log=None):
    '''Collect results or failures from a list of running future tasks.'''

    results = []
    retries = []

    # Check on each result as its thread finishes
    for completed in concurrent.futures.as_completed(futures):
        try:
            result = completed.result()
            # We have to do this here because munch_response doesn't
            # get called on async job results
            exc.raise_from_response(result)
            results.append(result)
        except (keystoneauth1.exceptions.RetriableConnectionFailure,
                exc.OpenStackCloudException) as e:
            if log:
                log.debug(
                    "Exception processing async task: {e}".format(
                        e=str(e)),
                    exc_info=True)
            # If we get an exception, put the result into a list so we
            # can try again
            if raise_on_error:
                raise
            else:
                retries.append(result)
    return results, retries
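
``wait_for_futures`` is the gather half of the scatter/gather pattern ``_upload_large_object`` uses: collect what succeeded, remember what failed, retry once, then raise. The same pattern against bare ``concurrent.futures``, shade-free and runnable::

    import concurrent.futures

    def flaky(n, _failed=set()):
        # Fail the first attempt at n == 3, succeed on the retry.
        if n == 3 and 3 not in _failed:
            _failed.add(3)
            raise IOError('transient failure')
        return n * n

    executor = concurrent.futures.ThreadPoolExecutor(max_workers=5)
    pending = {executor.submit(flaky, n): n for n in range(5)}

    results, retry_args = [], []
    for done in concurrent.futures.as_completed(pending):
        try:
            results.append(done.result())
        except IOError:
            retry_args.append(pending[done])  # remember what to retry

    for n in retry_args:
        # Second and last pass; let errors propagate this time.
        results.append(executor.submit(flaky, n).result())

    print(sorted(results))  # [0, 1, 4, 9, 16]
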
@ -12,7 +12,7 @@

import os

import openstack.config as occ
import os_client_config as occ

import shade
from shade.tests import base

@ -20,11 +20,10 @@ Functional tests for `shade` compute methods.
from fixtures import TimeoutException
import six

from openstack import utils

from shade import exc
from shade.tests.functional import base
from shade.tests.functional.util import pick_flavor
from shade import _utils


class TestCompute(base.BaseFunctionalTestCase):

@ -292,7 +291,7 @@ class TestCompute(base.BaseFunctionalTestCase):
        # Volumes do not show up as unattached for a bit immediately after
        # deleting a server that had had a volume attached. Yay for eventual
        # consistency!
        for count in utils.iterate_timeout(
        for count in _utils._iterate_timeout(
                60,
                'Timeout waiting for volume {volume_id} to detach'.format(
                    volume_id=volume_id)):

@ -21,9 +21,9 @@ Functional tests for floating IP resource.

import pprint

from openstack import utils
from testtools import content

from shade import _utils
from shade import meta
from shade.exc import OpenStackCloudException
from shade.tests.functional import base

@ -193,7 +193,7 @@ class TestFloatingIP(base.BaseFunctionalTestCase):
        # ToDo: remove the following iteration when create_server waits for
        # the IP to be attached
        ip = None
        for _ in utils.iterate_timeout(
        for _ in _utils._iterate_timeout(
                self.timeout, "Timeout waiting for IP address to be attached"):
            ip = meta.get_server_external_ipv4(self.user_cloud, new_server)
            if ip is not None:

@ -213,7 +213,7 @@ class TestFloatingIP(base.BaseFunctionalTestCase):
        # ToDo: remove the following iteration when create_server waits for
        # the IP to be attached
        ip = None
        for _ in utils.iterate_timeout(
        for _ in _utils._iterate_timeout(
                self.timeout, "Timeout waiting for IP address to be attached"):
            ip = meta.get_server_external_ipv4(self.user_cloud, new_server)
            if ip is not None:

@ -18,9 +18,9 @@ Functional tests for `shade` block storage methods.
"""

from fixtures import TimeoutException
from openstack import utils
from testtools import content

from shade import _utils
from shade import exc
from shade.tests.functional import base

@ -107,7 +107,7 @@ class TestVolume(base.BaseFunctionalTestCase):
        for v in volume:
            self.user_cloud.delete_volume(v, wait=False)
        try:
            for count in utils.iterate_timeout(
            for count in _utils._iterate_timeout(
                    180, "Timeout waiting for volume cleanup"):
                found = False
                for existing in self.user_cloud.list_volumes():
@ -20,7 +20,7 @@ import uuid

import fixtures
import mock
import os
import openstack.config as occ
import os_client_config as occ
from requests import structures
from requests_mock.contrib import fixture as rm_fixture
from six.moves import urllib

@ -140,7 +140,7 @@ class TestCase(BaseTestCase):

        super(TestCase, self).setUp(cloud_config_fixture=cloud_config_fixture)
        self.session_fixture = self.useFixture(fixtures.MonkeyPatch(
            'openstack.config.cloud_region.CloudRegion.get_session',
            'os_client_config.cloud_config.CloudConfig.get_session',
            mock.Mock()))
38
shade/tests/unit/test__adapter.py
Normal file
@ -0,0 +1,38 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from testscenarios import load_tests_apply_scenarios as load_tests  # noqa

from shade import _adapter
from shade.tests.unit import base


class TestExtractName(base.TestCase):

    scenarios = [
        ('slash_servers_bare', dict(url='/servers', parts=['servers'])),
        ('slash_servers_arg', dict(url='/servers/1', parts=['servers'])),
        ('servers_bare', dict(url='servers', parts=['servers'])),
        ('servers_arg', dict(url='servers/1', parts=['servers'])),
        ('networks_bare', dict(url='/v2.0/networks', parts=['networks'])),
        ('networks_arg', dict(url='/v2.0/networks/1', parts=['networks'])),
        ('tokens', dict(url='/v3/tokens', parts=['tokens'])),
        ('discovery', dict(url='/', parts=['discovery'])),
        ('secgroups', dict(
            url='/servers/1/os-security-groups',
            parts=['servers', 'os-security-groups'])),
    ]

    def test_extract_name(self):

        results = _adapter.extract_name(self.url)
        self.assertEqual(self.parts, results)
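
The scenario table above pins down the observable behavior: strip version segments, drop numeric identifiers, keep resource names, and fall back to ``['discovery']`` for the bare root. A hedged reconstruction that satisfies every scenario (shade's real ``_adapter.extract_name`` may differ in edge cases)::

    def extract_name(url):
        parts = [p for p in url.split('/') if p]
        # Drop API version segments such as 'v2.0' or 'v3'.
        parts = [p for p in parts
                 if not (p.startswith('v')
                         and p[1:].replace('.', '').isdigit())]
        # Keep resource names; drop purely numeric identifiers.
        names = [p for p in parts if not p.isdigit()]
        return names or ['discovery']
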
385
shade/tests/unit/test__utils.py
Normal file
@ -0,0 +1,385 @@
# -*- coding: utf-8 -*-

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import random
import string
import tempfile
from uuid import uuid4

import mock
import testtools

from shade import _utils
from shade import exc
from shade.tests.unit import base


RANGE_DATA = [
    dict(id=1, key1=1, key2=5),
    dict(id=2, key1=1, key2=20),
    dict(id=3, key1=2, key2=10),
    dict(id=4, key1=2, key2=30),
    dict(id=5, key1=3, key2=40),
    dict(id=6, key1=3, key2=40),
]


class TestUtils(base.TestCase):

    def test__filter_list_name_or_id(self):
        el1 = dict(id=100, name='donald')
        el2 = dict(id=200, name='pluto')
        data = [el1, el2]
        ret = _utils._filter_list(data, 'donald', None)
        self.assertEqual([el1], ret)

    def test__filter_list_name_or_id_special(self):
        el1 = dict(id=100, name='donald')
        el2 = dict(id=200, name='pluto[2017-01-10]')
        data = [el1, el2]
        ret = _utils._filter_list(data, 'pluto[2017-01-10]', None)
        self.assertEqual([el2], ret)

    def test__filter_list_name_or_id_partial_bad(self):
        el1 = dict(id=100, name='donald')
        el2 = dict(id=200, name='pluto[2017-01-10]')
        data = [el1, el2]
        ret = _utils._filter_list(data, 'pluto[2017-01]', None)
        self.assertEqual([], ret)

    def test__filter_list_name_or_id_partial_glob(self):
        el1 = dict(id=100, name='donald')
        el2 = dict(id=200, name='pluto[2017-01-10]')
        data = [el1, el2]
        ret = _utils._filter_list(data, 'pluto*', None)
        self.assertEqual([el2], ret)

    def test__filter_list_name_or_id_non_glob_glob(self):
        el1 = dict(id=100, name='donald')
        el2 = dict(id=200, name='pluto[2017-01-10]')
        data = [el1, el2]
        ret = _utils._filter_list(data, 'pluto', None)
        self.assertEqual([], ret)

    def test__filter_list_name_or_id_glob(self):
        el1 = dict(id=100, name='donald')
        el2 = dict(id=200, name='pluto')
        el3 = dict(id=200, name='pluto-2')
        data = [el1, el2, el3]
        ret = _utils._filter_list(data, 'pluto*', None)
        self.assertEqual([el2, el3], ret)

    def test__filter_list_name_or_id_glob_not_found(self):
        el1 = dict(id=100, name='donald')
        el2 = dict(id=200, name='pluto')
        el3 = dict(id=200, name='pluto-2')
        data = [el1, el2, el3]
        ret = _utils._filter_list(data, 'q*', None)
        self.assertEqual([], ret)

    def test__filter_list_unicode(self):
        el1 = dict(id=100, name=u'中文', last='duck',
                   other=dict(category='duck', financial=dict(status='poor')))
        el2 = dict(id=200, name=u'中文', last='trump',
                   other=dict(category='human', financial=dict(status='rich')))
        el3 = dict(id=300, name='donald', last='ronald mac',
                   other=dict(category='clown', financial=dict(status='rich')))
        data = [el1, el2, el3]
        ret = _utils._filter_list(
            data, u'中文',
            {'other': {
                'financial': {'status': 'rich'}
            }})
        self.assertEqual([el2], ret)

    def test__filter_list_filter(self):
        el1 = dict(id=100, name='donald', other='duck')
        el2 = dict(id=200, name='donald', other='trump')
        data = [el1, el2]
        ret = _utils._filter_list(data, 'donald', {'other': 'duck'})
        self.assertEqual([el1], ret)

    def test__filter_list_filter_jmespath(self):
        el1 = dict(id=100, name='donald', other='duck')
        el2 = dict(id=200, name='donald', other='trump')
        data = [el1, el2]
        ret = _utils._filter_list(data, 'donald', "[?other == `duck`]")
        self.assertEqual([el1], ret)

    def test__filter_list_dict1(self):
        el1 = dict(id=100, name='donald', last='duck',
                   other=dict(category='duck'))
        el2 = dict(id=200, name='donald', last='trump',
                   other=dict(category='human'))
        el3 = dict(id=300, name='donald', last='ronald mac',
                   other=dict(category='clown'))
        data = [el1, el2, el3]
        ret = _utils._filter_list(
            data, 'donald', {'other': {'category': 'clown'}})
        self.assertEqual([el3], ret)

    def test__filter_list_dict2(self):
        el1 = dict(id=100, name='donald', last='duck',
                   other=dict(category='duck', financial=dict(status='poor')))
        el2 = dict(id=200, name='donald', last='trump',
                   other=dict(category='human', financial=dict(status='rich')))
        el3 = dict(id=300, name='donald', last='ronald mac',
                   other=dict(category='clown', financial=dict(status='rich')))
        data = [el1, el2, el3]
        ret = _utils._filter_list(
            data, 'donald',
            {'other': {
                'financial': {'status': 'rich'}
            }})
        self.assertEqual([el2, el3], ret)

    def test_safe_dict_min_ints(self):
        """Test integer comparison"""
        data = [{'f1': 3}, {'f1': 2}, {'f1': 1}]
        retval = _utils.safe_dict_min('f1', data)
        self.assertEqual(1, retval)

    def test_safe_dict_min_strs(self):
        """Test integer as strings comparison"""
        data = [{'f1': '3'}, {'f1': '2'}, {'f1': '1'}]
        retval = _utils.safe_dict_min('f1', data)
        self.assertEqual(1, retval)

    def test_safe_dict_min_None(self):
        """Test None values"""
        data = [{'f1': 3}, {'f1': None}, {'f1': 1}]
        retval = _utils.safe_dict_min('f1', data)
        self.assertEqual(1, retval)

    def test_safe_dict_min_key_missing(self):
        """Test missing key for an entry still works"""
        data = [{'f1': 3}, {'x': 2}, {'f1': 1}]
        retval = _utils.safe_dict_min('f1', data)
        self.assertEqual(1, retval)

    def test_safe_dict_min_key_not_found(self):
        """Test key not found in any elements returns None"""
        data = [{'f1': 3}, {'f1': 2}, {'f1': 1}]
        retval = _utils.safe_dict_min('doesnotexist', data)
        self.assertIsNone(retval)

    def test_safe_dict_min_not_int(self):
        """Test non-integer key value raises OSCE"""
        data = [{'f1': 3}, {'f1': "aaa"}, {'f1': 1}]
        with testtools.ExpectedException(
            exc.OpenStackCloudException,
            "Search for minimum value failed. "
            "Value for f1 is not an integer: aaa"
        ):
            _utils.safe_dict_min('f1', data)

    def test_safe_dict_max_ints(self):
        """Test integer comparison"""
        data = [{'f1': 3}, {'f1': 2}, {'f1': 1}]
        retval = _utils.safe_dict_max('f1', data)
        self.assertEqual(3, retval)

    def test_safe_dict_max_strs(self):
        """Test integer as strings comparison"""
        data = [{'f1': '3'}, {'f1': '2'}, {'f1': '1'}]
        retval = _utils.safe_dict_max('f1', data)
        self.assertEqual(3, retval)

    def test_safe_dict_max_None(self):
        """Test None values"""
        data = [{'f1': 3}, {'f1': None}, {'f1': 1}]
        retval = _utils.safe_dict_max('f1', data)
        self.assertEqual(3, retval)

    def test_safe_dict_max_key_missing(self):
        """Test missing key for an entry still works"""
        data = [{'f1': 3}, {'x': 2}, {'f1': 1}]
        retval = _utils.safe_dict_max('f1', data)
        self.assertEqual(3, retval)

    def test_safe_dict_max_key_not_found(self):
        """Test key not found in any elements returns None"""
        data = [{'f1': 3}, {'f1': 2}, {'f1': 1}]
        retval = _utils.safe_dict_max('doesnotexist', data)
        self.assertIsNone(retval)

    def test_safe_dict_max_not_int(self):
        """Test non-integer key value raises OSCE"""
        data = [{'f1': 3}, {'f1': "aaa"}, {'f1': 1}]
        with testtools.ExpectedException(
            exc.OpenStackCloudException,
            "Search for maximum value failed. "
            "Value for f1 is not an integer: aaa"
        ):
            _utils.safe_dict_max('f1', data)

    def test_parse_range_None(self):
        self.assertIsNone(_utils.parse_range(None))

    def test_parse_range_invalid(self):
        self.assertIsNone(_utils.parse_range("<invalid"))

    def test_parse_range_int_only(self):
        retval = _utils.parse_range("1024")
        self.assertIsInstance(retval, tuple)
        self.assertIsNone(retval[0])
        self.assertEqual(1024, retval[1])

    def test_parse_range_lt(self):
        retval = _utils.parse_range("<1024")
        self.assertIsInstance(retval, tuple)
        self.assertEqual("<", retval[0])
        self.assertEqual(1024, retval[1])

    def test_parse_range_gt(self):
        retval = _utils.parse_range(">1024")
        self.assertIsInstance(retval, tuple)
        self.assertEqual(">", retval[0])
        self.assertEqual(1024, retval[1])

    def test_parse_range_le(self):
        retval = _utils.parse_range("<=1024")
        self.assertIsInstance(retval, tuple)
        self.assertEqual("<=", retval[0])
        self.assertEqual(1024, retval[1])

    def test_parse_range_ge(self):
        retval = _utils.parse_range(">=1024")
        self.assertIsInstance(retval, tuple)
        self.assertEqual(">=", retval[0])
        self.assertEqual(1024, retval[1])

    def test_range_filter_min(self):
        retval = _utils.range_filter(RANGE_DATA, "key1", "min")
        self.assertIsInstance(retval, list)
        self.assertEqual(2, len(retval))
        self.assertEqual(RANGE_DATA[:2], retval)

    def test_range_filter_max(self):
        retval = _utils.range_filter(RANGE_DATA, "key1", "max")
        self.assertIsInstance(retval, list)
        self.assertEqual(2, len(retval))
        self.assertEqual(RANGE_DATA[-2:], retval)

    def test_range_filter_range(self):
        retval = _utils.range_filter(RANGE_DATA, "key1", "<3")
        self.assertIsInstance(retval, list)
        self.assertEqual(4, len(retval))
        self.assertEqual(RANGE_DATA[:4], retval)

    def test_range_filter_exact(self):
        retval = _utils.range_filter(RANGE_DATA, "key1", "2")
        self.assertIsInstance(retval, list)
        self.assertEqual(2, len(retval))
        self.assertEqual(RANGE_DATA[2:4], retval)

    def test_range_filter_invalid_int(self):
        with testtools.ExpectedException(
            exc.OpenStackCloudException,
            "Invalid range value: <1A0"
        ):
            _utils.range_filter(RANGE_DATA, "key1", "<1A0")

    def test_range_filter_invalid_op(self):
        with testtools.ExpectedException(
            exc.OpenStackCloudException,
            "Invalid range value: <>100"
        ):
            _utils.range_filter(RANGE_DATA, "key1", "<>100")

    def test_file_segment(self):
        file_size = 4200
        content = ''.join(random.SystemRandom().choice(
            string.ascii_uppercase + string.digits)
            for _ in range(file_size)).encode('latin-1')
        self.imagefile = tempfile.NamedTemporaryFile(delete=False)
        self.imagefile.write(content)
        self.imagefile.close()

        segments = self.cloud._get_file_segments(
            endpoint='test_container/test_image',
            filename=self.imagefile.name,
            file_size=file_size,
            segment_size=1000)
        self.assertEqual(len(segments), 5)
        segment_content = b''
        for (index, (name, segment)) in enumerate(segments.items()):
            self.assertEqual(
                'test_container/test_image/{index:0>6}'.format(index=index),
                name)
            segment_content += segment.read()
        self.assertEqual(content, segment_content)

    def test_get_entity_pass_object(self):
        obj = mock.Mock(id=uuid4().hex)
        self.cloud.use_direct_get = True
        self.assertEqual(obj, _utils._get_entity(self.cloud, '', obj, {}))

    def test_get_entity_pass_dict(self):
        d = dict(id=uuid4().hex)
        self.cloud.use_direct_get = True
        self.assertEqual(d, _utils._get_entity(self.cloud, '', d, {}))

    def test_get_entity_no_use_direct_get(self):
        # test we are defaulting to the search_<resource> methods
        # if the use_direct_get flag is set to False(default).
        uuid = uuid4().hex
        resource = 'network'
        func = 'search_%ss' % resource
        filters = {}
        with mock.patch.object(self.cloud, func) as search:
            _utils._get_entity(self.cloud, resource, uuid, filters)
            search.assert_called_once_with(uuid, filters)

    def test_get_entity_no_uuid_like(self):
        # test we are defaulting to the search_<resource> methods
        # if the name_or_id param is a name(string) but not a uuid.
        self.cloud.use_direct_get = True
        name = 'name_no_uuid'
        resource = 'network'
        func = 'search_%ss' % resource
        filters = {}
        with mock.patch.object(self.cloud, func) as search:
            _utils._get_entity(self.cloud, resource, name, filters)
            search.assert_called_once_with(name, filters)

    def test_get_entity_pass_uuid(self):
        uuid = uuid4().hex
        self.cloud.use_direct_get = True
        resources = ['flavor', 'image', 'volume', 'network',
                     'subnet', 'port', 'floating_ip', 'security_group']
        for r in resources:
            f = 'get_%s_by_id' % r
            with mock.patch.object(self.cloud, f) as get:
                _utils._get_entity(self.cloud, r, uuid, {})
                get.assert_called_once_with(uuid)

    def test_get_entity_pass_search_methods(self):
        self.cloud.use_direct_get = True
        resources = ['flavor', 'image', 'volume', 'network',
                     'subnet', 'port', 'floating_ip', 'security_group']
        filters = {}
        name = 'name_no_uuid'
        for r in resources:
            f = 'search_%ss' % r
            with mock.patch.object(self.cloud, f) as search:
                _utils._get_entity(self.cloud, r, name, {})
                search.assert_called_once_with(name, filters)

    def test_get_entity_get_and_search(self):
        resources = ['flavor', 'image', 'volume', 'network',
                     'subnet', 'port', 'floating_ip', 'security_group']
        for r in resources:
            self.assertTrue(hasattr(self.cloud, 'get_%s_by_id' % r))
            self.assertTrue(hasattr(self.cloud, 'search_%ss' % r))
@ -871,7 +871,7 @@ class TestBaremetalNode(base.IronicTestCase):
            ])
        self.assertRaisesRegex(
            exc.OpenStackCloudException,
            '^Baremetal .* to dummy.*/states/provision.*invalid state',
            '^Baremetal .* to dummy.*/states/provision invalid state$',
            self.op_cloud.node_set_provision_state,
            self.fake_baremetal_node['uuid'],
            'dummy')

@ -891,7 +891,7 @@ class TestBaremetalNode(base.IronicTestCase):
            ])
        self.assertRaisesRegex(
            exc.OpenStackCloudException,
            '^Baremetal .* to dummy.*/states/provision',
            '^Baremetal .* to dummy.*/states/provision$',
            self.op_cloud.node_set_provision_state,
            self.fake_baremetal_node['uuid'],
            'dummy')
@ -15,7 +15,6 @@

import uuid

import openstack.exceptions
import testtools
from testtools import matchers

@ -204,7 +203,7 @@ class TestDomains(base.RequestsMockTestCase):
                 json=domain_data.json_response,
                 validate=dict(json={'domain': {'enabled': False}}))])
        with testtools.ExpectedException(
            openstack.exceptions.ConflictException,
            shade.OpenStackCloudHTTPError,
            "Error in updating domain %s" % domain_data.domain_id
        ):
            self.op_cloud.delete_domain(domain_data.domain_id)
@ -12,9 +12,9 @@


import mock
from openstack.config import loader
import os_client_config

from openstack import exceptions as os_exc
from os_client_config import exceptions as occ_exc

from shade import exc
from shade import inventory

@ -27,7 +27,7 @@ class TestInventory(base.RequestsMockTestCase):
    def setUp(self):
        super(TestInventory, self).setUp()

    @mock.patch("openstack.config.loader.OpenStackConfig")
    @mock.patch("os_client_config.config.OpenStackConfig")
    @mock.patch("shade.OpenStackCloud")
    def test__init(self, mock_cloud, mock_config):
        mock_config.return_value.get_all_clouds.return_value = [{}]

@ -35,13 +35,13 @@ class TestInventory(base.RequestsMockTestCase):
        inv = inventory.OpenStackInventory()

        mock_config.assert_called_once_with(
            config_files=loader.CONFIG_FILES
            config_files=os_client_config.config.CONFIG_FILES
        )
        self.assertIsInstance(inv.clouds, list)
        self.assertEqual(1, len(inv.clouds))
        self.assertTrue(mock_config.return_value.get_all_clouds.called)

    @mock.patch("openstack.config.loader.OpenStackConfig")
    @mock.patch("os_client_config.config.OpenStackConfig")
    @mock.patch("shade.OpenStackCloud")
    def test__init_one_cloud(self, mock_cloud, mock_config):
        mock_config.return_value.get_one_cloud.return_value = [{}]

@ -49,7 +49,7 @@ class TestInventory(base.RequestsMockTestCase):
        inv = inventory.OpenStackInventory(cloud='supercloud')

        mock_config.assert_called_once_with(
            config_files=loader.CONFIG_FILES
            config_files=os_client_config.config.CONFIG_FILES
        )
        self.assertIsInstance(inv.clouds, list)
        self.assertEqual(1, len(inv.clouds))

@ -57,7 +57,7 @@ class TestInventory(base.RequestsMockTestCase):
        mock_config.return_value.get_one_cloud.assert_called_once_with(
            'supercloud')

    @mock.patch("openstack.config.loader.OpenStackConfig")
    @mock.patch("os_client_config.config.OpenStackConfig")
    @mock.patch("shade.OpenStackCloud")
    def test__raise_exception_on_no_cloud(self, mock_cloud, mock_config):
        """

@ -65,7 +65,7 @@ class TestInventory(base.RequestsMockTestCase):
        shade exception is emitted.
        """
        mock_config.return_value.get_one_cloud.side_effect = (
            os_exc.ConfigException()
            occ_exc.OpenStackConfigException()
        )
        self.assertRaises(exc.OpenStackCloudException,
                          inventory.OpenStackInventory,

@ -73,7 +73,7 @@ class TestInventory(base.RequestsMockTestCase):
        mock_config.return_value.get_one_cloud.assert_called_once_with(
            'supercloud')

    @mock.patch("openstack.config.loader.OpenStackConfig")
    @mock.patch("os_client_config.config.OpenStackConfig")
    @mock.patch("shade.OpenStackCloud")
    def test_list_hosts(self, mock_cloud, mock_config):
        mock_config.return_value.get_all_clouds.return_value = [{}]

@ -92,7 +92,7 @@ class TestInventory(base.RequestsMockTestCase):
        self.assertFalse(inv.clouds[0].get_openstack_vars.called)
        self.assertEqual([server], ret)

    @mock.patch("openstack.config.loader.OpenStackConfig")
    @mock.patch("os_client_config.config.OpenStackConfig")
    @mock.patch("shade.OpenStackCloud")
    def test_list_hosts_no_detail(self, mock_cloud, mock_config):
        mock_config.return_value.get_all_clouds.return_value = [{}]

@ -111,7 +111,7 @@ class TestInventory(base.RequestsMockTestCase):
        inv.clouds[0].list_servers.assert_called_once_with(detailed=False)
        self.assertFalse(inv.clouds[0].get_openstack_vars.called)

    @mock.patch("openstack.config.loader.OpenStackConfig")
    @mock.patch("os_client_config.config.OpenStackConfig")
    @mock.patch("shade.OpenStackCloud")
    def test_search_hosts(self, mock_cloud, mock_config):
        mock_config.return_value.get_all_clouds.return_value = [{}]

@ -127,7 +127,7 @@ class TestInventory(base.RequestsMockTestCase):
        ret = inv.search_hosts('server_id')
        self.assertEqual([server], ret)

    @mock.patch("openstack.config.loader.OpenStackConfig")
    @mock.patch("os_client_config.config.OpenStackConfig")
    @mock.patch("shade.OpenStackCloud")
    def test_get_host(self, mock_cloud, mock_config):
        mock_config.return_value.get_all_clouds.return_value = [{}]

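Taken together, these tests pin down the constructor contract this revert restores: the inventory reads cloud configs via os_client_config (not openstack.config), builds one OpenStackCloud per config, and translates config failures into shade's own exception. A minimal sketch consistent with that — the constructor body and the OpenStackCloud(cloud_config=...) call are assumptions, not shade's exact source:

import os_client_config.config
from os_client_config import exceptions as occ_exc

import shade
from shade import exc


class OpenStackInventory(object):
    # Illustrative constructor only; the real class also carries the
    # host-listing and variable-expansion logic the later tests exercise.
    def __init__(self, config_files=None, cloud=None):
        config_files = config_files or os_client_config.config.CONFIG_FILES
        config = os_client_config.config.OpenStackConfig(
            config_files=config_files)
        try:
            if cloud is None:
                cloud_configs = config.get_all_clouds()
            else:
                cloud_configs = [config.get_one_cloud(cloud)]
        except occ_exc.OpenStackConfigException as e:
            # Translate config failures into shade's exception type, as
            # test__raise_exception_on_no_cloud expects.
            raise exc.OpenStackCloudException(str(e))
        self.clouds = [shade.OpenStackCloud(cloud_config=c)
                       for c in cloud_configs]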
@ -14,6 +14,7 @@ import testtools
from testtools import matchers

import shade
import shade._utils
from shade.tests.unit import base


@ -16,6 +16,7 @@ import uuid
import testtools

import shade
from shade import _utils
from shade import exc
from shade.tests import fakes
from shade.tests.unit import base

@ -378,6 +379,40 @@ class TestShade(base.RequestsMockTestCase):

        self.assert_calls()

    def test_iterate_timeout_bad_wait(self):
        with testtools.ExpectedException(
                exc.OpenStackCloudException,
                "Wait value must be an int or float value."):
            for count in _utils._iterate_timeout(
                    1, "test_iterate_timeout_bad_wait", wait="timeishard"):
                pass

    @mock.patch('time.sleep')
    def test_iterate_timeout_str_wait(self, mock_sleep):
        iter = _utils._iterate_timeout(
            10, "test_iterate_timeout_str_wait", wait="1.6")
        next(iter)
        next(iter)
        mock_sleep.assert_called_with(1.6)

    @mock.patch('time.sleep')
    def test_iterate_timeout_int_wait(self, mock_sleep):
        iter = _utils._iterate_timeout(
            10, "test_iterate_timeout_int_wait", wait=1)
        next(iter)
        next(iter)
        mock_sleep.assert_called_with(1.0)

    @mock.patch('time.sleep')
    def test_iterate_timeout_timeout(self, mock_sleep):
        message = "timeout test"
        with testtools.ExpectedException(
                exc.OpenStackCloudTimeout,
                message):
            for count in _utils._iterate_timeout(0.1, message, wait=1):
                pass
        mock_sleep.assert_called_with(1.0)

    def test__nova_extensions(self):
        body = [
            {

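The four new tests spell out the `_iterate_timeout` contract: `wait` must coerce to a float (so `1` sleeps 1.0 and `"1.6"` sleeps 1.6, while `"timeishard"` raises), the generator yields attempt counts, and the deadline ends in `OpenStackCloudTimeout`. A minimal sketch consistent with those tests (illustrative; the real shade helper differs in detail):

import time

from shade import exc


def _iterate_timeout(timeout, message, wait=2):
    # Illustrative generator: yield attempt counts until the deadline
    # passes, sleeping `wait` seconds between attempts.
    try:
        wait = float(wait)  # accepts 1, 1.6 or "1.6"; rejects "timeishard"
    except ValueError:
        raise exc.OpenStackCloudException(
            "Wait value must be an int or float value.")
    start = time.time()
    count = 0
    while (time.time() - start) < timeout:
        count += 1
        yield count
        time.sleep(wait)
    raise exc.OpenStackCloudTimeout(message)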
@ -10,10 +10,14 @@
# License for the specific language governing permissions and limitations
# under the License.

from keystoneauth1 import plugin as ksa_plugin

from distutils import version as du_version
import mock
import testtools

from openstack.config import cloud_region
import os_client_config as occ
from os_client_config import cloud_config
import shade
from shade import exc
from shade.tests import fakes

@ -72,7 +76,7 @@ class TestShadeOperator(base.RequestsMockTestCase):

        self.assert_calls()

    @mock.patch.object(cloud_region.CloudRegion, 'get_session')
    @mock.patch.object(cloud_config.CloudConfig, 'get_session')
    def test_get_session_endpoint_exception(self, get_session_mock):
        class FakeException(Exception):
            pass

@ -83,14 +87,14 @@ class TestShadeOperator(base.RequestsMockTestCase):
        session_mock.get_endpoint.side_effect = side_effect
        get_session_mock.return_value = session_mock
        self.op_cloud.name = 'testcloud'
        self.op_cloud.config.region_name = 'testregion'
        self.op_cloud.region_name = 'testregion'
        with testtools.ExpectedException(
                exc.OpenStackCloudException,
                "Error getting image endpoint on testcloud:testregion:"
                " No service"):
            self.op_cloud.get_session_endpoint("image")

    @mock.patch.object(cloud_region.CloudRegion, 'get_session')
    @mock.patch.object(cloud_config.CloudConfig, 'get_session')
    def test_get_session_endpoint_unavailable(self, get_session_mock):
        session_mock = mock.Mock()
        session_mock.get_endpoint.return_value = None

@ -98,25 +102,32 @@ class TestShadeOperator(base.RequestsMockTestCase):
        image_endpoint = self.op_cloud.get_session_endpoint("image")
        self.assertIsNone(image_endpoint)

    @mock.patch.object(cloud_region.CloudRegion, 'get_session')
    @mock.patch.object(cloud_config.CloudConfig, 'get_session')
    def test_get_session_endpoint_identity(self, get_session_mock):
        session_mock = mock.Mock()
        get_session_mock.return_value = session_mock
        self.op_cloud.get_session_endpoint('identity')
        kwargs = dict(
            interface='public', region_name='RegionOne',
            service_name=None, service_type='identity')
        # occ > 1.26.0 fixes keystoneclient construction. Unfortunately, it
        # breaks our mocking of what keystoneclient does here. Since we're
        # close to just getting rid of ksc anyway, just put in a version match
        occ_version = du_version.StrictVersion(occ.__version__)
        if occ_version > du_version.StrictVersion('1.26.0'):
            kwargs = dict(
                interface='public', region_name='RegionOne',
                service_name=None, service_type='identity')
        else:
            kwargs = dict(interface=ksa_plugin.AUTH_INTERFACE)

        session_mock.get_endpoint.assert_called_with(**kwargs)

    @mock.patch.object(cloud_region.CloudRegion, 'get_session')
    @mock.patch.object(cloud_config.CloudConfig, 'get_session')
    def test_has_service_no(self, get_session_mock):
        session_mock = mock.Mock()
        session_mock.get_endpoint.return_value = None
        get_session_mock.return_value = session_mock
        self.assertFalse(self.op_cloud.has_service("image"))

    @mock.patch.object(cloud_region.CloudRegion, 'get_session')
    @mock.patch.object(cloud_config.CloudConfig, 'get_session')
    def test_has_service_yes(self, get_session_mock):
        session_mock = mock.Mock()
        session_mock.get_endpoint.return_value = 'http://fake.url'

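These endpoint tests sketch out a small contract: `get_session_endpoint` wraps session failures in `OpenStackCloudException` with a `cloud:region` prefix, returns None for an unconfigured service rather than raising, and `has_service` is driven off the same lookup. A rough sketch consistent with that behaviour — the mixin name, attribute names such as `cloud_config`, and the exact `get_endpoint` kwargs are assumptions, not shade's exact code:

from shade import exc


class _SessionEndpointMixin(object):
    # Hypothetical mixin; in shade these methods live on the cloud classes.
    def get_session_endpoint(self, service_key):
        session = self.cloud_config.get_session()
        try:
            endpoint = session.get_endpoint(
                service_type=service_key, service_name=None,
                interface='public', region_name=self.region_name)
        except Exception as e:
            # Wrap any keystoneauth failure, as the first test expects.
            raise exc.OpenStackCloudException(
                "Error getting %s endpoint on %s:%s: %s" % (
                    service_key, self.name, self.region_name, str(e)))
        # No endpoint configured is not an error; callers get None back.
        return endpoint

    def has_service(self, service_key):
        # has_service reduces to "did the catalog give us an endpoint?"
        try:
            return bool(self.get_session_endpoint(service_key))
        except exc.OpenStackCloudException:
            return False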
109
shade/tests/unit/test_task_manager.py
Normal file
@ -0,0 +1,109 @@
# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


import concurrent.futures
import mock

from shade import task_manager
from shade.tests.unit import base


class TestException(Exception):
    pass


class TaskTest(task_manager.Task):
    def main(self, client):
        raise TestException("This is a test exception")


class TaskTestGenerator(task_manager.Task):
    def main(self, client):
        yield 1


class TaskTestInt(task_manager.Task):
    def main(self, client):
        return int(1)


class TaskTestFloat(task_manager.Task):
    def main(self, client):
        return float(2.0)


class TaskTestStr(task_manager.Task):
    def main(self, client):
        return "test"


class TaskTestBool(task_manager.Task):
    def main(self, client):
        return True


class TaskTestSet(task_manager.Task):
    def main(self, client):
        return set([1, 2])


class TaskTestAsync(task_manager.Task):
    def __init__(self):
        super(TaskTestAsync, self).__init__()
        self.run_async = True

    def main(self, client):
        pass


class TestTaskManager(base.RequestsMockTestCase):

    def setUp(self):
        super(TestTaskManager, self).setUp()
        self.manager = task_manager.TaskManager(name='test', client=self)

    def test_wait_re_raise(self):
        """Test that exceptions thrown in a Task are reraised correctly.

        This test is aimed at six.reraise(), called in Task::wait().
        Specifically, we test that we get the same behaviour with all the
        supported interpreters (e.g. py27, py34, pypy, ...).
        """
        self.assertRaises(TestException, self.manager.submit_task, TaskTest())

    def test_dont_munchify_int(self):
        ret = self.manager.submit_task(TaskTestInt())
        self.assertIsInstance(ret, int)

    def test_dont_munchify_float(self):
        ret = self.manager.submit_task(TaskTestFloat())
        self.assertIsInstance(ret, float)

    def test_dont_munchify_str(self):
        ret = self.manager.submit_task(TaskTestStr())
        self.assertIsInstance(ret, str)

    def test_dont_munchify_bool(self):
        ret = self.manager.submit_task(TaskTestBool())
        self.assertIsInstance(ret, bool)

    def test_dont_munchify_set(self):
        ret = self.manager.submit_task(TaskTestSet())
        self.assertIsInstance(ret, set)

    @mock.patch.object(concurrent.futures.ThreadPoolExecutor, 'submit')
    def test_async(self, mock_submit):
        self.manager.submit_task(TaskTestAsync())
        self.assertTrue(mock_submit.called)
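For readers meeting the restored Task Manager for the first time, these tests imply its core interface: submit_task runs Task.main synchronously and re-raises any exception in the caller's context, munchifies only structured results (the primitive types pass through), and hands run_async tasks to a thread pool. A minimal sketch under those assumptions — illustrative only, the real shade.task_manager carries logging and rate-limiting machinery on top:

import concurrent.futures
import sys

import munch
import six


class Task(object):
    # Illustrative: wraps a single remote call; main() does the work.
    def __init__(self):
        self.run_async = False
        self._exc_info = None
        self._result = None

    def main(self, client):
        raise NotImplementedError()

    def run(self, client):
        try:
            self._result = self.main(client)
        except Exception:
            self._exc_info = sys.exc_info()

    def wait(self):
        if self._exc_info:
            # Re-raise with the original traceback; this is the behaviour
            # test_wait_re_raise pins across interpreters.
            six.reraise(*self._exc_info)
        if isinstance(self._result, (dict, list)):
            # Only structured results are munchified; int, float, str, bool
            # and set pass through untouched (the test_dont_munchify_* cases).
            return munch.munchify(self._result)
        return self._result


class TaskManager(object):
    def __init__(self, name, client):
        self.name = name
        self._client = client
        self._executor = concurrent.futures.ThreadPoolExecutor(max_workers=5)

    def submit_task(self, task):
        if task.run_async:
            # Fire-and-forget via the pool; test_async mocks executor.submit.
            return self._executor.submit(task.run, self._client)
        task.run(self._client)
        return task.wait()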