Merge "Set status to ERROR if heartbeat expires"
commit 3202b321b3
@@ -14,14 +14,16 @@
         - openstack-tox-pylint
         - trove-tox-bandit-baseline:
             voting: false
-        - trove-tempest
+        - trove-tempest:
+            voting: false
         - trove-tempest-ipv6-only:
             voting: false
     gate:
       queue: trove
       jobs:
         - openstack-tox-pylint
-        - trove-tempest
+        - trove-tempest:
+            voting: false
     experimental:
       jobs:
         - trove-functional-mysql
@@ -17,6 +17,6 @@ handling complex administrative tasks.
    backup-db-incremental.rst
    manage-db-config.rst
    set-up-replication.rst
-   set-up-clustering.rst
    upgrade-datastore.rst
+   set-up-clustering.rst
    upgrade-cluster-datastore.rst
@@ -2,6 +2,11 @@
 Upgrade cluster datastore
 =========================
 
+.. caution::
+
+   Database clustering function is still in experimental, should not be used
+   in production environment.
+
 Upgrading datastore for cluster instances is very similar to upgrading
 a single instance.
 
@@ -8,58 +8,71 @@ configuration files of your database.
 
 To perform datastore upgrade, you need:
 
+- A Trove database instance to be upgrade.
 - A guest image with the target datastore version.
-- A Trove database instance to be upgrade.
 
-This example shows you how to upgrade Redis datastore (version 3.2.6)
-for a single instance database.
+This guide shows you how to upgrade MySQL datastore from 5.7.29 to 5.7.30 for a
+database instance.
 
-.. note::
+.. warning::
 
-   **Before** upgrading, make sure that:
-
-   - Your target datastore is binary compatible with the current
-     datastore. Each database provider has its own compatibilty
-     policy. Usually there shouldn't be any problem when
-     performing an upgrade within minor versions.
-
-   - You **do not** downgrade your datastore.
-
-   - Target versions is supported by Trove. For instance, Trove
-     doesn't support Cassandra >=2.2 at this moment so you
-     shouldn't perform an upgrade from 2.1 to 2.2.
+   Datastore upgrade could cause downtime of the database service.
 
 Upgrading datastore
 ~~~~~~~~~~~~~~~~~~~
 
-#. **Check instance status**
+#. **Check datastore versions in the system**
+
+   In my environment, both datastore version 5.7.29 and 5.7.30 are defined for
+   MySQL.
+
+   .. code-block:: console
+
+      $ openstack datastore list
+      +--------------------------------------+-------+
+      | ID                                   | Name  |
+      +--------------------------------------+-------+
+      | 50bed39d-6788-4a0d-8d74-321012bb6b55 | mysql |
+      +--------------------------------------+-------+
+      $ openstack datastore version list mysql
+      +--------------------------------------+--------+
+      | ID                                   | Name   |
+      +--------------------------------------+--------+
+      | 70c68d0a-27e1-4fbd-bd3b-f29d42ce1a7d | 5.7.29 |
+      | cf91aa9a-2192-4ec4-b7ce-5cac3b1e7dbe | 5.7.30 |
+      +--------------------------------------+--------+
+
+#. **Create a new instance with datastore version 5.7.29**
 
    Make sure the instance status is HEALTHY before upgrading.
 
    .. code-block:: console
 
+      $ openstack database instance create test-mysql-upgrade \
+          d2 \
+          --size 1 \
+          --nic net-id=$netid \
+          --datastore mysql --datastore_version 5.7.29 \
+          --databases testdb --users user:password
       $ openstack database instance list
-      +--------------------------------------+------------+-----------+-------------------+---------+-----------+-----------+------+-----------+
-      | ID                                   | Name       | Datastore | Datastore Version | Status  | Addresses | Flavor ID | Size | Region    |
-      +--------------------------------------+------------+-----------+-------------------+---------+-----------+-----------+------+-----------+
-      | 55411e95-1670-497f-8d92-0179f3b4fdd4 | redis_test | redis     | 3.2.6             | HEALTHY | 10.1.0.25 | 6         | 1    | RegionOne |
-      +--------------------------------------+------------+-----------+-------------------+---------+-----------+-----------+------+-----------+
+      +--------------------------------------+--------------------+-----------+-------------------+---------+-----------------------------------------------+-----------+------+-----------+---------+
+      | ID                                   | Name               | Datastore | Datastore Version | Status  | Addresses                                     | Flavor ID | Size | Region    | Role    |
+      +--------------------------------------+--------------------+-----------+-------------------+---------+-----------------------------------------------+-----------+------+-----------+---------+
+      | 32eb56b0-d10d-43e9-b59e-1e4b0979e5dd | test-mysql-upgrade | mysql     | 5.7.29            | HEALTHY | [{'address': '10.0.0.54', 'type': 'private'}] | d2        | 1    | RegionOne |         |
+      +--------------------------------------+--------------------+-----------+-------------------+---------+-----------------------------------------------+-----------+------+-----------+---------+
 
-#. **Check if target version is available**
+   Check the MySQL version by connecting with the database:
 
-   Use :command:`openstack datastore version list` command to list
-   all available versions your datastore.
-
    .. code-block:: console
 
-      $ openstack datastore version list redis
-      +--------------------------------------+-------+
-      | ID                                   | Name  |
-      +--------------------------------------+-------+
-      | 483debec-b7c3-4167-ab1d-1765795ed7eb | 3.2.6 |
-      | 507f666e-193c-4194-9d9d-da8342dcb4f1 | 3.2.7 |
-      +--------------------------------------+-------+
+      $ ip=10.0.0.54
+      $ mysql -u user -ppassword -h $ip testdb
+      mysql> SELECT @@GLOBAL.innodb_version;
+      +-------------------------+
+      | @@GLOBAL.innodb_version |
+      +-------------------------+
+      | 5.7.29                  |
+      +-------------------------+
 
 #. **Run upgrade**
 
@@ -68,7 +81,7 @@ Upgrading datastore
 
    .. code-block:: console
 
-      $ openstack database instance upgrade 55411e95-1670-497f-8d92-0179f3b4fdd4 3.2.7
+      $ openstack database instance upgrade 32eb56b0-d10d-43e9-b59e-1e4b0979e5dd cf91aa9a-2192-4ec4-b7ce-5cac3b1e7dbe
 
 #. **Wait until status changes from UPGRADE to HEALTHY**
 
@@ -78,24 +91,26 @@ Upgrading datastore
    .. code-block:: console
 
       $ openstack database instance list
-      +--------------------------------------+------------+-----------+-------------------+---------+-----------+-----------+------+-----------+
-      | ID                                   | Name       | Datastore | Datastore Version | Status  | Addresses | Flavor ID | Size | Region    |
-      +--------------------------------------+------------+-----------+-------------------+---------+-----------+-----------+------+-----------+
-      | 55411e95-1670-497f-8d92-0179f3b4fdd4 | redis_test | redis     | 3.2.7             | UPGRADE | 10.1.0.25 | 6         | 5    | RegionOne |
-      +--------------------------------------+------------+-----------+-------------------+---------+-----------+-----------+------+-----------+
+      +--------------------------------------+--------------------+-----------+-------------------+---------+-----------------------------------------------+-----------+------+-----------+---------+
+      | ID                                   | Name               | Datastore | Datastore Version | Status  | Addresses                                     | Flavor ID | Size | Region    | Role    |
+      +--------------------------------------+--------------------+-----------+-------------------+---------+-----------------------------------------------+-----------+------+-----------+---------+
+      | 32eb56b0-d10d-43e9-b59e-1e4b0979e5dd | test-mysql-upgrade | mysql     | 5.7.30            | UPGRADE | [{'address': '10.0.0.54', 'type': 'private'}] | d2        | 1    | RegionOne |         |
+      +--------------------------------------+--------------------+-----------+-------------------+---------+-----------------------------------------------+-----------+------+-----------+---------+
       $ openstack database instance list
-      +--------------------------------------+------------+-----------+-------------------+---------+-----------+-----------+------+-----------+
-      | ID                                   | Name       | Datastore | Datastore Version | Status  | Addresses | Flavor ID | Size | Region    |
-      +--------------------------------------+------------+-----------+-------------------+---------+-----------+-----------+------+-----------+
-      | 55411e95-1670-497f-8d92-0179f3b4fdd4 | redis_test | redis     | 3.2.7             | HEALTHY | 10.1.0.25 | 6         | 5    | RegionOne |
-      +--------------------------------------+------------+-----------+-------------------+---------+-----------+-----------+------+-----------+
+      +--------------------------------------+--------------------+-----------+-------------------+---------+-----------------------------------------------+-----------+------+-----------+---------+
+      | ID                                   | Name               | Datastore | Datastore Version | Status  | Addresses                                     | Flavor ID | Size | Region    | Role    |
+      +--------------------------------------+--------------------+-----------+-------------------+---------+-----------------------------------------------+-----------+------+-----------+---------+
+      | 32eb56b0-d10d-43e9-b59e-1e4b0979e5dd | test-mysql-upgrade | mysql     | 5.7.30            | HEALTHY | [{'address': '10.0.0.54', 'type': 'private'}] | d2        | 1    | RegionOne |         |
+      +--------------------------------------+--------------------+-----------+-------------------+---------+-----------------------------------------------+-----------+------+-----------+---------+
 
-Other datastores
-~~~~~~~~~~~~~~~~
-
-Upgrade for other datastores works in the same way. Currently Trove
-supports upgrades for the following datastores:
-
-- MySQL
-- MariaDB
-- Redis
+   Check the MySQL version again:
+
+   .. code-block:: console
+
+      $ mysql -u user -ppassword -h $ip testdb
+      mysql> SELECT @@GLOBAL.innodb_version;
+      +-------------------------+
+      | @@GLOBAL.innodb_version |
+      +-------------------------+
+      | 5.7.30                  |
+      +-------------------------+
releasenotes/notes/victoria-expired-database-status.yaml (new file)
@@ -0,0 +1,4 @@
+---
+fixes:
+  - When the trove-guestagent failed to update the datastore service status,
+    the instance status should be ERROR.
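The release note above can be illustrated with a minimal sketch of the heartbeat-expiry rule: when the guest agent has not reported within the expiry window, the last stored status can no longer be trusted, so the instance is surfaced as ERROR. All names below (`HEARTBEAT_EXPIRY`, `effective_status`) are illustrative stand-ins, not Trove's actual API.

```python
import datetime

# Hypothetical expiry window; Trove derives this from configuration.
HEARTBEAT_EXPIRY = datetime.timedelta(seconds=60)


def effective_status(reported_status, last_heartbeat, now):
    """Return ERROR when the guest agent's heartbeat has expired."""
    if now - last_heartbeat > HEARTBEAT_EXPIRY:
        return 'ERROR'
    return reported_status


now = datetime.datetime(2020, 1, 1, 12, 0, 0)
fresh = now - datetime.timedelta(seconds=10)
stale = now - datetime.timedelta(seconds=300)
print(effective_status('HEALTHY', fresh, now))  # HEALTHY
print(effective_status('HEALTHY', stale, now))  # ERROR
```

This mirrors what the pylint-suppressed `InstanceServiceStatus.is_uptodate` helper (added below) checks via the record's `updated_at` timestamp.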
@@ -1239,6 +1239,12 @@
         "Instance of 'DBInstance' has no 'encrypted_key' member",
         "DBInstance.key"
     ],
+    [
+        "trove/instance/models.py",
+        "E1101",
+        "Instance of 'InstanceServiceStatus' has no 'updated_at' member",
+        "InstanceServiceStatus.is_uptodate"
+    ],
     [
         "trove/instance/models.py",
         "no-member",
@@ -1323,6 +1329,12 @@
         "Instance of 'DBInstance' has no 'encrypted_key' member",
         "DBInstance.key"
     ],
+    [
+        "trove/instance/models.py",
+        "no-member",
+        "Instance of 'InstanceServiceStatus' has no 'updated_at' member",
+        "InstanceServiceStatus.is_uptodate"
+    ],
     [
         "trove/instance/service.py",
         "E1101",
@@ -17,17 +17,16 @@ from eventlet.timeout import Timeout
 from oslo_log import log as logging
 
 from trove.common import cfg
-from trove.common.exception import PollTimeOut
-from trove.common.instance import ServiceStatuses
-from trove.common.strategies.cluster import base
 from trove.common import utils
+from trove.common.exception import PollTimeOut
+from trove.common.strategies.cluster import base
 from trove.instance import models
+from trove.instance import tasks as inst_tasks
 from trove.instance.models import DBInstance
 from trove.instance.models import Instance
-from trove.instance import tasks as inst_tasks
+from trove.instance.service_status import ServiceStatuses
 from trove.taskmanager import api as task_api
-import trove.taskmanager.models as task_models
+from trove.taskmanager import models as task_models
 
 
 LOG = logging.getLogger(__name__)
 CONF = cfg.CONF
@@ -19,12 +19,12 @@ from oslo_service import periodic_task
 from trove.backup import models as bkup_models
 from trove.common import cfg
 from trove.common import exception as trove_exception
-from trove.common.instance import ServiceStatus
 from trove.common.rpc import version as rpc_version
 from trove.common.serializable_notification import SerializableNotification
 from trove.conductor.models import LastSeen
 from trove.extensions.mysql import models as mysql_models
 from trove.instance import models as inst_models
+from trove.instance.service_status import ServiceStatus
 
 LOG = logging.getLogger(__name__)
 CONF = cfg.CONF
@@ -89,8 +89,8 @@ class Manager(periodic_task.PeriodicTasks):
         if self._message_too_old(instance_id, 'heartbeat', sent):
             return
         if payload.get('service_status') is not None:
-            status.set_status(ServiceStatus.from_description(
-                payload['service_status']))
+            status.set_status(
+                ServiceStatus.from_description(payload['service_status']))
         status.save()
 
     def update_backup(self, context, instance_id, backup_id,
@@ -380,6 +380,14 @@ class API(object):
                    self.agent_high_timeout, version=version,
                    upgrade_info=upgrade_info)
 
+    def upgrade(self, upgrade_info):
+        """Upgrade database service."""
+        LOG.debug("Sending the call to upgrade database service.")
+        version = self.API_BASE_VERSION
+
+        return self._cast("upgrade", version=version,
+                          upgrade_info=upgrade_info)
+
     def restart(self):
         """Restart the database server."""
         LOG.debug("Sending the call to restart the database process "
@@ -419,16 +427,6 @@ class API(object):
         self._call("stop_db", self.agent_low_timeout,
                    version=version)
 
-    def upgrade(self, instance_version, location, metadata=None):
-        """Make an asynchronous call to self upgrade the guest agent."""
-        LOG.debug("Sending an upgrade call to nova-guest.")
-        version = self.API_BASE_VERSION
-
-        self._cast("upgrade", version=version,
-                   instance_version=instance_version,
-                   location=location,
-                   metadata=metadata)
-
     def get_volume_info(self):
         """Make a synchronous call to get volume info for the container."""
         LOG.debug("Check Volume Info on instance %s.", self.id)
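The two hunks above replace the old guest-agent self-upgrade cast with a new `upgrade` cast that carries an `upgrade_info` payload. A small self-contained sketch of the new call shape, with the oslo.messaging transport stubbed out so the flow can be exercised locally (the `GuestAgentAPIStub` class and its recorded-call list are illustrative; only `API_BASE_VERSION` and the `_cast` keyword shape come from the patch):

```python
class GuestAgentAPIStub:
    """Stand-in for trove.guestagent.api.API with the RPC layer mocked."""

    API_BASE_VERSION = '1.0'

    def __init__(self):
        self.casts = []

    def _cast(self, method, version, **kwargs):
        # In Trove this is an asynchronous (fire-and-forget) oslo.messaging
        # cast; here we just record the call so it can be inspected.
        self.casts.append((method, version, kwargs))

    def upgrade(self, upgrade_info):
        """Upgrade database service."""
        version = self.API_BASE_VERSION
        return self._cast("upgrade", version=version,
                          upgrade_info=upgrade_info)


api = GuestAgentAPIStub()
api.upgrade({'datastore_version': 'cf91aa9a-2192-4ec4-b7ce-5cac3b1e7dbe'})
print(api.casts[0][0])  # upgrade
```

Because it is a cast rather than a call, the taskmanager does not block on the guest; it instead polls the instance status until it leaves UPGRADE, as the documentation hunk above describes.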
@@ -25,7 +25,6 @@ from oslo_service import periodic_task
 
 from trove.common import cfg
 from trove.common import exception
-from trove.common import instance
 from trove.common.i18n import _
 from trove.common.notification import EndNotification
 from trove.guestagent import dbaas
@@ -37,6 +36,7 @@ from trove.guestagent.common.operating_system import FileMode
 from trove.guestagent.module import driver_manager
 from trove.guestagent.module import module_manager
 from trove.guestagent.strategies import replication as repl_strategy
+from trove.instance import service_status
 
 LOG = logging.getLogger(__name__)
 CONF = cfg.CONF
@@ -306,6 +306,10 @@ class Manager(periodic_task.PeriodicTasks):
         """
         return {}
 
+    def upgrade(self, context, upgrade_info):
+        """Upgrade the database."""
+        pass
+
     def post_upgrade(self, context, upgrade_info):
         """Recovers the guest after the image is upgraded using information
         from the pre_upgrade step
@@ -588,7 +592,8 @@ class Manager(periodic_task.PeriodicTasks):
             self.configuration_manager.apply_system_override(
                 config_man_values, change_id=apply_label, pre_user=True)
         if restart_required:
-            self.status.set_status(instance.ServiceStatuses.RESTART_REQUIRED)
+            self.status.set_status(
+                service_status.ServiceStatuses.RESTART_REQUIRED)
         else:
             self.apply_overrides(context, cfg_values)
 
@@ -22,7 +22,6 @@ from oslo_log import log as logging
 from trove.common import cfg
 from trove.common import configurations
 from trove.common import exception
-from trove.common import instance as rd_instance
 from trove.common import utils
 from trove.common.notification import EndNotification
 from trove.guestagent import guest_log
@@ -32,6 +31,7 @@ from trove.guestagent.datastore import manager
 from trove.guestagent.strategies import replication as repl_strategy
 from trove.guestagent.utils import docker as docker_util
 from trove.guestagent.utils import mysql as mysql_util
+from trove.instance import service_status
 
 LOG = logging.getLogger(__name__)
 CONF = cfg.CONF
@@ -71,7 +71,7 @@ class MySqlManager(manager.Manager):
                 client.execute(cmd)
 
             LOG.debug("Database service check: database query is responsive")
-            return rd_instance.ServiceStatuses.HEALTHY
+            return service_status.ServiceStatuses.HEALTHY
         except Exception:
             return super(MySqlManager, self).get_service_status()
 
@@ -295,7 +295,7 @@ class MySqlManager(manager.Manager):
             self.app.restore_backup(context, backup_info, restore_location)
         except Exception:
             LOG.error("Failed to restore from backup %s.", backup_info['id'])
-            self.status.set_status(rd_instance.ServiceStatuses.FAILED)
+            self.status.set_status(service_status.ServiceStatuses.FAILED)
             raise
 
         LOG.info("Finished restore data from backup %s", backup_info['id'])
@@ -365,7 +365,7 @@ class MySqlManager(manager.Manager):
                                             slave_config)
         except Exception as err:
             LOG.error("Error enabling replication, error: %s", str(err))
-            self.status.set_status(rd_instance.ServiceStatuses.FAILED)
+            self.status.set_status(service_status.ServiceStatuses.FAILED)
             raise
 
     def detach_replica(self, context, for_failover=False):
@@ -431,3 +431,9 @@ class MySqlManager(manager.Manager):
     def demote_replication_master(self, context):
         LOG.info("Demoting replication master.")
         self.replication.demote_master(self.app)
+
+    def upgrade(self, context, upgrade_info):
+        """Upgrade the database."""
+        LOG.info('Starting to upgrade database, upgrade_info: %s',
+                 upgrade_info)
+        self.app.upgrade(upgrade_info)
@@ -27,7 +27,6 @@ from sqlalchemy.sql.expression import text
 from trove.backup.state import BackupState
 from trove.common import cfg
 from trove.common import exception
-from trove.common import instance
 from trove.common import utils
 from trove.common.configurations import MySQLConfParser
 from trove.common.db.mysql import models
@@ -43,6 +42,7 @@ from trove.guestagent.datastore import service
 from trove.guestagent.datastore.mysql_common import service as commmon_service
 from trove.guestagent.utils import docker as docker_util
 from trove.guestagent.utils import mysql as mysql_util
+from trove.instance import service_status
 
 LOG = logging.getLogger(__name__)
 CONF = cfg.CONF
@@ -77,24 +77,24 @@ class BaseMySqlAppStatus(service.BaseDbStatus):
             cmd = 'mysql -uroot -p%s -e "select 1;"' % root_pass
             try:
                 docker_util.run_command(self.docker_client, cmd)
-                return instance.ServiceStatuses.HEALTHY
+                return service_status.ServiceStatuses.HEALTHY
             except Exception as exc:
                 LOG.warning('Failed to run docker command, error: %s',
                             str(exc))
                 container_log = docker_util.get_container_logs(
                     self.docker_client, tail='all')
-                LOG.warning('container log: %s', '\n'.join(container_log))
-                return instance.ServiceStatuses.RUNNING
+                LOG.debug('container log: \n%s', '\n'.join(container_log))
+                return service_status.ServiceStatuses.RUNNING
         elif status == "not running":
-            return instance.ServiceStatuses.SHUTDOWN
+            return service_status.ServiceStatuses.SHUTDOWN
         elif status == "paused":
-            return instance.ServiceStatuses.PAUSED
+            return service_status.ServiceStatuses.PAUSED
         elif status == "exited":
-            return instance.ServiceStatuses.SHUTDOWN
+            return service_status.ServiceStatuses.SHUTDOWN
         elif status == "dead":
-            return instance.ServiceStatuses.CRASHED
+            return service_status.ServiceStatuses.CRASHED
         else:
-            return instance.ServiceStatuses.UNKNOWN
+            return service_status.ServiceStatuses.UNKNOWN
 
 
 @six.add_metaclass(abc.ABCMeta)
@@ -638,8 +638,9 @@ class BaseMySqlApp(object):
             raise exception.TroveError(_("Failed to start mysql"))
 
         if not self.status.wait_for_real_status_to_change_to(
-                instance.ServiceStatuses.HEALTHY,
-                CONF.state_change_wait_time, update_db):
+            service_status.ServiceStatuses.HEALTHY,
+            CONF.state_change_wait_time, update_db
+        ):
             raise exception.TroveError(_("Failed to start mysql"))
 
     def start_db_with_conf_changes(self, config_contents):
@@ -662,7 +663,7 @@ class BaseMySqlApp(object):
             raise exception.TroveError("Failed to stop mysql")
 
         if not self.status.wait_for_real_status_to_change_to(
-                instance.ServiceStatuses.SHUTDOWN,
+                service_status.ServiceStatuses.SHUTDOWN,
                 CONF.state_change_wait_time, update_db):
             raise exception.TroveError("Failed to stop mysql")
 
@@ -714,7 +715,7 @@ class BaseMySqlApp(object):
             raise exception.TroveError("Failed to restart mysql")
 
         if not self.status.wait_for_real_status_to_change_to(
-                instance.ServiceStatuses.HEALTHY,
+                service_status.ServiceStatuses.HEALTHY,
                 CONF.state_change_wait_time, update_db=False):
             raise exception.TroveError("Failed to start mysql")
 
@@ -949,6 +950,20 @@ class BaseMySqlApp(object):
             q = "set global read_only = %s" % read_only
             client.execute(text(str(q)))
 
+    def upgrade(self, upgrade_info):
+        """Upgrade the database."""
+        new_version = upgrade_info.get('datastore_version')
+
+        LOG.info('Stopping db container for upgrade')
+        self.stop_db()
+
+        LOG.info('Deleting db container for upgrade')
+        docker_util.remove_container(self.docker_client)
+
+        LOG.info('Starting new db container with version %s for upgrade',
+                 new_version)
+        self.start_db(update_db=True, ds_version=new_version)
+
+
 class BaseMySqlRootAccess(object):
     def __init__(self, mysql_app):
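The new `BaseMySqlApp.upgrade()` above follows a container-replacement pattern: stop the database, remove the old container, then start a fresh container from the target version's image, so only the service binary changes while the data itself is expected to live outside the container. A hedged, runnable sketch of that sequence with a stand-in `AppStub` class (not Trove's implementation):

```python
class AppStub:
    """Records the upgrade call sequence instead of driving Docker."""

    def __init__(self):
        self.calls = []

    def stop_db(self):
        self.calls.append('stop_db')

    def remove_container(self):
        self.calls.append('remove_container')

    def start_db(self, ds_version):
        self.calls.append('start_db:%s' % ds_version)

    def upgrade(self, upgrade_info):
        new_version = upgrade_info.get('datastore_version')
        self.stop_db()             # flush and stop the running database
        self.remove_container()    # discard the old-version container
        self.start_db(ds_version=new_version)  # boot the target image


app = AppStub()
app.upgrade({'datastore_version': '5.7.30'})
print(app.calls)  # ['stop_db', 'remove_container', 'start_db:5.7.30']
```

The ordering matters: removing the container before a clean stop could leave the datastore in a crash-recovery state after the new container starts.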
@@ -20,11 +20,11 @@ from oslo_utils import timeutils
 
 from trove.common import cfg
 from trove.common import context as trove_context
-from trove.common import instance
 from trove.common.i18n import _
 from trove.conductor import api as conductor_api
 from trove.guestagent.common import guestagent_utils
 from trove.guestagent.common import operating_system
+from trove.instance import service_status
 
 LOG = logging.getLogger(__name__)
 CONF = cfg.CONF
@@ -74,7 +74,7 @@ class BaseDbStatus(object):
             operating_system.write_file(prepare_start_file, '')
         self.__refresh_prepare_completed()
 
-        self.set_status(instance.ServiceStatuses.BUILDING, True)
+        self.set_status(service_status.ServiceStatuses.BUILDING, True)
 
     def set_ready(self):
         prepare_end_file = guestagent_utils.build_file_path(
@ -92,9 +92,9 @@ class BaseDbStatus(object):
|
|||||||
|
|
||||||
final_status = None
|
final_status = None
|
||||||
if error_occurred:
|
if error_occurred:
|
||||||
final_status = instance.ServiceStatuses.FAILED
|
final_status = service_status.ServiceStatuses.FAILED
|
||||||
elif post_processing:
|
elif post_processing:
|
||||||
final_status = instance.ServiceStatuses.INSTANCE_READY
|
final_status = service_status.ServiceStatuses.INSTANCE_READY
|
||||||
|
|
||||||
if final_status:
|
if final_status:
|
||||||
LOG.info("Set final status to %s.", final_status)
|
LOG.info("Set final status to %s.", final_status)
|
||||||
@ -126,8 +126,8 @@ class BaseDbStatus(object):
|
|||||||
def is_running(self):
|
def is_running(self):
|
||||||
"""True if DB server is running."""
|
"""True if DB server is running."""
|
||||||
return (self.status is not None and
|
return (self.status is not None and
|
||||||
self.status in [instance.ServiceStatuses.RUNNING,
|
self.status in [service_status.ServiceStatuses.RUNNING,
|
||||||
instance.ServiceStatuses.HEALTHY])
|
service_status.ServiceStatuses.HEALTHY])
|
||||||
|
|
||||||
def set_status(self, status, force=False):
|
def set_status(self, status, force=False):
|
||||||
"""Use conductor to update the DB app status."""
|
"""Use conductor to update the DB app status."""
|
||||||
@ -199,7 +199,7 @@ class BaseDbStatus(object):
|
|||||||
"""
|
"""
|
||||||
LOG.debug("Waiting for database to start up.")
|
LOG.debug("Waiting for database to start up.")
|
||||||
if not self._wait_for_database_service_status(
|
if not self._wait_for_database_service_status(
|
||||||
instance.ServiceStatuses.RUNNING, timeout, update_db):
|
service_status.ServiceStatuses.RUNNING, timeout, update_db):
|
||||||
raise RuntimeError(_("Database failed to start."))
|
raise RuntimeError(_("Database failed to start."))
|
||||||
|
|
||||||
LOG.info("Database has started successfully.")
|
LOG.info("Database has started successfully.")
|
||||||
@ -229,7 +229,7 @@ class BaseDbStatus(object):
|
|||||||
|
|
||||||
LOG.debug("Waiting for database to shutdown.")
|
LOG.debug("Waiting for database to shutdown.")
|
||||||
if not self._wait_for_database_service_status(
|
if not self._wait_for_database_service_status(
|
||||||
instance.ServiceStatuses.SHUTDOWN, timeout, update_db):
|
service_status.ServiceStatuses.SHUTDOWN, timeout, update_db):
|
||||||
raise RuntimeError(_("Database failed to stop."))
|
raise RuntimeError(_("Database failed to stop."))
|
||||||
|
|
||||||
LOG.info("Database has stopped successfully.")
|
LOG.info("Database has stopped successfully.")
|
||||||
@ -283,9 +283,19 @@ class BaseDbStatus(object):
|
|||||||
# outside.
|
# outside.
|
||||||
loop = True
|
loop = True
|
||||||
|
|
||||||
|
# We need 3 (by default) consecutive success db connections for status
|
||||||
|
# 'HEALTHY'
|
||||||
|
healthy_count = 0
|
||||||
|
|
||||||
while loop:
|
while loop:
|
||||||
self.status = self.get_actual_db_status()
|
self.status = self.get_actual_db_status()
|
||||||
if self.status == status:
|
if self.status == status:
|
||||||
|
if (status == service_status.ServiceStatuses.HEALTHY and
|
||||||
|
healthy_count < 2):
|
||||||
|
healthy_count += 1
|
||||||
|
time.sleep(CONF.state_change_poll_time)
|
||||||
|
continue
|
||||||
|
|
||||||
if update_db:
|
if update_db:
|
||||||
self.set_status(self.status)
|
self.set_status(self.status)
|
||||||
return True
|
return True
|
||||||
|
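The polling change above only reports HEALTHY after several consecutive successful database connections, so one lucky check cannot flip the status. A standalone sketch of that debounce (the threshold of 3 mirrors the comment in the diff; the string statuses and function names here are illustrative, not Trove's API):

```python
def wait_for_status(poll, expected, healthy_threshold=3, max_attempts=100):
    """Poll `poll()` until it returns `expected`.

    For HEALTHY, require `healthy_threshold` consecutive matches before
    reporting success, so a single lucky connection does not count.
    """
    healthy_count = 0
    for _ in range(max_attempts):
        status = poll()
        if status == expected:
            healthy_count += 1
            if expected != 'HEALTHY' or healthy_count >= healthy_threshold:
                return True
        else:
            healthy_count = 0  # a failed check resets the streak
    return False

# Three consecutive HEALTHY polls are needed before success is reported.
statuses = iter(['BUILDING', 'HEALTHY', 'HEALTHY', 'HEALTHY'])
print(wait_for_status(lambda: next(statuses), 'HEALTHY'))  # True
```

Note that, unlike this sketch, the diff above sleeps `CONF.state_change_poll_time` between the extra HEALTHY checks rather than counting raw attempts.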
@@ -19,13 +19,13 @@ from datetime import datetime
 from datetime import timedelta
 import os.path
 import re
-import six

 from novaclient import exceptions as nova_exceptions
 from oslo_config.cfg import NoSuchOptError
 from oslo_log import log as logging
 from oslo_utils import encodeutils
 from oslo_utils import netutils
+import six
 from sqlalchemy import func

 from trove.backup.models import Backup
@@ -33,15 +33,14 @@ from trove.common import cfg
 from trove.common import clients
 from trove.common import crypto_utils as cu
 from trove.common import exception
-from trove.common.i18n import _
-from trove.common import instance as tr_instance
 from trove.common import neutron
 from trove.common import notification
 from trove.common import server_group as srv_grp
 from trove.common import template
 from trove.common import timeutils
-from trove.common.trove_remote import create_trove_client
 from trove.common import utils
+from trove.common.i18n import _
+from trove.common.trove_remote import create_trove_client
 from trove.configuration.models import Configuration
 from trove.datastore import models as datastore_models
 from trove.datastore.models import DatastoreVersionMetadata as dvm
@@ -49,6 +48,7 @@ from trove.datastore.models import DBDatastoreVersionMetadata
 from trove.db import get_db_api
 from trove.db import models as dbmodels
 from trove.extensions.security_group.models import SecurityGroup
+from trove.instance import service_status as srvstatus
 from trove.instance.tasks import InstanceTask
 from trove.instance.tasks import InstanceTasks
 from trove.module import models as module_models
@@ -339,25 +339,25 @@ class SimpleInstance(object):
         action = self.db_info.task_status.action

         # Check if we are resetting status or force deleting
-        if (tr_instance.ServiceStatuses.UNKNOWN == self.datastore_status.status
+        if (srvstatus.ServiceStatuses.UNKNOWN == self.datastore_status.status
                 and action == InstanceTasks.DELETING.action):
             return InstanceStatus.SHUTDOWN
-        elif (tr_instance.ServiceStatuses.UNKNOWN ==
+        elif (srvstatus.ServiceStatuses.UNKNOWN ==
               self.datastore_status.status):
             return InstanceStatus.ERROR

         # Check for taskmanager status.
-        if 'BUILDING' == action:
+        if InstanceTasks.BUILDING.action == action:
             if 'ERROR' == self.db_info.server_status:
                 return InstanceStatus.ERROR
             return InstanceStatus.BUILD
-        if 'REBOOTING' == action:
+        if InstanceTasks.REBOOTING.action == action:
             return InstanceStatus.REBOOT
-        if 'RESIZING' == action:
+        if InstanceTasks.RESIZING.action == action:
             return InstanceStatus.RESIZE
-        if 'UPGRADING' == action:
+        if InstanceTasks.UPGRADING.action == action:
             return InstanceStatus.UPGRADE
-        if 'RESTART_REQUIRED' == action:
+        if InstanceTasks.RESTART_REQUIRED.action == action:
             return InstanceStatus.RESTART_REQUIRED
         if InstanceTasks.PROMOTING.action == action:
             return InstanceStatus.PROMOTE
@@ -396,10 +396,10 @@ class SimpleInstance(object):

         # Check against the service status.
         # The service is only paused during a reboot.
-        if tr_instance.ServiceStatuses.PAUSED == self.datastore_status.status:
+        if srvstatus.ServiceStatuses.PAUSED == self.datastore_status.status:
             return InstanceStatus.REBOOT
         # If the service status is NEW, then we are building.
-        if tr_instance.ServiceStatuses.NEW == self.datastore_status.status:
+        if srvstatus.ServiceStatuses.NEW == self.datastore_status.status:
             return InstanceStatus.BUILD

         # For everything else we can look at the service status mapping.
@@ -594,14 +594,19 @@ def load_instance(cls, context, id, needs_server=False,

 def load_instance_with_info(cls, context, id, cluster_id=None):
     db_info = get_db_info(context, id, cluster_id)
+    LOG.debug('Task status for instance %s: %s', id, db_info.task_status)

+    service_status = InstanceServiceStatus.find_by(instance_id=id)
+    if (db_info.task_status == InstanceTasks.NONE and
+            not service_status.is_uptodate()):
+        LOG.warning('Guest agent heartbeat for instance %s has expried', id)
+        service_status.status = \
+            srvstatus.ServiceStatuses.FAILED_TIMEOUT_GUESTAGENT
+
     load_simple_instance_server_status(context, db_info)

     load_simple_instance_addresses(context, db_info)

-    service_status = InstanceServiceStatus.find_by(instance_id=id)
-    LOG.debug("Instance %(instance_id)s service status is %(service_status)s.",
-              {'instance_id': id, 'service_status': service_status.status})
     instance = cls(context, db_info, service_status)

     load_guest_info(instance, context, id)
@@ -879,7 +884,7 @@ class BaseInstance(SimpleInstance):

     def set_servicestatus_deleted(self):
         del_instance = InstanceServiceStatus.find_by(instance_id=self.id)
-        del_instance.set_status(tr_instance.ServiceStatuses.DELETED)
+        del_instance.set_status(srvstatus.ServiceStatuses.DELETED)
         del_instance.save()

     def set_instance_fault_deleted(self):
@@ -956,7 +961,12 @@ class BaseInstance(SimpleInstance):
         self.reset_task_status()

         reset_instance = InstanceServiceStatus.find_by(instance_id=self.id)
-        reset_instance.set_status(tr_instance.ServiceStatuses.UNKNOWN)
+        reset_instance.set_status(srvstatus.ServiceStatuses.UNKNOWN)
+        reset_instance.save()
+
+    def set_service_status(self, status):
+        reset_instance = InstanceServiceStatus.find_by(instance_id=self.id)
+        reset_instance.set_status(status)
         reset_instance.save()


@@ -1267,7 +1277,7 @@ class Instance(BuiltInstance):
         overrides = config.get_configuration_overrides()
         service_status = InstanceServiceStatus.create(
             instance_id=instance_id,
-            status=tr_instance.ServiceStatuses.NEW)
+            status=srvstatus.ServiceStatuses.NEW)

         if CONF.trove_dns_support:
             dns_client = clients.create_dns_client(context)
@@ -1762,19 +1772,32 @@ class Instances(object):
                     db.server_status = "SHUTDOWN"  # Fake it...
                     db.addresses = []

-                # volumes = find_volumes(server.id)
                 datastore_status = InstanceServiceStatus.find_by(
                     instance_id=db.id)
                 if not datastore_status.status:  # This should never happen.
                     LOG.error("Server status could not be read for "
                               "instance id(%s).", db.id)
                     continue
-                LOG.debug("Server api_status(%s).",
-                          datastore_status.status.api_status)
+
+                # Get the real-time service status.
+                LOG.debug('Task status for instance %s: %s', db.id,
+                          db.task_status)
+                if db.task_status == InstanceTasks.NONE:
+                    last_heartbeat_delta = (
+                        timeutils.utcnow() - datastore_status.updated_at)
+                    agent_expiry_interval = timedelta(
+                        seconds=CONF.agent_heartbeat_expiry)
+                    if last_heartbeat_delta > agent_expiry_interval:
+                        LOG.warning(
+                            'Guest agent heartbeat for instance %s has '
+                            'expried', id)
+                        datastore_status.status = \
+                            srvstatus.ServiceStatuses.FAILED_TIMEOUT_GUESTAGENT
             except exception.ModelNotFoundError:
                 LOG.error("Server status could not be read for "
                           "instance id(%s).", db.id)
                 continue

             ret.append(load_instance(context, db, datastore_status,
                                      server=server))
         return ret
@@ -2001,7 +2024,7 @@ class InstanceServiceStatus(dbmodels.DatabaseModelBase):
     def _validate(self, errors):
         if self.status is None:
             errors['status'] = "Cannot be None."
-        if tr_instance.ServiceStatus.from_code(self.status_id) is None:
+        if srvstatus.ServiceStatus.from_code(self.status_id) is None:
             errors['status_id'] = "Not valid."

     def get_status(self):
@@ -2012,7 +2035,7 @@ class InstanceServiceStatus(dbmodels.DatabaseModelBase):
         status of the service
         :rtype: trove.common.instance.ServiceStatus
         """
-        return tr_instance.ServiceStatus.from_code(self.status_id)
+        return srvstatus.ServiceStatus.from_code(self.status_id)

     def set_status(self, value):
         """
@@ -2027,6 +2050,15 @@ class InstanceServiceStatus(dbmodels.DatabaseModelBase):
         self['updated_at'] = timeutils.utcnow()
         return get_db_api().save(self)

+    def is_uptodate(self):
+        """Check if the service status heartbeat is up to date."""
+        heartbeat_expiry = timedelta(seconds=CONF.agent_heartbeat_expiry)
+        last_update = (timeutils.utcnow() - self.updated_at)
+        if last_update < heartbeat_expiry:
+            return True
+
+        return False
+
     status = property(get_status, set_status)


@@ -2039,6 +2071,6 @@ def persisted_models():


 MYSQL_RESPONSIVE_STATUSES = [
-    tr_instance.ServiceStatuses.RUNNING,
-    tr_instance.ServiceStatuses.HEALTHY
+    srvstatus.ServiceStatuses.RUNNING,
+    srvstatus.ServiceStatuses.HEALTHY
 ]
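The `is_uptodate()` helper and the heartbeat check in `load_instance_with_info` above boil down to one comparison: the age of the last status update versus `agent_heartbeat_expiry`. A minimal sketch of that decision, assuming plain `datetime` timestamps and string statuses in place of Trove's models:

```python
from datetime import datetime, timedelta

# Stands in for CONF.agent_heartbeat_expiry (a configurable number of seconds).
AGENT_HEARTBEAT_EXPIRY = 60

def is_uptodate(updated_at, now):
    """True if the last heartbeat is within the expiry window."""
    return (now - updated_at) < timedelta(seconds=AGENT_HEARTBEAT_EXPIRY)

def effective_status(status, updated_at, now, task_is_none=True):
    """Report FAILED_TIMEOUT_GUESTAGENT for an idle instance (task status
    NONE) whose guest agent heartbeat has expired; otherwise keep the
    stored status."""
    if task_is_none and not is_uptodate(updated_at, now):
        return 'FAILED_TIMEOUT_GUESTAGENT'
    return status

now = datetime(2021, 1, 1, 12, 0, 0)
print(effective_status('HEALTHY', now - timedelta(seconds=30), now))
# HEALTHY
print(effective_status('HEALTHY', now - timedelta(seconds=120), now))
# FAILED_TIMEOUT_GUESTAGENT
```

The task-status guard matters: an instance mid-operation (resize, upgrade) may legitimately miss heartbeats, so only instances with no active task are flipped to the timeout status.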
@@ -104,6 +104,7 @@ class ServiceStatuses(object):
     RESTART_REQUIRED = ServiceStatus(0x20, 'restart required',
                                      'RESTART_REQUIRED')
     HEALTHY = ServiceStatus(0x21, 'healthy', 'HEALTHY')
+    UPGRADING = ServiceStatus(0x22, 'upgrading', 'UPGRADING')


 # Dissuade further additions at run-time.
@@ -14,6 +14,7 @@

 import copy
 import os.path
+import time
 import traceback

 from cinderclient import exceptions as cinder_exceptions
@@ -21,7 +22,6 @@ from eventlet import greenthread
 from eventlet.timeout import Timeout
 from oslo_log import log as logging
 from swiftclient.client import ClientException
-import time

 from trove import rpc
 from trove.backup import models as bkup_models
@@ -33,9 +33,7 @@ from trove.cluster.models import Cluster
 from trove.cluster.models import DBCluster
 from trove.common import cfg
 from trove.common import clients
-from trove.common import crypto_utils as cu
 from trove.common import exception
-from trove.common import instance as rd_instance
 from trove.common import neutron
 from trove.common import template
 from trove.common import timeutils
@@ -51,7 +49,6 @@ from trove.common.exception import PollTimeOut
 from trove.common.exception import TroveError
 from trove.common.exception import VolumeCreationFailure
 from trove.common.i18n import _
-from trove.common.instance import ServiceStatuses
 from trove.common.notification import DBaaSInstanceRestart
 from trove.common.notification import DBaaSInstanceUpgrade
 from trove.common.notification import EndNotification
@@ -63,6 +60,7 @@ from trove.common.strategies.cluster import strategy
 from trove.common.utils import try_recover
 from trove.extensions.mysql import models as mysql_models
 from trove.instance import models as inst_models
+from trove.instance import service_status as srvstatus
 from trove.instance.models import BuiltInstance
 from trove.instance.models import DBInstance
 from trove.instance.models import FreshInstance
@@ -202,24 +200,36 @@ class ClusterTasks(Cluster):
                                   shard_id=None):
         """Wait for all instances to get READY."""
         return self._all_instances_acquire_status(
-            instance_ids, cluster_id, shard_id, ServiceStatuses.INSTANCE_READY,
-            fast_fail_statuses=[ServiceStatuses.FAILED,
-                                ServiceStatuses.FAILED_TIMEOUT_GUESTAGENT])
+            instance_ids, cluster_id, shard_id,
+            srvstatus.ServiceStatuses.INSTANCE_READY,
+            fast_fail_statuses=[
+                srvstatus.ServiceStatuses.FAILED,
+                srvstatus.ServiceStatuses.FAILED_TIMEOUT_GUESTAGENT
+            ]
+        )

     def _all_instances_shutdown(self, instance_ids, cluster_id,
                                 shard_id=None):
         """Wait for all instances to go SHUTDOWN."""
         return self._all_instances_acquire_status(
-            instance_ids, cluster_id, shard_id, ServiceStatuses.SHUTDOWN,
-            fast_fail_statuses=[ServiceStatuses.FAILED,
-                                ServiceStatuses.FAILED_TIMEOUT_GUESTAGENT])
+            instance_ids, cluster_id, shard_id,
+            srvstatus.ServiceStatuses.SHUTDOWN,
+            fast_fail_statuses=[
+                srvstatus.ServiceStatuses.FAILED,
+                srvstatus.ServiceStatuses.FAILED_TIMEOUT_GUESTAGENT
+            ]
+        )

     def _all_instances_running(self, instance_ids, cluster_id, shard_id=None):
         """Wait for all instances to become ACTIVE."""
         return self._all_instances_acquire_status(
-            instance_ids, cluster_id, shard_id, ServiceStatuses.RUNNING,
-            fast_fail_statuses=[ServiceStatuses.FAILED,
-                                ServiceStatuses.FAILED_TIMEOUT_GUESTAGENT])
+            instance_ids, cluster_id, shard_id,
+            srvstatus.ServiceStatuses.RUNNING,
+            fast_fail_statuses=[
+                srvstatus.ServiceStatuses.FAILED,
+                srvstatus.ServiceStatuses.FAILED_TIMEOUT_GUESTAGENT
+            ]
+        )

     def _all_instances_acquire_status(
             self, instance_ids, cluster_id, shard_id, expected_status,
@@ -427,7 +437,10 @@ class FreshInstanceTasks(FreshInstance, NotifyMixin, ConfigurationMixin):
             utils.poll_until(self._service_is_active,
                              sleep_time=CONF.usage_sleep_time,
                              time_out=timeout)

             LOG.info("Created instance %s successfully.", self.id)
+            if not self.db_info.task_status.is_error:
+                self.reset_task_status()
             TroveInstanceCreate(instance=self,
                                 instance_size=flavor['ram']).notify()
         except (TroveError, PollTimeOut) as ex:
@@ -582,9 +595,6 @@ class FreshInstanceTasks(FreshInstance, NotifyMixin, ConfigurationMixin):
         if root_password:
             self.report_root_enabled()

-        if not self.db_info.task_status.is_error:
-            self.reset_task_status()
-
         # when DNS is supported, we attempt to add this after the
         # instance is prepared. Otherwise, if DNS fails, instances
         # end up in a poorer state and there's no tooling around
@@ -723,12 +733,13 @@ class FreshInstanceTasks(FreshInstance, NotifyMixin, ConfigurationMixin):
         if CONF.update_status_on_fail:
             # Updating service status
             service = InstanceServiceStatus.find_by(instance_id=self.id)
-            service.set_status(ServiceStatuses.FAILED_TIMEOUT_GUESTAGENT)
+            service.set_status(
+                srvstatus.ServiceStatuses.FAILED_TIMEOUT_GUESTAGENT)
             service.save()
             LOG.error(
                 "Service status: %s, service error description: %s",
-                ServiceStatuses.FAILED_TIMEOUT_GUESTAGENT.api_status,
-                ServiceStatuses.FAILED_TIMEOUT_GUESTAGENT.description
+                srvstatus.ServiceStatuses.FAILED_TIMEOUT_GUESTAGENT.api_status,
+                srvstatus.ServiceStatuses.FAILED_TIMEOUT_GUESTAGENT.description
             )

             # Updating instance status
@@ -756,14 +767,14 @@ class FreshInstanceTasks(FreshInstance, NotifyMixin, ConfigurationMixin):
         service = InstanceServiceStatus.find_by(instance_id=self.id)
         status = service.get_status()

-        if (status == rd_instance.ServiceStatuses.RUNNING or
-                status == rd_instance.ServiceStatuses.INSTANCE_READY or
-                status == rd_instance.ServiceStatuses.HEALTHY):
+        if (status == srvstatus.ServiceStatuses.RUNNING or
+                status == srvstatus.ServiceStatuses.INSTANCE_READY or
+                status == srvstatus.ServiceStatuses.HEALTHY):
             return True
-        elif status not in [rd_instance.ServiceStatuses.NEW,
-                            rd_instance.ServiceStatuses.BUILDING,
-                            rd_instance.ServiceStatuses.UNKNOWN,
-                            rd_instance.ServiceStatuses.DELETED]:
+        elif status not in [srvstatus.ServiceStatuses.NEW,
+                            srvstatus.ServiceStatuses.BUILDING,
+                            srvstatus.ServiceStatuses.UNKNOWN,
+                            srvstatus.ServiceStatuses.DELETED]:
             raise TroveError(_("Service not active, status: %s") % status)

         c_id = self.db_info.compute_instance_id
@@ -1069,6 +1080,27 @@ class BuiltInstanceTasks(BuiltInstance, NotifyMixin, ConfigurationMixin):
         associated with a compute server.
         """

+    def is_service_healthy(self):
+        """Wait for the db service up and running.
+
+        This method is supposed to be called with poll_until against an
+        existing db instance.
+        """
+        service = InstanceServiceStatus.find_by(instance_id=self.id)
+        status = service.get_status()
+
+        if service.is_uptodate():
+            if status in [srvstatus.ServiceStatuses.HEALTHY]:
+                return True
+            elif status in [
+                srvstatus.ServiceStatuses.FAILED,
+                srvstatus.ServiceStatuses.UNKNOWN,
+                srvstatus.ServiceStatuses.FAILED_TIMEOUT_GUESTAGENT
+            ]:
+                raise TroveError('Database service error, status: %s' % status)
+
+        return False
+
     def resize_volume(self, new_size):
         LOG.info("Resizing volume for instance %(instance_id)s from "
                  "%(old_size)s GB to %(new_size)s GB.",
@@ -1219,6 +1251,10 @@ class BuiltInstanceTasks(BuiltInstance, NotifyMixin, ConfigurationMixin):
                 LOG.info("Starting database on instance %s.", self.id)
                 self.guest.restart()

+                # Wait for database service up and running
+                utils.poll_until(self.is_service_healthy,
+                                 time_out=CONF.report_interval * 2)
+
                 LOG.info("Rebooted instance %s successfully.", self.id)
             except Exception as e:
                 LOG.error("Failed to reboot instance %(id)s: %(e)s",
@@ -1276,79 +1312,40 @@ class BuiltInstanceTasks(BuiltInstance, NotifyMixin, ConfigurationMixin):
         This does not change the reference for this BuiltInstanceTask
         """
         datastore_status = InstanceServiceStatus.find_by(instance_id=self.id)
-        datastore_status.status = rd_instance.ServiceStatuses.PAUSED
+        datastore_status.status = srvstatus.ServiceStatuses.PAUSED
         datastore_status.save()

     def upgrade(self, datastore_version):
         LOG.info("Upgrading instance %s to new datastore version %s",
                  self.id, datastore_version)

-        def server_finished_rebuilding():
-            self.refresh_compute_server_info()
-            return not self.server_status_matches(['REBUILD'])
+        self.set_service_status(srvstatus.ServiceStatuses.UPGRADING)

         try:
             upgrade_info = self.guest.pre_upgrade()
+            upgrade_info = upgrade_info if upgrade_info else {}
+            upgrade_info.update({'datastore_version': datastore_version.name})
+            self.guest.upgrade(upgrade_info)

-            if self.volume_id:
-                volume = self.volume_client.volumes.get(self.volume_id)
-                volume_device = self._fix_device_path(
-                    volume.attachments[0]['device'])
-                if volume:
-                    upgrade_info['device'] = volume_device
-
-            # BUG(1650518): Cleanup in the Pike release some instances
-            # that we will be upgrading will be pre secureserialier
-            # and will have no instance_key entries. If this is one of
-            # those instances, make a key. That will make it appear in
-            # the injected files that are generated next. From this
-            # point, and until the guest comes up, attempting to send
-            # messages to it will fail because the RPC framework will
-            # encrypt messages to a guest which potentially doesn't
-            # have the code to handle it.
-            if CONF.enable_secure_rpc_messaging and (
-                    self.db_info.encrypted_key is None):
-                encrypted_key = cu.encode_data(cu.encrypt_data(
-                    cu.generate_random_key(),
-                    CONF.inst_rpc_key_encr_key))
-                self.update_db(encrypted_key=encrypted_key)
-                LOG.debug("Generated unique RPC encryption key for "
-                          "instance = %(id)s, key = %(key)s",
-                          {'id': self.id, 'key': encrypted_key})
-
-            injected_files = self.get_injected_files(
-                datastore_version.manager)
-            LOG.debug("Rebuilding instance %(instance)s with image %(image)s.",
-                      {'instance': self, 'image': datastore_version.image_id})
-            self.server.rebuild(datastore_version.image_id,
-                                files=injected_files)
-            utils.poll_until(
-                server_finished_rebuilding,
-                sleep_time=5, time_out=600)
-
-            if not self.server_status_matches(['ACTIVE']):
-                raise TroveError(_("Instance %(instance)s failed to "
-                                   "upgrade to %(datastore_version)s"),
-                                 instance=self,
-                                 datastore_version=datastore_version)
-
-            LOG.info('Finished rebuilding server for instance %s', self.id)
-
-            self.guest.post_upgrade(upgrade_info)
+            # Wait for db instance healthy
+            LOG.info('Waiting for instance %s to be healthy after upgrading',
+                     self.id)
+            utils.poll_until(self.is_service_healthy, time_out=600,
+                             sleep_time=5)

             self.reset_task_status()
             LOG.info("Finished upgrading instance %s to new datastore "
-                     "version %s",
-                     self.id, datastore_version)
+                     "version %s", self.id, datastore_version)
         except Exception as e:
-            LOG.exception(e)
-            err = inst_models.InstanceTasks.BUILDING_ERROR_SERVER
-            self.update_db(task_status=err)
-            raise e
+            LOG.error('Failed to upgrade instance %s, error: %s', self.id, e)
+            self.update_db(
+                task_status=inst_models.InstanceTasks.BUILDING_ERROR_SERVER)

-    # Some cinder drivers appear to return "vdb" instead of "/dev/vdb".
-    # We need to account for that.
     def _fix_device_path(self, device):
|
||||||
|
"""Get correct device path.
|
||||||
|
|
||||||
|
Some cinder drivers appear to return "vdb" instead of "/dev/vdb".
|
||||||
|
"""
|
||||||
if device.startswith("/dev"):
|
if device.startswith("/dev"):
|
||||||
return device
|
return device
|
||||||
else:
|
else:
|
||||||
@ -1515,7 +1512,7 @@ class ResizeVolumeAction(object):
                           "status to failed.", {'func': orig_func.__name__,
                                                 'id': self.instance.id})
             service = InstanceServiceStatus.find_by(instance_id=self.instance.id)
-            service.set_status(ServiceStatuses.FAILED)
+            service.set_status(srvstatus.ServiceStatuses.FAILED)
             service.save()
 
     def _recover_restart(self, orig_func):
@ -1790,7 +1787,7 @@ class ResizeActionBase(object):
     def _datastore_is_offline(self):
         self.instance._refresh_datastore_status()
         return (self.instance.datastore_status_matches(
-            rd_instance.ServiceStatuses.SHUTDOWN))
+            srvstatus.ServiceStatuses.SHUTDOWN))
 
     def _revert_nova_action(self):
         LOG.debug("Instance %s calling Compute revert resize...",
@ -1811,7 +1808,7 @@ class ResizeActionBase(object):
     def _guest_is_awake(self):
         self.instance._refresh_datastore_status()
         return not self.instance.datastore_status_matches(
-            rd_instance.ServiceStatuses.PAUSED)
+            srvstatus.ServiceStatuses.PAUSED)
 
     def _perform_nova_action(self):
         """Calls Nova to resize or migrate an instance, and confirms."""
@ -267,6 +267,15 @@ class RebootTestBase(ActionTestBase):
         poll_until(is_finished_rebooting, time_out=TIME_OUT_TIME)
 
+    def wait_for_status(self, status, timeout=60):
+        def is_status():
+            instance = self.instance
+            if instance.status in status:
+                return True
+            return False
+
+        poll_until(is_status, time_out=timeout)
+
 
 @test(groups=[tests.DBAAS_API_INSTANCE_ACTIONS],
       depends_on_groups=[tests.DBAAS_API_DATABASES],
@ -312,9 +321,9 @@ class StopTests(RebootTestBase):
 
     @test(depends_on=[test_ensure_mysql_is_running])
     def test_stop_mysql(self):
-        """Stops MySQL."""
+        """Stops MySQL by admin."""
         instance_info.dbaas_admin.management.stop(self.instance_id)
-        self.wait_for_failure_status()
+        self.wait_for_status(['SHUTDOWN'], timeout=60)
 
     @test(depends_on=[test_stop_mysql])
     def test_volume_info_while_mysql_is_down(self):
@ -13,21 +13,21 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
-from novaclient.exceptions import BadRequest
-from novaclient.v2.servers import Server
 from unittest import mock
 
+from novaclient.exceptions import BadRequest
+from novaclient.v2.servers import Server
 from oslo_messaging._drivers.common import RPCException
 from proboscis import test
 from testtools import TestCase
 
-from trove.common.exception import PollTimeOut
-from trove.common.exception import TroveError
-from trove.common import instance as rd_instance
 from trove.common import template
 from trove.common import utils
+from trove.common.exception import PollTimeOut
+from trove.common.exception import TroveError
 from trove.datastore.models import DatastoreVersion
 from trove.guestagent import api as guest
+from trove.instance import service_status as srvstatus
 from trove.instance.models import DBInstance
 from trove.instance.models import InstanceServiceStatus
 from trove.instance.tasks import InstanceTasks
@ -63,7 +63,7 @@ class ResizeTestBase(TestCase):
             self.server,
             datastore_status=InstanceServiceStatus.create(
                 instance_id=self.db_info.id,
-                status=rd_instance.ServiceStatuses.RUNNING))
+                status=srvstatus.ServiceStatuses.RUNNING))
         self.instance.server.flavor = {'id': OLD_FLAVOR_ID}
         self.guest = mock.MagicMock(spec=guest.API)
         self.instance._guest = self.guest
@ -124,7 +124,7 @@ class ResizeTests(ResizeTestBase):
             task_status=InstanceTasks.NONE)
 
     def test_nova_wont_resize(self):
-        self._datastore_changes_to(rd_instance.ServiceStatuses.SHUTDOWN)
+        self._datastore_changes_to(srvstatus.ServiceStatuses.SHUTDOWN)
         self.server.resize.side_effect = BadRequest(400)
         self.server.status = "ACTIVE"
         self.assertRaises(BadRequest, self.action.execute)
@ -135,7 +135,7 @@ class ResizeTests(ResizeTestBase):
             task_status=InstanceTasks.NONE)
 
     def test_nova_resize_timeout(self):
-        self._datastore_changes_to(rd_instance.ServiceStatuses.SHUTDOWN)
+        self._datastore_changes_to(srvstatus.ServiceStatuses.SHUTDOWN)
         self.server.status = "ACTIVE"
 
         with mock.patch.object(utils, 'poll_until') as mock_poll_until:
@ -150,7 +150,7 @@ class ResizeTests(ResizeTestBase):
             task_status=InstanceTasks.NONE)
 
     def test_nova_doesnt_change_flavor(self):
-        self._datastore_changes_to(rd_instance.ServiceStatuses.SHUTDOWN)
+        self._datastore_changes_to(srvstatus.ServiceStatuses.SHUTDOWN)
 
         with mock.patch.object(utils, 'poll_until') as mock_poll_until:
             self.poll_until_side_effects.extend([
@ -177,7 +177,7 @@ class ResizeTests(ResizeTestBase):
             task_status=InstanceTasks.NONE)
 
     def test_nova_resize_fails(self):
-        self._datastore_changes_to(rd_instance.ServiceStatuses.SHUTDOWN)
+        self._datastore_changes_to(srvstatus.ServiceStatuses.SHUTDOWN)
 
         with mock.patch.object(utils, 'poll_until') as mock_poll_until:
             self.poll_until_side_effects.extend([
@ -200,7 +200,7 @@ class ResizeTests(ResizeTestBase):
             task_status=InstanceTasks.NONE)
 
     def test_nova_resizes_in_weird_state(self):
-        self._datastore_changes_to(rd_instance.ServiceStatuses.SHUTDOWN)
+        self._datastore_changes_to(srvstatus.ServiceStatuses.SHUTDOWN)
 
         with mock.patch.object(utils, 'poll_until') as mock_poll_until:
             self.poll_until_side_effects.extend([
@ -224,7 +224,7 @@ class ResizeTests(ResizeTestBase):
             task_status=InstanceTasks.NONE)
 
     def test_guest_is_not_okay(self):
-        self._datastore_changes_to(rd_instance.ServiceStatuses.SHUTDOWN)
+        self._datastore_changes_to(srvstatus.ServiceStatuses.SHUTDOWN)
 
         with mock.patch.object(utils, 'poll_until') as mock_poll_until:
             self.poll_until_side_effects.extend([
@ -237,7 +237,7 @@ class ResizeTests(ResizeTestBase):
 
         self.instance.set_datastore_status_to_paused.side_effect = (
             lambda: self._datastore_changes_to(
-                rd_instance.ServiceStatuses.PAUSED))
+                srvstatus.ServiceStatuses.PAUSED))
 
         self.assertRaises(PollTimeOut, self.action.execute)
 
@ -257,7 +257,7 @@ class ResizeTests(ResizeTestBase):
             task_status=InstanceTasks.NONE)
 
     def test_mysql_is_not_okay(self):
-        self._datastore_changes_to(rd_instance.ServiceStatuses.SHUTDOWN)
+        self._datastore_changes_to(srvstatus.ServiceStatuses.SHUTDOWN)
 
         with mock.patch.object(utils, 'poll_until') as mock_poll_until:
             self.poll_until_side_effects.extend([
@ -269,7 +269,7 @@ class ResizeTests(ResizeTestBase):
 
         self.instance.set_datastore_status_to_paused.side_effect = (
             lambda: self._datastore_changes_to(
-                rd_instance.ServiceStatuses.SHUTDOWN))
+                srvstatus.ServiceStatuses.SHUTDOWN))
 
         self._start_mysql()
         self.assertRaises(PollTimeOut, self.action.execute)
@ -290,7 +290,7 @@ class ResizeTests(ResizeTestBase):
             task_status=InstanceTasks.NONE)
 
     def test_confirm_resize_fails(self):
-        self._datastore_changes_to(rd_instance.ServiceStatuses.SHUTDOWN)
+        self._datastore_changes_to(srvstatus.ServiceStatuses.SHUTDOWN)
 
         with mock.patch.object(utils, 'poll_until') as mock_poll_until:
             self.poll_until_side_effects.extend([
@ -303,7 +303,7 @@ class ResizeTests(ResizeTestBase):
 
         self.instance.set_datastore_status_to_paused.side_effect = (
             lambda: self._datastore_changes_to(
-                rd_instance.ServiceStatuses.RUNNING))
+                srvstatus.ServiceStatuses.RUNNING))
         self.server.confirm_resize.side_effect = BadRequest(400)
 
         self._start_mysql()
@ -322,7 +322,7 @@ class ResizeTests(ResizeTestBase):
             task_status=InstanceTasks.NONE)
 
     def test_revert_nova_fails(self):
-        self._datastore_changes_to(rd_instance.ServiceStatuses.SHUTDOWN)
+        self._datastore_changes_to(srvstatus.ServiceStatuses.SHUTDOWN)
 
         with mock.patch.object(utils, 'poll_until') as mock_poll_until:
             self.poll_until_side_effects.extend([
@ -335,7 +335,7 @@ class ResizeTests(ResizeTestBase):
 
         self.instance.set_datastore_status_to_paused.side_effect = (
             lambda: self._datastore_changes_to(
-                rd_instance.ServiceStatuses.PAUSED))
+                srvstatus.ServiceStatuses.PAUSED))
 
         self.assertRaises(PollTimeOut, self.action.execute)
 
@ -363,7 +363,7 @@ class MigrateTests(ResizeTestBase):
         self.action = models.MigrateAction(self.instance)
 
     def test_successful_migrate(self):
-        self._datastore_changes_to(rd_instance.ServiceStatuses.SHUTDOWN)
+        self._datastore_changes_to(srvstatus.ServiceStatuses.SHUTDOWN)
 
         with mock.patch.object(utils, 'poll_until') as mock_poll_until:
             self.poll_until_side_effects.extend([
@ -375,7 +375,7 @@ class MigrateTests(ResizeTestBase):
 
         self.instance.set_datastore_status_to_paused.side_effect = (
             lambda: self._datastore_changes_to(
-                rd_instance.ServiceStatuses.RUNNING))
+                srvstatus.ServiceStatuses.RUNNING))
 
         self.action.execute()
@ -12,23 +12,24 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
+from unittest import mock
+
 from novaclient.v2.servers import Server
 from proboscis import after_class
-from proboscis.asserts import assert_equal
-from proboscis.asserts import assert_raises
 from proboscis import before_class
 from proboscis import SkipTest
 from proboscis import test
-from unittest import mock
+from proboscis.asserts import assert_equal
+from proboscis.asserts import assert_raises
 
 from trove.backup import models as backup_models
 from trove.backup import state
-from trove.common.context import TroveContext
 from trove.common import exception
-import trove.common.instance as tr_instance
+from trove.common.context import TroveContext
 from trove.extensions.mgmt.instances.models import MgmtInstance
 from trove.extensions.mgmt.instances.service import MgmtInstanceController
 from trove.instance import models as imodels
+from trove.instance import service_status as srvstatus
 from trove.instance.models import DBInstance
 from trove.instance.tasks import InstanceTasks
 from trove.tests.config import CONFIG
@ -65,7 +66,7 @@ class MgmtInstanceBase(object):
             self.db_info,
             self.server,
             datastore_status=imodels.InstanceServiceStatus(
-                tr_instance.ServiceStatuses.RUNNING))
+                srvstatus.ServiceStatuses.RUNNING))
 
     def _make_request(self, path='/', context=None, **kwargs):
         from webob import Request
@ -20,7 +20,7 @@ import eventlet
 from oslo_log import log as logging
 
 from trove.common import exception as rd_exception
-from trove.common import instance as rd_instance
+from trove.instance import service_status as srvstatus
 from trove.tests.util import unquote_user_host
 
 DB = {}
@ -236,9 +236,9 @@ class FakeGuest(object):
         def update_db():
             status = InstanceServiceStatus.find_by(instance_id=self.id)
             if instance_name.endswith('GUEST_ERROR'):
-                status.status = rd_instance.ServiceStatuses.FAILED
+                status.status = srvstatus.ServiceStatuses.FAILED
             else:
-                status.status = rd_instance.ServiceStatuses.HEALTHY
+                status.status = srvstatus.ServiceStatuses.HEALTHY
             status.save()
             AgentHeartBeat.create(instance_id=self.id)
         eventlet.spawn_after(3.5, update_db)
@ -246,8 +246,8 @@ class FakeGuest(object):
     def _set_task_status(self, new_status='HEALTHY'):
         from trove.instance.models import InstanceServiceStatus
         print("Setting status to %s" % new_status)
-        states = {'HEALTHY': rd_instance.ServiceStatuses.HEALTHY,
-                  'SHUTDOWN': rd_instance.ServiceStatuses.SHUTDOWN,
+        states = {'HEALTHY': srvstatus.ServiceStatuses.HEALTHY,
+                  'SHUTDOWN': srvstatus.ServiceStatuses.SHUTDOWN,
                   }
         status = InstanceServiceStatus.find_by(instance_id=self.id)
         status.status = states[new_status]
@ -13,18 +13,17 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
+import collections
+import uuid
+
+import eventlet
 from novaclient import exceptions as nova_exceptions
 from oslo_log import log as logging
 
 from trove.common.exception import PollTimeOut
-from trove.common import instance as rd_instance
+from trove.instance import service_status as srvstatus
 from trove.tests.fakes.common import authorize
 
-import collections
-import eventlet
-import uuid
-
 
 LOG = logging.getLogger(__name__)
 FAKE_HOSTS = ["fake_host_1", "fake_host_2"]
@ -326,7 +325,7 @@ class FakeServers(object):
             instance = DBInstance.find_by(compute_instance_id=id)
             LOG.debug("Setting server %s to running", instance.id)
             status = InstanceServiceStatus.find_by(instance_id=instance.id)
-            status.status = rd_instance.ServiceStatuses.RUNNING
+            status.status = srvstatus.ServiceStatuses.RUNNING
             status.save()
         eventlet.spawn_after(time_from_now, set_server_running)
@ -18,14 +18,13 @@ from oslo_utils import timeutils
 from trove.backup import models as bkup_models
 from trove.backup import state
 from trove.common import exception as t_exception
-from trove.common.instance import ServiceStatuses
 from trove.common import utils
 from trove.conductor import manager as conductor_manager
 from trove.instance import models as t_models
+from trove.instance.service_status import ServiceStatuses
 from trove.tests.unittests import trove_testtools
 from trove.tests.unittests.util import util
 
 
 # See LP bug #1255178
 OLD_DBB_SAVE = bkup_models.DBBackup.save
@ -11,15 +11,14 @@
 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 # License for the specific language governing permissions and limitations
 # under the License.
+from mock import Mock
+from mock import patch
 import uuid
 
-from mock import Mock, patch
-
 from trove.backup import models as backup_models
 from trove.common import cfg
 from trove.common import clients
 from trove.common import exception
-from trove.common.instance import ServiceStatuses
 from trove.common import neutron
 from trove.datastore import models as datastore_models
 from trove.instance import models
@ -29,6 +28,7 @@ from trove.instance.models import Instance
 from trove.instance.models import instance_encryption_key_cache
 from trove.instance.models import InstanceServiceStatus
 from trove.instance.models import SimpleInstance
+from trove.instance.service_status import ServiceStatuses
 from trove.instance.tasks import InstanceTasks
 from trove.taskmanager import api as task_api
 from trove.tests.fakes import nova
@ -13,15 +13,16 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 #
-from trove.common.instance import ServiceStatuses
+import uuid
 
 from trove.datastore import models
 from trove.instance.models import InstanceServiceStatus
 from trove.instance.models import InstanceStatus
 from trove.instance.models import SimpleInstance
+from trove.instance.service_status import ServiceStatuses
 from trove.instance.tasks import InstanceTasks
 from trove.tests.unittests import trove_testtools
 from trove.tests.unittests.util import util
-import uuid
 
 
 class FakeInstanceTask(object):
@ -13,26 +13,32 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 #
+from testtools.matchers import Equals
+from testtools.matchers import Is
+from testtools.matchers import Not
 import uuid
 
-from mock import MagicMock, patch, ANY
+from mock import ANY
+from mock import MagicMock
+from mock import patch
 from novaclient.client import Client
-from novaclient.v2.flavors import FlavorManager, Flavor
-from novaclient.v2.servers import Server, ServerManager
+from novaclient.v2.flavors import Flavor
+from novaclient.v2.flavors import FlavorManager
+from novaclient.v2.servers import Server
+from novaclient.v2.servers import ServerManager
 from oslo_config import cfg
-from testtools.matchers import Equals, Is, Not
 
+from trove import rpc
 from trove.backup.models import Backup
 from trove.common import clients
 from trove.common import exception
-from trove.common import instance as rd_instance
 from trove.datastore import models as datastore_models
 import trove.extensions.mgmt.instances.models as mgmtmodels
 from trove.guestagent.api import API
+from trove.instance import service_status as srvstatus
 from trove.instance.models import DBInstance
 from trove.instance.models import InstanceServiceStatus
 from trove.instance.tasks import InstanceTasks
-from trove import rpc
 from trove.tests.unittests import trove_testtools
 from trove.tests.unittests.util import util
@ -98,12 +104,12 @@ class MockMgmtInstanceTest(trove_testtools.TestCase):
             compute_instance_id='compute_id_1',
             server_id='server_id_1',
             tenant_id='tenant_id_1',
-            server_status=rd_instance.ServiceStatuses.
+            server_status=srvstatus.ServiceStatuses.
             BUILDING.api_status,
             deleted=False)
         instance.save()
         service_status = InstanceServiceStatus(
-            rd_instance.ServiceStatuses.RUNNING,
+            srvstatus.ServiceStatuses.RUNNING,
             id=str(uuid.uuid4()),
             instance_id=instance.id,
         )
@ -122,7 +128,7 @@ class TestNotificationTransformer(MockMgmtInstanceTest):
 
     @patch('trove.instance.models.LOG')
     def test_transformer(self, mock_logging):
-        status = rd_instance.ServiceStatuses.BUILDING.api_status
+        status = srvstatus.ServiceStatuses.BUILDING.api_status
         instance, service_status = self.build_db_instance(
             status, InstanceTasks.BUILDING)
         payloads = mgmtmodels.NotificationTransformer(
@ -184,7 +190,7 @@ class TestNovaNotificationTransformer(MockMgmtInstanceTest):
                          Equals('unknown'))
 
     def test_transformer(self):
-        status = rd_instance.ServiceStatuses.BUILDING.api_status
+        status = srvstatus.ServiceStatuses.BUILDING.api_status
         instance, service_status = self.build_db_instance(
             status, InstanceTasks.BUILDING)
 
@ -223,7 +229,7 @@ class TestNovaNotificationTransformer(MockMgmtInstanceTest):
 
     @patch('trove.extensions.mgmt.instances.models.LOG')
     def test_transformer_invalid_datastore_manager(self, mock_logging):
-        status = rd_instance.ServiceStatuses.BUILDING.api_status
+        status = srvstatus.ServiceStatuses.BUILDING.api_status
         instance, service_status = self.build_db_instance(
             status, InstanceTasks.BUILDING)
         version = datastore_models.DBDatastoreVersion.get_by(
@ -268,9 +274,9 @@ class TestNovaNotificationTransformer(MockMgmtInstanceTest):
         self.addCleanup(self.do_cleanup, instance, service_status)
 
     def test_transformer_shutdown_instance(self):
-        status = rd_instance.ServiceStatuses.SHUTDOWN.api_status
+        status = srvstatus.ServiceStatuses.SHUTDOWN.api_status
         instance, service_status = self.build_db_instance(status)
|
||||||
service_status.set_status(rd_instance.ServiceStatuses.SHUTDOWN)
|
service_status.set_status(srvstatus.ServiceStatuses.SHUTDOWN)
|
||||||
server = MagicMock(spec=Server)
|
server = MagicMock(spec=Server)
|
||||||
server.user_id = 'test_user_id'
|
server.user_id = 'test_user_id'
|
||||||
|
|
||||||
@ -296,9 +302,9 @@ class TestNovaNotificationTransformer(MockMgmtInstanceTest):
|
|||||||
self.addCleanup(self.do_cleanup, instance, service_status)
|
self.addCleanup(self.do_cleanup, instance, service_status)
|
||||||
|
|
||||||
def test_transformer_no_nova_instance(self):
|
def test_transformer_no_nova_instance(self):
|
||||||
status = rd_instance.ServiceStatuses.SHUTDOWN.api_status
|
status = srvstatus.ServiceStatuses.SHUTDOWN.api_status
|
||||||
instance, service_status = self.build_db_instance(status)
|
instance, service_status = self.build_db_instance(status)
|
||||||
service_status.set_status(rd_instance.ServiceStatuses.SHUTDOWN)
|
service_status.set_status(srvstatus.ServiceStatuses.SHUTDOWN)
|
||||||
mgmt_instance = mgmtmodels.SimpleMgmtInstance(self.context,
|
mgmt_instance = mgmtmodels.SimpleMgmtInstance(self.context,
|
||||||
instance,
|
instance,
|
||||||
None,
|
None,
|
||||||
@ -321,7 +327,7 @@ class TestNovaNotificationTransformer(MockMgmtInstanceTest):
|
|||||||
self.addCleanup(self.do_cleanup, instance, service_status)
|
self.addCleanup(self.do_cleanup, instance, service_status)
|
||||||
|
|
||||||
def test_transformer_flavor_cache(self):
|
def test_transformer_flavor_cache(self):
|
||||||
status = rd_instance.ServiceStatuses.BUILDING.api_status
|
status = srvstatus.ServiceStatuses.BUILDING.api_status
|
||||||
instance, service_status = self.build_db_instance(
|
instance, service_status = self.build_db_instance(
|
||||||
status, InstanceTasks.BUILDING)
|
status, InstanceTasks.BUILDING)
|
||||||
|
|
||||||
@ -366,7 +372,7 @@ class TestMgmtInstanceTasks(MockMgmtInstanceTest):
|
|||||||
super(TestMgmtInstanceTasks, cls).setUpClass()
|
super(TestMgmtInstanceTasks, cls).setUpClass()
|
||||||
|
|
||||||
def test_public_exists_events(self):
|
def test_public_exists_events(self):
|
||||||
status = rd_instance.ServiceStatuses.BUILDING.api_status
|
status = srvstatus.ServiceStatuses.BUILDING.api_status
|
||||||
instance, service_status = self.build_db_instance(
|
instance, service_status = self.build_db_instance(
|
||||||
status, task_status=InstanceTasks.BUILDING)
|
status, task_status=InstanceTasks.BUILDING)
|
||||||
server = MagicMock(spec=Server)
|
server = MagicMock(spec=Server)
|
||||||
@ -443,7 +449,7 @@ class TestMgmtInstanceDeleted(MockMgmtInstanceTest):
|
|||||||
class TestMgmtInstancePing(MockMgmtInstanceTest):
|
class TestMgmtInstancePing(MockMgmtInstanceTest):
|
||||||
|
|
||||||
def test_rpc_ping(self):
|
def test_rpc_ping(self):
|
||||||
status = rd_instance.ServiceStatuses.RUNNING.api_status
|
status = srvstatus.ServiceStatuses.RUNNING.api_status
|
||||||
instance, service_status = self.build_db_instance(
|
instance, service_status = self.build_db_instance(
|
||||||
status, task_status=InstanceTasks.NONE)
|
status, task_status=InstanceTasks.NONE)
|
||||||
mgmt_instance = mgmtmodels.MgmtInstance(instance,
|
mgmt_instance = mgmtmodels.MgmtInstance(instance,
|
||||||
|
@@ -21,17 +21,16 @@ from mock import patch
 
 from trove.cluster.models import ClusterTasks as ClusterTaskStatus
 from trove.cluster.models import DBCluster
+from trove.common import utils
 from trove.common.strategies.cluster.experimental.mongodb.taskmanager import (
     MongoDbClusterTasks as ClusterTasks)
-from trove.common import utils
 from trove.datastore import models as datastore_models
 from trove.instance.models import BaseInstance
 from trove.instance.models import DBInstance
 from trove.instance.models import Instance
 from trove.instance.models import InstanceServiceStatus
 from trove.instance.models import InstanceTasks
-# from trove.taskmanager.models import BuiltInstanceTasks
-from trove.taskmanager.models import ServiceStatuses
+from trove.instance.service_status import ServiceStatuses
 from trove.tests.unittests import trove_testtools
 
 
@@ -29,7 +29,7 @@ from trove.instance.models import DBInstance
 from trove.instance.models import Instance
 from trove.instance.models import InstanceServiceStatus
 from trove.instance.models import InstanceTasks
-from trove.taskmanager.models import ServiceStatuses
+from trove.instance.service_status import ServiceStatuses
 from trove.tests.unittests import trove_testtools
 from trove.tests.unittests.util import util
 
@@ -16,29 +16,34 @@ from tempfile import NamedTemporaryFile
 from unittest import mock
 
 from cinderclient import exceptions as cinder_exceptions
-import cinderclient.v2.client as cinderclient
 from cinderclient.v2 import volumes as cinderclient_volumes
-from mock import Mock, MagicMock, patch, PropertyMock, call
+import cinderclient.v2.client as cinderclient
+from mock import call
+from mock import MagicMock
+from mock import Mock
+from mock import patch
+from mock import PropertyMock
 import neutronclient.v2_0.client as neutronclient
 from novaclient import exceptions as nova_exceptions
 import novaclient.v2.flavors
 import novaclient.v2.servers
 from oslo_config import cfg
 from swiftclient.client import ClientException
-from testtools.matchers import Equals, Is
+from testtools.matchers import Equals
+from testtools.matchers import Is
 
-import trove.backup.models
+from trove import rpc
 from trove.backup import models as backup_models
 from trove.backup import state
+import trove.backup.models
+from trove.common import timeutils
+from trove.common import utils
 import trove.common.context
 from trove.common.exception import GuestError
 from trove.common.exception import PollTimeOut
 from trove.common.exception import TroveError
-from trove.common.instance import ServiceStatuses
 from trove.common.notification import TroveInstanceModifyVolume
 import trove.common.template as template
-from trove.common import timeutils
-from trove.common import utils
 from trove.datastore import models as datastore_models
 import trove.db.models
 from trove.extensions.common import models as common_models
@@ -48,8 +53,8 @@ from trove.instance.models import BaseInstance
 from trove.instance.models import DBInstance
 from trove.instance.models import InstanceServiceStatus
 from trove.instance.models import InstanceStatus
+from trove.instance.service_status import ServiceStatuses
 from trove.instance.tasks import InstanceTasks
-from trove import rpc
 from trove.taskmanager import models as taskmanager_models
 from trove.tests.unittests import trove_testtools
 from trove.tests.unittests.util import util
@@ -962,27 +967,20 @@ class BuiltInstanceTasksTest(trove_testtools.TestCase):
         self.instance_task.demote_replication_master()
         self.instance_task._guest.demote_replication_master.assert_any_call()
 
-    @patch.multiple(taskmanager_models.BuiltInstanceTasks,
-                    get_injected_files=Mock(return_value="the-files"))
-    def test_upgrade(self, *args):
-        pre_rebuild_server = self.instance_task.server
-        dsv = Mock(image_id='foo_image')
-        mock_volume = Mock(attachments=[{'device': '/dev/mock_dev'}])
-        with patch.object(self.instance_task._volume_client.volumes, "get",
-                          Mock(return_value=mock_volume)):
-            mock_server = Mock(status='ACTIVE')
-            with patch.object(self.instance_task._nova_client.servers,
-                              'get', Mock(return_value=mock_server)):
-                with patch.multiple(self.instance_task._guest,
-                                    pre_upgrade=Mock(return_value={}),
-                                    post_upgrade=Mock()):
-                    self.instance_task.upgrade(dsv)
+    @patch('trove.taskmanager.models.BuiltInstanceTasks.set_service_status')
+    @patch('trove.taskmanager.models.BuiltInstanceTasks.is_service_healthy')
+    @patch('trove.taskmanager.models.BuiltInstanceTasks.reset_task_status')
+    def test_upgrade(self, mock_resetstatus, mock_check, mock_setstatus):
+        dsv = MagicMock()
+        attrs = {'name': 'new_version'}
+        dsv.configure_mock(**attrs)
+        mock_check.return_value = True
+        self.instance_task._guest.pre_upgrade.return_value = {}
 
-        self.instance_task._guest.pre_upgrade.assert_called_with()
-        pre_rebuild_server.rebuild.assert_called_with(
-            dsv.image_id, files="the-files")
-        self.instance_task._guest.post_upgrade.assert_called_with(
-            mock_volume.attachments[0])
+        self.instance_task.upgrade(dsv)
+
+        self.instance_task._guest.upgrade.assert_called_once_with(
+            {'datastore_version': 'new_version'})
 
     def test_fix_device_path(self):
         self.assertEqual("/dev/vdb", self.instance_task.
@@ -16,6 +16,7 @@ import datetime
 from mock import Mock
 from mock import patch
 
+from trove import rpc
 from trove.cluster.models import ClusterTasks as ClusterTaskStatus
 from trove.cluster.models import DBCluster
 import trove.common.context as context
@@ -32,8 +33,7 @@ from trove.instance.models import DBInstance
 from trove.instance.models import Instance
 from trove.instance.models import InstanceServiceStatus
 from trove.instance.models import InstanceTasks
-from trove import rpc
-from trove.taskmanager.models import ServiceStatuses
+from trove.instance.service_status import ServiceStatuses
 from trove.tests.unittests import trove_testtools
 
 