Add options to disable migration in host maintenance
This change enhances the Host Maintenance strategy by introducing two new
input parameters: `disable_live_migration` and `disable_cold_migration`.
These parameters allow cloud administrators to control whether live or cold
migration should be considered during host maintenance operations.

If `disable_live_migration` is set, active instances will be cold migrated
if `disable_cold_migration` is not set; otherwise active instances will be
stopped. If `disable_cold_migration` is set, inactive instances will not be
cold migrated. If both are set, only stop actions will be performed on
instances.

The strategy logic and action plan generation have been updated to reflect
these behaviors. A new "stop" action is introduced and registered, and the
weight planner is updated to handle the new action. Documentation for the
Host Maintenance strategy is updated to describe the new parameters and
their effects.

Test Plan:
- Unit tests for the HostMaintenance strategy with the new parameters
- Integration tests for action plan generation with the stop action

This implements the specification:
Spec: https://review.opendev.org/c/openstack/watcher-specs/+/943873

Change-Id: I201b8e5c52e1bc1a74f3886a0e301e3c0fa5d351
Signed-off-by: Quang Ngo <quang.ngo@canonical.com>
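For illustration only (this helper is not part of the patch), the per-instance
decision described above can be sketched as a small pure function; the name
planned_action and its arguments are hypothetical:

    def planned_action(instance_active, disable_live_migration,
                       disable_cold_migration):
        """Return the action the strategy is expected to emit for one instance."""
        if disable_live_migration and disable_cold_migration:
            # Only active instances receive a stop action; others are left alone.
            return "stop" if instance_active else None
        if instance_active:
            # Active instances are cold migrated only when live migration is disabled.
            return "migrate (cold)" if disable_live_migration else "migrate (live)"
        # Inactive instances are cold migrated unless cold migration is disabled.
        return None if disable_cold_migration else "migrate (cold)"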
@@ -52,15 +52,29 @@ Configuration

Strategy parameters are:

==================== ====== ================================ ==================
parameter            type   description                      required/optional
==================== ====== ================================ ==================
``maintenance_node`` String The name of the compute node     Required
                            which needs maintenance.
``backup_node``      String The name of the compute node     Optional
                            which will backup the
                            maintenance node.
==================== ====== ================================ ==================
========================== ======== ========================== ==========
parameter                  type     description                required
========================== ======== ========================== ==========
``maintenance_node``       String   The name of the            Required
                                    compute node
                                    which needs maintenance.
``backup_node``            String   The name of the compute    Optional
                                    node which will backup
                                    the maintenance node.
``disable_live_migration`` Boolean  False: Active instances    Optional
                                    will be live migrated.
                                    True: Active instances
                                    will be cold migrated
                                    if cold migration is
                                    not disabled. Otherwise,
                                    they will be stopped.
                                    False by default.
``disable_cold_migration`` Boolean  False: Inactive instances  Optional
                                    will be cold migrated.
                                    True: Inactive instances
                                    will not be cold migrated.
                                    False by default.
========================== ======== ========================== ==========

Efficacy Indicator
------------------
@@ -97,6 +111,18 @@ to compute02 host.
      -p maintenance_node=compute01 \
      -p backup_node=compute02

Run an audit using the Host Maintenance strategy with migration disabled.
This will only stop active instances on compute01, which is useful for
maintenance scenarios where operators do not want to migrate workloads to
other hosts.

.. code-block:: shell

    $ openstack optimize audit create \
      -g cluster_maintaining -s host_maintenance \
      -p maintenance_node=compute01 \
      -p disable_live_migration=True \
      -p disable_cold_migration=True

Note that after executing this strategy, the *maintenance_node* will be
marked as disabled, with the reason set to ``watcher_maintaining``.
To enable the node again:
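The exact follow-up command falls outside this hunk; a typical way to
re-enable the service (assuming the compute service runs under the standard
``nova-compute`` binary name) is:

.. code-block:: shell

    $ openstack compute service set --enable compute01 nova-compute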
@@ -0,0 +1,17 @@
---
features:
  - |
    The Host Maintenance strategy now supports two new input parameters:
    ``disable_live_migration`` and ``disable_cold_migration``. These
    parameters allow cloud administrators to control whether live, cold, or
    no migration should be considered during host maintenance operations.

    * If ``disable_live_migration`` is set, active instances will be cold
      migrated if ``disable_cold_migration`` is not set, otherwise active
      instances will be stopped.
    * If ``disable_cold_migration`` is set, inactive instances will not be
      cold migrated.
    * If both are set, only stop actions will be applied on active instances.

    A new ``stop`` action has been introduced and registered to support
    scenarios where migration is disabled.
@@ -96,6 +96,7 @@ watcher_actions =
    resize = watcher.applier.actions.resize:Resize
    change_node_power_state = watcher.applier.actions.change_node_power_state:ChangeNodePowerState
    volume_migrate = watcher.applier.actions.volume_migration:VolumeMigrate
    stop = watcher.applier.actions.stop:Stop

watcher_workflow_engines =
    taskflow = watcher.applier.workflow_engine.default:DefaultWorkFlowEngine

@@ -113,4 +114,4 @@ watcher_cluster_data_model_collectors =

[codespell]
skip = *.po,*.js,*.css,*.html,*.svg,HACKING.py,*hacking*,*build*,*_static*,doc/dictionary.txt,*.pyc,*.inv,*.gz,*.jpg,*.png,*.vsd,*.graffle,*.json
count =
quiet-level = 4
watcher/applier/actions/stop.py (new file, 169 lines)
@@ -0,0 +1,169 @@
# -*- encoding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from oslo_log import log

from watcher.applier.actions import base
from watcher.common import nova_helper


LOG = log.getLogger(__name__)


class Stop(base.BaseAction):
    """Stops a server instance

    This action will allow you to stop a server instance on a compute host.

    The action schema is::

        schema = Schema({
            'resource_id': str,  # should be a UUID
        })

    The `resource_id` is the UUID of the server instance to stop.
    The action will check if the instance exists, verify its current state,
    and then proceed to stop it if it is in a state that allows stopping.
    """

    @property
    def schema(self):
        return {
            'type': 'object',
            'properties': {
                'resource_id': {
                    'type': 'string',
                    "minlength": 1,
                    "pattern": ("^([a-fA-F0-9]){8}-([a-fA-F0-9]){4}-"
                                "([a-fA-F0-9]){4}-([a-fA-F0-9]){4}-"
                                "([a-fA-F0-9]){12}$")
                },
            },
            'required': ['resource_id'],
            'additionalProperties': False,
        }

    @property
    def instance_uuid(self):
        return self.resource_id

    def stop(self):
        nova = nova_helper.NovaHelper(osc=self.osc)
        LOG.debug("Stopping instance %s", self.instance_uuid)

        try:
            result = nova.stop_instance(instance_id=self.instance_uuid)
        except nova_helper.nvexceptions.ClientException as e:
            LOG.debug("Nova client exception occurred while stopping "
                      "instance %(instance)s. Exception: %(exception)s",
                      {'instance': self.instance_uuid, 'exception': e})
            return False
        except Exception as e:
            LOG.debug("An unexpected error occurred while stopping "
                      "instance %s: %s", self.instance_uuid, str(e))
            return False

        if result:
            LOG.debug(
                "Successfully stopped instance %(uuid)s",
                {'uuid': self.instance_uuid}
            )
            return True
        else:
            # Check if failure was due to instance not found (idempotent)
            instance = nova.find_instance(self.instance_uuid)
            if not instance:
                LOG.info(
                    "Instance %(uuid)s not found, "
                    "considering stop operation successful",
                    {'uuid': self.instance_uuid}
                )
                return True
            else:
                LOG.error(
                    "Failed to stop instance %(uuid)s",
                    {'uuid': self.instance_uuid}
                )
                return False

    def execute(self):
        return self.stop()

    def _revert_stop(self):
        """Revert the stop action by trying to start the instance"""
        nova = nova_helper.NovaHelper(osc=self.osc)
        LOG.debug("Starting instance %s", self.instance_uuid)

        try:
            result = nova.start_instance(instance_id=self.instance_uuid)
            if result:
                LOG.debug(
                    "Successfully reverted stop action and started instance "
                    "%(uuid)s",
                    {'uuid': self.instance_uuid}
                )
                return result
            else:
                LOG.info(
                    "Failed to start instance %(uuid)s during revert. "
                    "This may be normal for instances with special configs.",
                    {'uuid': self.instance_uuid}
                )
        except Exception as exc:
            LOG.info(
                "Could not start instance %(uuid)s during revert: %(error)s. "
                "This may be normal for instances with special configs.",
                {'uuid': self.instance_uuid, 'error': str(exc)}
            )
        return False

    def revert(self):
        LOG.debug("Reverting stop action for instance %s", self.instance_uuid)
        return self._revert_stop()

    def abort(self):
        """Abort the stop action - not applicable for stop operations"""
        LOG.info("Abort operation is not applicable for stop action on "
                 " instance %s", self.instance_uuid)
        return False

    def pre_condition(self):
        # Check for instance existence and its state
        nova = nova_helper.NovaHelper(osc=self.osc)
        try:
            instance = nova.find_instance(self.instance_uuid)
            if not instance:
                LOG.debug(
                    "Instance %(uuid)s not found during pre-condition check. "
                    "Considering this acceptable for stop operation.",
                    {'uuid': self.instance_uuid}
                )
                return

            # Log instance current state
            current_state = instance.status
            LOG.debug("Instance %s pre-condition check: state=%s",
                      self.instance_uuid, current_state)

        except Exception as exc:
            LOG.exception("Pre-condition check failed for instance %s: %s",
                          self.instance_uuid, str(exc))
            raise

    def post_condition(self):
        pass

    def get_description(self):
        """Description of the action"""
        return "Stop a VM instance"
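For reference, a rough usage sketch of the new action, mirroring the unit
test setup further below; the mock stands in for a configured clients object
and the UUID is only an example:

    from unittest import mock

    from watcher.applier.actions import base as baction
    from watcher.applier.actions import stop

    # Build the action the same way the unit tests do; in the applier the
    # config argument comes from the workflow engine rather than a mock.
    action = stop.Stop(mock.Mock())
    action.input_parameters = {
        baction.BaseAction.RESOURCE_ID: "45a37aeb-95ab-4ddb-a305-7d9f62c2f5ba",
    }
    action.validate_parameters()  # jsonschema validation of the UUID pattern
    action.execute()              # stops the instance through NovaHelper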
@@ -556,6 +556,30 @@ class NovaHelper(object):
        else:
            return False

    def start_instance(self, instance_id):
        """This method starts a given instance.

        :param instance_id: the unique id of the instance to start.
        """
        LOG.debug("Trying to start instance %s ...", instance_id)

        instance = self.find_instance(instance_id)

        if not instance:
            LOG.debug("Instance not found: %s", instance_id)
            return False
        elif getattr(instance, 'OS-EXT-STS:vm_state') == "active":
            LOG.debug("Instance has already been started: %s", instance_id)
            return True
        else:
            self.nova.servers.start(instance_id)

            if self.wait_for_instance_state(instance, "active", 8, 10):
                LOG.debug("Instance %s started.", instance_id)
                return True
            else:
                return False

    def wait_for_instance_state(self, server, state, retry, sleep):
        """Waits for server to be in a specific state
@@ -50,6 +50,7 @@ class WeightPlanner(base.BasePlanner):
        'volume_migrate': 60,
        'change_nova_service_state': 50,
        'sleep': 40,
        'stop': 35,
        'migrate': 30,
        'resize': 20,
        'turn_host_to_acpi_s3_state': 10,
@@ -59,6 +60,7 @@ class WeightPlanner(base.BasePlanner):
    parallelization = {
        'turn_host_to_acpi_s3_state': 2,
        'resize': 2,
        'stop': 2,
        'migrate': 2,
        'sleep': 1,
        'change_nova_service_state': 1,
@@ -58,6 +58,7 @@ class HostMaintenance(base.HostMaintenanceBaseStrategy):

    INSTANCE_MIGRATION = "migrate"
    CHANGE_NOVA_SERVICE_STATE = "change_nova_service_state"
    INSTANCE_STOP = "stop"

    def __init__(self, config, osc=None):
        super(HostMaintenance, self).__init__(config, osc)
@@ -88,6 +89,21 @@ class HostMaintenance(base.HostMaintenanceBaseStrategy):
                                   "will backup the maintenance node.",
                    "type": "string",
                },
                "disable_live_migration": {
                    "description": "Disable live migration in maintenance. "
                                   "If True, active instances will be cold "
                                   "migrated if `disable_cold_migration` is "
                                   "not set, otherwise they will be stopped.",
                    "type": "boolean",
                    "default": False,
                },
                "disable_cold_migration": {
                    "description": "Disable cold migration in maintenance. "
                                   "If True, non-active instances will not be "
                                   "cold migrated.",
                    "type": "boolean",
                    "default": False,
                },
            },
            "required": ["maintenance_node"],
        }
@@ -169,8 +185,17 @@ class HostMaintenance(base.HostMaintenanceBaseStrategy):
        if node_status_str != element.ServiceState.ENABLED.value:
            self.add_action_enable_compute_node(node)

    def instance_migration(self, instance, src_node, des_node=None):
        """Add an action for instance migration into the solution.
    def add_action_stop_instance(self, instance):
        """Add an action for instance stop into the solution."""
        self.solution.add_action(
            action_type=self.INSTANCE_STOP,
            resource_id=instance.uuid)

    def instance_handle(self, instance, src_node, des_node=None):
        """Add an action for instance handling into the solution.

        Depending on the configuration and instance state, this may stop the
        instance, live/cold migrate it, or do nothing.

        :param instance: instance object
        :param src_node: node object
@@ -179,9 +204,32 @@ class HostMaintenance(base.HostMaintenanceBaseStrategy):
        :return: None
        """
        instance_state_str = self.get_instance_state_str(instance)
        disable_live_migration = self.input_parameters.get(
            'disable_live_migration', False)
        disable_cold_migration = self.input_parameters.get(
            'disable_cold_migration', False)

        # Case 1: Both migrations disabled -> only stop active instance
        if disable_live_migration and disable_cold_migration:
            if instance_state_str == element.InstanceState.ACTIVE.value:
                self.add_action_stop_instance(instance)
            return

        # Case 2: Handle instance based on state and migration options
        if instance_state_str == element.InstanceState.ACTIVE.value:
            migration_type = 'live'
            # For active instance
            if disable_live_migration:
                # Cold migrate active instance
                migration_type = 'cold'
            else:
                # Live migrate active instance when live migration is allowed
                migration_type = 'live'
        else:
            # For non-active instance
            if disable_cold_migration:
                # Non-active instance, cold migration disabled, do nothing
                return
            # Cold migrate non-active instance
            migration_type = 'cold'

        params = {'migration_type': migration_type,
@@ -202,7 +250,7 @@ class HostMaintenance(base.HostMaintenanceBaseStrategy):
        """
        instances = self.compute_model.get_node_instances(source_node)
        for instance in instances:
            self.instance_migration(instance, source_node, destination_node)
            self.instance_handle(instance, source_node, destination_node)

    def safe_maintain(self, maintenance_node, backup_node=None):
        """safe maintain one compute node
@@ -237,7 +285,7 @@ class HostMaintenance(base.HostMaintenanceBaseStrategy):
        self.add_action_maintain_compute_node(maintenance_node)
        instances = self.compute_model.get_node_instances(maintenance_node)
        for instance in instances:
            self.instance_migration(instance, maintenance_node)
            self.instance_handle(instance, maintenance_node)

    def pre_execute(self):
        self._pre_execute()
watcher/tests/applier/actions/test_stop.py (new file, 198 lines)
@@ -0,0 +1,198 @@
# -*- encoding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from unittest import mock

import fixtures
import jsonschema

from watcher.applier.actions import base as baction
from watcher.applier.actions import stop
from watcher.common import exception
from watcher.tests import base


class TestStop(base.TestCase):

    INSTANCE_UUID = "45a37aeb-95ab-4ddb-a305-7d9f62c2f5ba"

    def setUp(self):
        super(TestStop, self).setUp()

        self.m_helper = self.useFixture(
            fixtures.MockPatch(
                "watcher.common.nova_helper.NovaHelper",
                autospec=False)).mock.return_value

        self.input_parameters = {
            baction.BaseAction.RESOURCE_ID: self.INSTANCE_UUID,
        }
        self.action = stop.Stop(mock.Mock())
        self.action.input_parameters = self.input_parameters

    def test_parameters(self):
        parameters = {baction.BaseAction.RESOURCE_ID: self.INSTANCE_UUID}
        self.action.input_parameters = parameters
        self.assertTrue(self.action.validate_parameters())

    def test_parameters_exception_empty_resource_id(self):
        parameters = {baction.BaseAction.RESOURCE_ID: None}
        self.action.input_parameters = parameters
        self.assertRaises(jsonschema.ValidationError,
                          self.action.validate_parameters)

    def test_parameters_exception_invalid_uuid_format(self):
        parameters = {baction.BaseAction.RESOURCE_ID: "invalid-uuid"}
        self.action.input_parameters = parameters
        self.assertRaises(jsonschema.ValidationError,
                          self.action.validate_parameters)

    def test_parameters_exception_missing_resource_id(self):
        parameters = {}
        self.action.input_parameters = parameters
        self.assertRaises(jsonschema.ValidationError,
                          self.action.validate_parameters)

    def test_instance_uuid_property(self):
        self.assertEqual(self.INSTANCE_UUID, self.action.instance_uuid)

    def test_pre_condition_instance_not_found(self):
        self.m_helper.find_instance.return_value = None

        result = self.action.pre_condition()

        # Instance not found can be considered acceptable (idempotent)
        self.assertIsNone(result)
        self.m_helper.find_instance.assert_called_once_with(self.INSTANCE_UUID)

    def test_pre_condition_instance_already_stopped(self):
        instance = mock.Mock()
        instance.status = 'stopped'
        self.m_helper.find_instance.return_value = instance

        result = self.action.pre_condition()

        # All valid states should return None (implicit success)
        self.assertIsNone(result)
        self.m_helper.find_instance.assert_called_once_with(self.INSTANCE_UUID)

    def test_pre_condition_instance_active(self):
        instance = mock.Mock()
        instance.status = 'active'
        self.m_helper.find_instance.return_value = instance

        result = self.action.pre_condition()

        # pre_condition returns None for active instances (implicit success)
        self.assertIsNone(result)
        self.m_helper.find_instance.assert_called_once_with(self.INSTANCE_UUID)

    def test_pre_condition_nova_exception(self):
        self.m_helper.find_instance.side_effect = Exception("Nova error")

        self.assertRaises(Exception, self.action.pre_condition)
        self.m_helper.find_instance.assert_called_once_with(self.INSTANCE_UUID)

    def test_execute_success(self):
        self.m_helper.stop_instance.return_value = True

        result = self.action.execute()

        self.assertTrue(result)
        self.m_helper.stop_instance.assert_called_once_with(
            instance_id=self.INSTANCE_UUID)

    def test_execute_stop_failure_instance_exists(self):
        # Instance exists but stop operation fails
        instance = mock.Mock()
        self.m_helper.find_instance.return_value = instance
        self.m_helper.stop_instance.return_value = False

        result = self.action.execute()

        # Should return False when stop fails and instance still exists
        self.assertFalse(result)
        self.m_helper.stop_instance.assert_called_once_with(
            instance_id=self.INSTANCE_UUID)
        # Should check instance existence after stop failure
        self.m_helper.find_instance.assert_called_once_with(self.INSTANCE_UUID)

    def test_execute_stop_failure_instance_not_found(self):
        # Stop operation fails but instance doesn't exist (idempotent)
        self.m_helper.find_instance.return_value = None
        self.m_helper.stop_instance.return_value = False

        result = self.action.execute()

        # Return True when stop fails but instance doesn't exist (idempotent)
        self.assertTrue(result)
        self.m_helper.stop_instance.assert_called_once_with(
            instance_id=self.INSTANCE_UUID)
        # Should check instance existence after stop failure
        self.m_helper.find_instance.assert_called_once_with(self.INSTANCE_UUID)

    def test_execute_nova_exception(self):
        self.m_helper.stop_instance.side_effect = Exception("Stop failed")

        result = self.action.execute()

        # Execute should return False when Nova API fails, not raise exception
        self.assertFalse(result)
        self.m_helper.stop_instance.assert_called_once_with(
            instance_id=self.INSTANCE_UUID)

    def test_revert_success(self):
        self.m_helper.start_instance.return_value = True

        result = self.action.revert()

        self.assertTrue(result)
        # revert method doesn't call find_instance - it directly tries to start
        self.m_helper.start_instance.assert_called_once_with(
            instance_id=self.INSTANCE_UUID)

    def test_revert_instance_not_found(self):
        # The revert method doesn't check for instance existence,
        # it just tries to start and may fail gracefully
        self.m_helper.start_instance.side_effect = exception.InstanceNotFound(
            name=self.INSTANCE_UUID)

        result = self.action.revert()

        # Should return False when start fails due to instance not found
        self.assertFalse(result)
        self.m_helper.start_instance.assert_called_once_with(
            instance_id=self.INSTANCE_UUID)

    def test_revert_start_failure(self):
        self.m_helper.start_instance.return_value = False

        result = self.action.revert()

        self.assertFalse(result)
        self.m_helper.start_instance.assert_called_once_with(
            instance_id=self.INSTANCE_UUID)

    def test_revert_nova_exception(self):
        self.m_helper.start_instance.side_effect = Exception("Start failed")

        result = self.action.revert()

        # Should return False when start fails with exception
        self.assertFalse(result)

    def test_get_description(self):
        expected = "Stop a VM instance"
        self.assertEqual(expected, self.action.get_description())
@@ -235,6 +235,52 @@ class TestNovaHelper(base.TestCase):
        result = nova_util.stop_instance(instance_id)
        self.assertFalse(result)

    @mock.patch.object(time, 'sleep', mock.Mock())
    def test_start_instance(self, mock_glance, mock_cinder, mock_neutron,
                            mock_nova):
        nova_util = nova_helper.NovaHelper()
        instance_id = utils.generate_uuid()
        server = self.fake_server(instance_id)
        setattr(server, 'OS-EXT-STS:vm_state', 'active')
        self.fake_nova_find_list(
            nova_util,
            fake_find=server,
            fake_list=server)

        result = nova_util.start_instance(instance_id)
        self.assertTrue(result)

        setattr(server, 'OS-EXT-STS:vm_state', 'stopped')
        result = nova_util.start_instance(instance_id)
        self.assertFalse(result)

        self.fake_nova_find_list(nova_util, fake_find=server, fake_list=None)

        result = nova_util.start_instance(instance_id)
        self.assertFalse(result)

        # verify that the method will return True when the state of instance
        # is in the expected state.
        setattr(server, 'OS-EXT-STS:vm_state', 'stopped')
        with mock.patch.object(
            nova_util,
            'wait_for_instance_state',
            return_value=True
        ) as mock_instance_state:
            result = nova_util.start_instance(instance_id)
            self.assertTrue(result)
            mock_instance_state.assert_called_once_with(
                mock.ANY,
                "active",
                8,
                10)

        # verify that the method start_instance will return False when the
        # server is not available.
        nova_util.nova.servers.get.return_value = None
        result = nova_util.start_instance(instance_id)
        self.assertFalse(result)

    @mock.patch.object(time, 'sleep', mock.Mock())
    def test_delete_instance(self, mock_glance, mock_cinder, mock_neutron,
                             mock_nova):
@@ -102,13 +102,13 @@ class TestHostMaintenance(TestBaseStrategy):
                                  'resource_name': 'hostname_0'}}]
        self.assertEqual(expected, self.strategy.solution.actions)

    def test_instance_migration(self):
    def test_instance_handle(self):
        model = self.fake_c_cluster.generate_scenario_1()
        self.m_c_model.return_value = model
        node_0 = model.get_node_by_uuid('Node_0')
        node_1 = model.get_node_by_uuid('Node_1')
        instance_0 = model.get_instance_by_uuid("INSTANCE_0")
        self.strategy.instance_migration(instance_0, node_0, node_1)
        self.strategy.instance_handle(instance_0, node_0, node_1)
        self.assertEqual(1, len(self.strategy.solution.actions))
        expected = [{'action_type': 'migrate',
                     'input_parameters': {'destination_node': node_1.hostname,
@@ -119,12 +119,12 @@ class TestHostMaintenance(TestBaseStrategy):
                         }}]
        self.assertEqual(expected, self.strategy.solution.actions)

    def test_instance_migration_without_dest_node(self):
    def test_instance_handle_without_dest_node(self):
        model = self.fake_c_cluster.generate_scenario_1()
        self.m_c_model.return_value = model
        node_0 = model.get_node_by_uuid('Node_0')
        instance_0 = model.get_instance_by_uuid("INSTANCE_0")
        self.strategy.instance_migration(instance_0, node_0)
        self.strategy.instance_handle(instance_0, node_0)
        self.assertEqual(1, len(self.strategy.solution.actions))
        expected = [{'action_type': 'migrate',
                     'input_parameters': {'source_node': node_0.uuid,
@@ -225,3 +225,437 @@ class TestHostMaintenance(TestBaseStrategy):

        result = self.strategy.post_execute()
        self.assertIsNone(result)

    def test_schema_default_values(self):
        """Test that disable_* parameters default to False when not provided"""
        parameters = {"maintenance_node": "hostname_0"}
        self.strategy.input_parameters = parameters

        # Parameters should default to False when not provided
        self.assertFalse(self.strategy.input_parameters.get(
            'disable_live_migration', False))
        self.assertFalse(self.strategy.input_parameters.get(
            'disable_cold_migration', False))

    def test_add_action_stop_instance(self):
        """Test add_action_stop_instance method"""
        model = self.fake_c_cluster.generate_scenario_1()
        self.m_c_model.return_value = model
        instance_0 = model.get_instance_by_uuid("INSTANCE_0")

        self.strategy.add_action_stop_instance(instance_0)

        self.assertEqual(1, len(self.strategy.solution.actions))
        expected = [{'action_type': 'stop', 'input_parameters': {
            'resource_id': instance_0.uuid}}]
        self.assertEqual(expected, self.strategy.solution.actions)

    def test_instance_handle_both_migrations_disabled_active_instance(self):
        """Test instance_handle with both migrations disabled on active"""
        model = self.fake_c_cluster.generate_scenario_1()
        self.m_c_model.return_value = model
        node_0 = model.get_node_by_uuid('Node_0')
        instance_0 = model.get_instance_by_uuid("INSTANCE_0")

        self.strategy.input_parameters = {
            'maintenance_node': 'hostname_0',
            'disable_live_migration': True,
            'disable_cold_migration': True
        }

        self.strategy.instance_handle(instance_0, node_0)

        self.assertEqual(1, len(self.strategy.solution.actions))
        expected = [{'action_type': 'stop', 'input_parameters': {
            'resource_id': instance_0.uuid}}]
        self.assertEqual(expected, self.strategy.solution.actions)

    def test_instance_handle_both_migrations_disabled_inactive_instance(self):
        """Test instance_handle with both migrations disabled on inactive"""
        model = self.fake_c_cluster.generate_scenario_1()
        self.m_c_model.return_value = model
        node_0 = model.get_node_by_uuid('Node_0')
        instance_1 = model.get_instance_by_uuid("INSTANCE_1")
        instance_1.state = "stopped"

        self.strategy.input_parameters = {
            'maintenance_node': 'hostname_0',
            'disable_live_migration': True,
            'disable_cold_migration': True
        }

        self.strategy.instance_handle(instance_1, node_0)
        self.assertEqual(0, len(self.strategy.solution.actions))

    def test_instance_handle_live_migration_disabled_active_instance(self):
        """Test instance_handle with live migration disabled on active"""
        model = self.fake_c_cluster.generate_scenario_1()
        self.m_c_model.return_value = model
        node_0 = model.get_node_by_uuid('Node_0')
        node_1 = model.get_node_by_uuid('Node_1')
        instance_0 = model.get_instance_by_uuid("INSTANCE_0")

        self.strategy.input_parameters = {
            'maintenance_node': 'hostname_0',
            'disable_live_migration': True,
        }

        self.strategy.instance_handle(instance_0, node_0, node_1)

        self.assertEqual(1, len(self.strategy.solution.actions))
        expected = [{'action_type': 'migrate',
                     'input_parameters': {
                         'destination_node': node_1.hostname,
                         'source_node': node_0.uuid,
                         'migration_type': 'cold',
                         'resource_id': instance_0.uuid,
                         'resource_name': instance_0.name
                     }}]
        self.assertEqual(expected, self.strategy.solution.actions)

    def test_instance_handle_live_migration_disabled_inactive_instance(self):
        """Test instance_handle with live migration disabled on inactive"""
        model = self.fake_c_cluster.generate_scenario_1()
        self.m_c_model.return_value = model
        node_0 = model.get_node_by_uuid('Node_0')
        node_1 = model.get_node_by_uuid('Node_1')
        instance_1 = model.get_instance_by_uuid("INSTANCE_1")
        instance_1.state = 'stopped'

        self.strategy.input_parameters = {
            'maintenance_node': 'hostname_0',
            'disable_live_migration': True,
        }

        self.strategy.instance_handle(instance_1, node_0, node_1)

        self.assertEqual(1, len(self.strategy.solution.actions))
        expected = [{'action_type': 'migrate',
                     'input_parameters': {
                         'destination_node': node_1.hostname,
                         'source_node': node_0.uuid,
                         'migration_type': 'cold',
                         'resource_id': instance_1.uuid,
                         'resource_name': instance_1.name
                     }}]
        self.assertEqual(expected, self.strategy.solution.actions)

    def test_instance_handle_cold_migration_disabled_active_instance(self):
        """Test instance_handle with cold migration disabled on active"""
        model = self.fake_c_cluster.generate_scenario_1()
        self.m_c_model.return_value = model
        node_0 = model.get_node_by_uuid('Node_0')
        node_1 = model.get_node_by_uuid('Node_1')
        instance_0 = model.get_instance_by_uuid("INSTANCE_0")

        self.strategy.input_parameters = {
            'maintenance_node': 'hostname_0',
            'disable_cold_migration': True
        }

        self.strategy.instance_handle(instance_0, node_0, node_1)

        self.assertEqual(1, len(self.strategy.solution.actions))
        expected = [{'action_type': 'migrate',
                     'input_parameters': {
                         'destination_node': node_1.hostname,
                         'source_node': node_0.uuid,
                         'migration_type': 'live',
                         'resource_id': instance_0.uuid,
                         'resource_name': instance_0.name
                     }}]
        self.assertEqual(expected, self.strategy.solution.actions)

    def test_instance_handle_cold_migration_disabled_inactive_instance(self):
        """Test instance_handle with cold migration disabled on inactive"""
        model = self.fake_c_cluster.generate_scenario_1()
        self.m_c_model.return_value = model
        node_0 = model.get_node_by_uuid('Node_0')
        node_1 = model.get_node_by_uuid('Node_1')
        instance_1 = model.get_instance_by_uuid("INSTANCE_1")
        instance_1.state = 'stopped'

        self.strategy.input_parameters = {
            'maintenance_node': 'hostname_0',
            'disable_cold_migration': True
        }

        self.strategy.instance_handle(instance_1, node_0, node_1)

        # No actions should be generated
        self.assertEqual(0, len(self.strategy.solution.actions))

    def test_instance_handle_no_migrations_disabled_active_instance(self):
        """Test instance_handle with no migrations disabled on active"""
        model = self.fake_c_cluster.generate_scenario_1()
        self.m_c_model.return_value = model
        node_0 = model.get_node_by_uuid('Node_0')
        node_1 = model.get_node_by_uuid('Node_1')
        instance_0 = model.get_instance_by_uuid("INSTANCE_0")

        self.strategy.input_parameters = {
            'maintenance_node': 'hostname_0',
        }

        self.strategy.instance_handle(instance_0, node_0, node_1)

        self.assertEqual(1, len(self.strategy.solution.actions))
        expected = [{'action_type': 'migrate',
                     'input_parameters': {
                         'destination_node': node_1.hostname,
                         'source_node': node_0.uuid,
                         'migration_type': 'live',
                         'resource_id': instance_0.uuid,
                         'resource_name': instance_0.name
                     }}]
        self.assertEqual(expected, self.strategy.solution.actions)

    def test_instance_handle_no_migrations_disabled_inactive_instance(self):
        """Test instance_handle with no migrations disabled on inactive"""
        model = self.fake_c_cluster.generate_scenario_1()
        self.m_c_model.return_value = model
        node_0 = model.get_node_by_uuid('Node_0')
        node_1 = model.get_node_by_uuid('Node_1')
        instance_1 = model.get_instance_by_uuid("INSTANCE_1")
        instance_1.state = 'stopped'

        self.strategy.input_parameters = {
            'maintenance_node': 'hostname_0',
        }

        self.strategy.instance_handle(instance_1, node_0, node_1)

        self.assertEqual(1, len(self.strategy.solution.actions))
        expected = [{'action_type': 'migrate',
                     'input_parameters': {
                         'destination_node': node_1.hostname,
                         'source_node': node_0.uuid,
                         'migration_type': 'cold',
                         'resource_id': instance_1.uuid,
                         'resource_name': instance_1.name
                     }}]
        self.assertEqual(expected, self.strategy.solution.actions)

    def test_host_migration_with_both_migrations_disabled(self):
        """Test host_migration with both migrations disabled"""
        model = self.fake_c_cluster.generate_scenario_1()
        self.m_c_model.return_value = model
        node_0 = model.get_node_by_uuid('Node_0')
        node_1 = model.get_node_by_uuid('Node_1')
        instance_0 = model.get_instance_by_uuid("INSTANCE_0")
        instance_1 = model.get_instance_by_uuid("INSTANCE_1")

        self.strategy.input_parameters = {
            'maintenance_node': 'hostname_0',
            'disable_live_migration': True,
            'disable_cold_migration': True
        }

        self.strategy.host_migration(node_0, node_1)

        # Should generate stop actions for all instances
        self.assertEqual(2, len(self.strategy.solution.actions))
        expected_actions = [
            {'action_type': 'stop', 'input_parameters': {
                'resource_id': instance_0.uuid}},
            {'action_type': 'stop', 'input_parameters': {
                'resource_id': instance_1.uuid}}
        ]
        for action in expected_actions:
            self.assertIn(action, self.strategy.solution.actions)

    def test_host_migration_with_live_migration_disabled(self):
        """Test host_migration with live migration disabled"""
        model = self.fake_c_cluster.generate_scenario_1()
        self.m_c_model.return_value = model
        node_0 = model.get_node_by_uuid('Node_0')
        node_1 = model.get_node_by_uuid('Node_1')
        instance_0 = model.get_instance_by_uuid("INSTANCE_0")
        instance_1 = model.get_instance_by_uuid("INSTANCE_1")

        self.strategy.input_parameters = {
            'maintenance_node': 'hostname_0',
            'disable_live_migration': True,
        }

        self.strategy.host_migration(node_0, node_1)

        # Should generate cold migrate actions for all instances
        self.assertEqual(2, len(self.strategy.solution.actions))
        expected_actions = [
            {'action_type': 'migrate',
             'input_parameters': {
                 'destination_node': node_1.hostname,
                 'source_node': node_0.uuid,
                 'migration_type': 'cold',
                 'resource_id': instance_0.uuid,
                 'resource_name': instance_0.name
             }},
            {'action_type': 'migrate',
             'input_parameters': {
                 'destination_node': node_1.hostname,
                 'source_node': node_0.uuid,
                 'migration_type': 'cold',
                 'resource_id': instance_1.uuid,
                 'resource_name': instance_1.name
             }}
        ]
        for action in expected_actions:
            self.assertIn(action, self.strategy.solution.actions)

    def test_safe_maintain_with_both_migrations_disabled(self):
        """Test safe_maintain with both migrations disabled"""
        model = self.fake_c_cluster.generate_scenario_1()
        self.m_c_model.return_value = model
        node_0 = model.get_node_by_uuid('Node_0')  # maintenance node
        node_1 = model.get_node_by_uuid('Node_1')  # backup node
        instance_0 = model.get_instance_by_uuid("INSTANCE_0")
        instance_1 = model.get_instance_by_uuid("INSTANCE_1")

        self.strategy.input_parameters = {
            'maintenance_node': 'hostname_0',
            'backup_node': 'hostname_1',
            'disable_live_migration': True,
            'disable_cold_migration': True
        }

        result = self.strategy.safe_maintain(node_0, node_1)

        self.assertTrue(result)
        # Should have: maintain node + stop actions for all instances
        # (backup node is already enabled in scenario_1, so no enable action)
        self.assertEqual(3, len(self.strategy.solution.actions))

        expected_actions = [
            {'action_type': 'change_nova_service_state',
             'input_parameters': {
                 'resource_id': node_0.uuid,
                 'resource_name': node_0.hostname,
                 'state': 'disabled',
                 'disabled_reason': 'watcher_maintaining'
             }},
            {'action_type': 'stop', 'input_parameters': {
                'resource_id': instance_0.uuid}},
            {'action_type': 'stop', 'input_parameters': {
                'resource_id': instance_1.uuid}}
        ]
        for action in expected_actions:
            self.assertIn(action, self.strategy.solution.actions)

    def test_try_maintain_with_both_migrations_disabled(self):
        """Test try_maintain with both migrations disabled"""
        model = self.fake_c_cluster.generate_scenario_1()
        self.m_c_model.return_value = model
        node_0 = model.get_node_by_uuid('Node_0')
        instance_0 = model.get_instance_by_uuid("INSTANCE_0")
        instance_1 = model.get_instance_by_uuid("INSTANCE_1")

        self.strategy.input_parameters = {
            'maintenance_node': 'hostname_0',
            'disable_live_migration': True,
            'disable_cold_migration': True
        }

        self.strategy.try_maintain(node_0)

        # Should have: maintain node + stop actions for all instances
        self.assertEqual(3, len(self.strategy.solution.actions))

        expected_actions = [
            {'action_type': 'change_nova_service_state',
             'input_parameters': {
                 'resource_id': node_0.uuid,
                 'resource_name': node_0.hostname,
                 'state': 'disabled',
                 'disabled_reason': 'watcher_maintaining'
             }},
            {'action_type': 'stop', 'input_parameters': {
                'resource_id': instance_0.uuid}},
            {'action_type': 'stop', 'input_parameters': {
                'resource_id': instance_1.uuid}}
        ]
        for action in expected_actions:
            self.assertIn(action, self.strategy.solution.actions)

    def test_strategy_with_both_migrations_disabled(self):
        model = self.fake_c_cluster.generate_scenario_1()
        self.m_c_model.return_value = model

        self.strategy.input_parameters = {
            'maintenance_node': 'hostname_0',
            'backup_node': 'hostname_1',
            'disable_live_migration': True,
            'disable_cold_migration': True
        }

        self.strategy.do_execute()

        # Should have: maintain node + stop all instances
        self.assertEqual(3, len(self.strategy.solution.actions))

        # Check that we have stop actions
        stop_actions = [action for action in self.strategy.solution.actions
                        if action['action_type'] == 'stop']
        self.assertEqual(2, len(stop_actions))

    def test_strategy_with_live_migration_disabled(self):
        model = self.fake_c_cluster.generate_scenario_1()
        self.m_c_model.return_value = model

        self.strategy.input_parameters = {
            'maintenance_node': 'hostname_0',
            'backup_node': 'hostname_1',
            'disable_live_migration': True,
        }

        self.strategy.do_execute()

        # Should have: maintain node + cold migrate all instances
        self.assertEqual(3, len(self.strategy.solution.actions))

        # Check that we have cold migrate actions
        cold_migrate_actions = [
            action for action in self.strategy.solution.actions
            if action['action_type'] == 'migrate' and
            action['input_parameters']['migration_type'] == 'cold'
        ]
        self.assertEqual(2, len(cold_migrate_actions))

    def test_backward_compatibility_without_new_parameters(self):
        """Test that existing behavior is preserved when new params not used"""
        model = self.fake_c_cluster.generate_scenario_1()
        self.m_c_model.return_value = model
        node_0 = model.get_node_by_uuid('Node_0')
        instance_0 = model.get_instance_by_uuid("INSTANCE_0")
        instance_1 = model.get_instance_by_uuid("INSTANCE_1")

        # Test without new parameters (should behave like original)
        self.strategy.input_parameters = {'maintenance_node': 'hostname_0'}
        self.strategy.do_execute()

        # Should have: maintain node + migrate all instances
        self.assertEqual(3, len(self.strategy.solution.actions))

        expected_actions = [
            {'action_type': 'change_nova_service_state',
             'input_parameters': {
                 'resource_id': node_0.uuid,
                 'resource_name': node_0.hostname,
                 'state': 'disabled',
                 'disabled_reason': 'watcher_maintaining'}},
            {'action_type': 'migrate',
             'input_parameters': {
                 'source_node': node_0.uuid,
                 'migration_type': 'live',
                 'resource_id': instance_0.uuid,
                 'resource_name': instance_0.name}},
            {'action_type': 'migrate',
             'input_parameters': {
                 'source_node': node_0.uuid,
                 'migration_type': 'live',
                 'resource_id': instance_1.uuid,
                 'resource_name': instance_1.name}}
        ]

        for action in expected_actions:
            self.assertIn(action, self.strategy.solution.actions)