rally/tests/doc/test_task_samples.py

New task config and verification refactoring

*) Change task config format
   . Split "context" & "runner" stuff

*) Refactor Verification
   . Move validation to context.base, runner.base and scenario.base
   . Validate the whole config fully before starting any of the tasks
   . Optimize scenario args validation (create clients only once)
   . Optimize the order of validation:
     1) Validate the names of benchmarks
     2) Validate all static parameters, e.g. configuration of runner
        and context
     3) If everything is OK in all benchmarks, then start validation
        of scenario args
   . Store the validation result (exception) in task["verification_log"]
   . Remove verification logic from BenchmarkEngine.__exit__
   . Remove scenario args verification results from task["results"]

*) Fix & switch doc/samples/tasks to the new format
   . Switch to the new format
   . Add missing task configuration
   . Better formatting
   . json & yaml samples

*) Refactor unit tests
   . tests.rally.benchmark.test_engine
   . tests.rally.benchmark.context.base
   . tests.orchestrator.test_api.start_task covers the validation step
     as well as the new config format

*) Refactor orchestrator API start_task
   . Remove the benchmark engine context
   . Call verify explicitly
   . Do not raise any exception in case of a validation error
   . Catch any unexpected Exceptions in start_task and set the deployment
     to an inconsistent state

*) Refactor CLI
   . Properly handle the new behaviour of verification
   . Replace the table on task start with just a message
   . Add HINTs to the task detailed command

*) Add a unit test for checking doc samples

*) Improve benchmark engine logging

blueprint benchmark-new-task-config

Change-Id: I23d3f6b3439fdb44946a7c2491d5a9b3559dc671
2014-03-12 00:01:40 +04:00
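
For reference, a sample in the new format deserializes into a mapping whose
top-level keys are "<Scenario>.<method>" benchmark names, each holding a list
of runs in which the scenario "args" are kept separate from the "runner" and
"context" sections. A minimal sketch (the scenario name and argument values
here are illustrative, not copied from the samples):

    # Hypothetical sample; the shape follows the new format described above.
    task_config = {
        "NovaServers.boot_and_delete_server": [
            {
                "args": {"flavor_id": 1, "image_id": "<image-uuid>"},
                "runner": {"type": "constant", "times": 10, "concurrency": 2},
                "context": {"users": {"tenants": 2, "users_per_tenant": 3}}
            }
        ]
    }
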
# Copyright 2014: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import traceback

import mock
import yaml

from rally.benchmark import engine
from rally.benchmark.scenarios import base
from tests import test


class TaskSampleTestCase(test.TestCase):
    @mock.patch("rally.benchmark.engine.BenchmarkEngine"
                "._validate_config_semantic")
    def test_schema_is_valid(self, mock_semantic):
        # Semantic validation is mocked out, so only the syntactic
        # (schema) validation runs against the doc samples.
        samples_path = os.path.join(os.path.dirname(__file__), "..", "..",
                                    "doc", "samples", "tasks")
        scenarios = set()
        for dirname, dirnames, filenames in os.walk(samples_path):
            for filename in filenames:
                full_path = os.path.join(dirname, filename)
                with open(full_path) as task_file:
                    try:
                        # Both JSON and YAML samples parse here, since
                        # JSON is a subset of YAML.
                        task_config = yaml.safe_load(task_file.read())
                        eng = engine.BenchmarkEngine(task_config,
                                                     mock.MagicMock())
                        eng.validate()
                    except Exception:
                        print(traceback.format_exc())
                        self.fail("Invalid task config %s" % full_path)
                    else:
                        scenarios.update(task_config.keys())

        # TODO(boris-42): We should refactor the scenarios framework and add
        #                 "_" to all non-benchmark methods; then this test
        #                 will pass.
        missing = set(base.Scenario.list_benchmark_scenarios()) - scenarios
        self.assertEqual(missing, set(),
                         "These scenarios don't have samples: %s" % missing)