Make code blocks in documentation prettier

This commit tags code blocks with the appropriate language so that
Sphinx can add syntax highlighting and make them look nicer.

Change-Id: Id8f178110236b1d97015162b148f3e9127251a3a
Chris St. Pierre 2015-09-24 09:16:19 -05:00
parent 7bb2904dec
commit aff628cb5f
17 changed files with 176 additions and 162 deletions


@ -37,14 +37,14 @@ How to contribute
3. Tell git your details:
.. code-block:: none
.. code-block:: bash
git config --global user.name "Firstname Lastname"
git config --global user.email "your_email@youremail.com"
4. Install git-review. This tool takes a lot of the pain out of remembering commands to push code up to Gerrit for review and to pull it back down to edit it. It is installed using:
.. code-block:: none
.. code-block:: bash
pip install git-review
@ -52,13 +52,13 @@ Several Linux distributions (notably Fedora 16 and Ubuntu 12.04) are also starti
5. Grab the Rally repository:
.. code-block:: none
.. code-block:: bash
git clone git@github.com:openstack/rally.git
6. Checkout a new branch to hack on:
.. code-block:: none
.. code-block:: bash
git checkout -b TOPIC-BRANCH
@ -66,7 +66,7 @@ Several Linux distributions (notably Fedora 16 and Ubuntu 12.04) are also starti
8. Run the test suite locally to make sure nothing broke, e.g. (this will run py26/py27/pep8 tests):
.. code-block:: none
.. code-block:: bash
tox
@ -76,7 +76,7 @@ If you extend Rally with new functionality, make sure you have also provided uni
9. Commit your work using:
.. code-block:: none
.. code-block:: bash
git commit -a
@ -85,7 +85,7 @@ Make sure you have supplied your commit with a neat commit message, containing a
10. Push the commit up for code review using:
.. code-block:: none
.. code-block:: bash
git review -R
@ -120,24 +120,32 @@ About Rally unit tests:
- `Tox <https://tox.readthedocs.org/en/latest/>`_ is used to run unit tests
To run unit tests locally::
To run unit tests locally:
.. code-block:: console
$ pip install tox
$ tox
To run py26, py27 or pep8 only::
To run py26, py27 or pep8 only:
.. code-block:: console
$ tox -e <name>
#NOTE: <name> is one of py26, py27 or pep8
To get test coverage::
To get test coverage:
.. code-block:: console
$ tox -e cover
#NOTE: Results will be in /cover/index.html
To generate docs::
To generate docs:
.. code-block:: console
$ tox -e docs
@ -151,7 +159,9 @@ Functional tests
The goal of `functional tests <https://en.wikipedia.org/wiki/Functional_testing>`_ is to check that everything works well together.
Functional tests use only the Rally API and check responses without touching internal parts.
To run functional tests locally::
To run functional tests locally:
.. code-block:: console
$ source openrc
$ rally deployment create --fromenv --name testing
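As a rough sketch of that idea (not code from the repository), a functional-style check can drive Rally strictly through its public CLI and assert on the responses; the command and assertion below are illustrative:

.. code-block:: python

   # Illustrative functional-style test: talks to Rally only via its CLI.
   import subprocess
   import unittest


   class TestDeploymentCLI(unittest.TestCase):

       def test_deployment_list_mentions_testing(self):
           # Assumes the "testing" deployment from the commands above exists.
           out = subprocess.check_output(["rally", "deployment", "list"])
           self.assertIn(b"testing", out)


   if __name__ == "__main__":
       unittest.main()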


@ -129,7 +129,7 @@ Finally, add *gate-rally-dsvm-myscenario* to *zuul/layout.yaml*:
It is also possible to arrange your input task files as templates based on jinja2. Say, you want to set the image names used throughout the *myscenario.yaml* task file as a variable parameter. Then, replace concrete image names in this file with a variable:
.. parsed-literal::
.. code-block:: yaml
...
@ -149,7 +149,7 @@ It is also possible to arrange your input task files as templates based on jinja
and create a file named *myscenario_args.yaml* that will define the parameter values:
.. parsed-literal::
.. code-block:: yaml
---
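Conceptually, the substitution works like rendering the template with the values from the args file, as in this standalone sketch (plain jinja2 and PyYAML, not Rally's internal code; the file names are the ones used above):

.. code-block:: python

   import jinja2
   import yaml

   # Load the parameter values, e.g. {"image_name": "..."}.
   with open("myscenario_args.yaml") as f:
       args = yaml.safe_load(f)

   # Render the templated task file with those values and parse the result.
   with open("myscenario.yaml") as f:
       task = yaml.safe_load(jinja2.Template(f.read()).render(**args))

   print(task)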


@ -24,7 +24,7 @@ Automated installation
The easiest way to install Rally is by executing its `installation script
<https://raw.githubusercontent.com/stackforge/rally/master/install_rally.sh>`_
.. code-block:: none
.. code-block:: bash
wget -q -O- https://raw.githubusercontent.com/openstack/rally/master/install_rally.sh | bash
# or using curl
@ -39,14 +39,14 @@ By default it will install Rally in a virtualenv in ``~/rally`` when
run as standard user, or install system wide when run as root. You can
install Rally in a venv by using the option ``--target``:
.. code-block:: none
.. code-block:: bash
./install_rally.sh --target /foo/bar
You can also install Rally system wide by running the script as root and
without the ``--target`` option:
.. code-block:: none
.. code-block:: bash
sudo ./install_rally.sh
@ -54,7 +54,7 @@ without ``--target`` option:
Run ``./install_rally.sh`` with the ``--help`` option to get a list of all
available options:
.. code-block:: node
.. code-block:: console
$ ./install_rally.sh --help
Usage: install_rally.sh [options]
@ -92,7 +92,7 @@ install the dependencies.
You also have to set up the **Rally database** after the installation is complete:
.. code-block:: none
.. code-block:: bash
rally-manage db recreate
@ -102,14 +102,14 @@ Rally with DevStack all-in-one installation
It is also possible to install Rally with DevStack. First, clone the corresponding repositories:
.. code-block:: none
.. code-block:: bash
git clone https://git.openstack.org/openstack-dev/devstack
git clone https://github.com/openstack/rally
Then, configure DevStack to run Rally:
.. code-block:: none
.. code-block:: bash
cd devstack
cp samples/local.conf local.conf
@ -117,7 +117,7 @@ Then, configure DevStack to run Rally:
Finally, run DevStack as usual:
.. code-block:: none
.. code-block:: bash
./stack.sh
@ -168,7 +168,9 @@ You may want to save the last command as an alias:
After executing ``dock_rally``, or ``docker run ...``, you will have
bash running inside the container with Rally installed. You may do
anything with Rally, but you need to create the database first::
anything with Rally, but you need to create the database first:
.. code-block:: console
user@box:~/rally$ dock_rally
rally@1cc98e0b5941:~$ rally-manage db recreate


@ -32,36 +32,38 @@ User's view
From the user's point of view, Rally launches different benchmark scenarios while performing a benchmark task. A **benchmark task** is essentially a set of benchmark scenarios run against some OpenStack deployment in a specific (and customizable) manner by the CLI command:
**rally task start --task=<task_config.json>**
.. code-block:: bash
rally task start --task=<task_config.json>
Accordingly, the user may specify the names and parameters of benchmark scenarios to be run in **benchmark task configuration files**. A typical configuration file would have the following contents:
.. parsed-literal::
.. code-block:: json
{
**"NovaServers.boot_server"**: [
"NovaServers.boot_server": [
{
**"args": {**
**"flavor_id": 42,**
**"image_id": "73257560-c59b-4275-a1ec-ab140e5b9979"**
**},**
"args": {
"flavor_id": 42,
"image_id": "73257560-c59b-4275-a1ec-ab140e5b9979"
},
"runner": {"times": 3},
"context": {...}
},
{
**"args": {**
**"flavor_id": 1,**
**"image_id": "3ba2b5f6-8d8d-4bbe-9ce5-4be01d912679"**
**},**
"args": {
"flavor_id": 1,
"image_id": "3ba2b5f6-8d8d-4bbe-9ce5-4be01d912679"
},
"runner": {"times": 3},
"context": {...}
}
],
**"CinderVolumes.create_volume"**: [
"CinderVolumes.create_volume": [
{
**"args": {**
**"size": 42**
**},**
"args": {
"size": 42
},
"runner": {"times": 3},
"context": {...}
}
@ -83,7 +85,7 @@ From the developer's perspective, a benchmark scenario is a method marked by a *
In the toy example below, we define a scenario class *MyScenario* with one benchmark scenario *MyScenario.scenario*. This benchmark scenario tests the performance of a sequence of 2 actions, implemented via private methods in the same class. Both methods are marked with the **@atomic_action_timer** decorator. This allows Rally to handle those actions in a special way and, after benchmarks complete, show runtime statistics not only for whole scenarios, but for separate actions as well.
::
.. code-block:: python
from rally.task.scenarios import base
from rally.task import utils
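# The toy example itself is cut off by this hunk; a minimal sketch of the class
# described above follows. Apart from ``atomic_action_timer``, which is named in
# the text, the decorator names and their location on the imported ``base``
# module are assumptions, not the verbatim upstream example.


class MyScenario(base.Scenario):
    """My scenario class with a single benchmark scenario."""

    @base.atomic_action_timer("action_1")
    def _action_1(self):
        """First of the two timed actions (e.g. an OpenStack API call)."""

    @base.atomic_action_timer("action_2")
    def _action_2(self):
        """Second timed action."""

    @base.scenario()
    def scenario(self):
        """Benchmark scenario that runs both actions in sequence."""
        self._action_1()
        self._action_2()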
@ -121,7 +123,7 @@ User's view
The user can specify the type of load they would like to put on the cloud through the **"runner"** section in the **task configuration file**:
.. parsed-literal::
.. code-block:: json
{
"NovaServers.boot_server": [
@ -130,11 +132,11 @@ The user can specify which type of load on the cloud he would like to have throu
"flavor_id": 42,
"image_id": "73257560-c59b-4275-a1ec-ab140e5b9979"
},
**"runner": {**
**"type": "constant",**
**"times": 15,**
**"concurrency": 2**
**},**
"runner": {
"type": "constant",
"times": 15,
"concurrency": 2
},
"context": {
"users": {
"tenants": 1,
@ -169,23 +171,23 @@ Developer's view
It is possible to extend Rally with new Scenario Runner types, if needed. Basically, each scenario runner should be implemented as a subclass of the base `ScenarioRunner <https://github.com/openstack/rally/blob/master/rally/benchmark/runner.py#L113>`_ class and located in the `rally.plugins.common.runners package <https://github.com/openstack/rally/tree/master/rally/plugins/common/runners>`_. The interface each scenario runner class should support is fairly easy:
.. parsed-literal::
.. code-block:: python
from rally.task import runner
from rally import consts
class MyScenarioRunner(runner.ScenarioRunner):
*"""My scenario runner."""*
"""My scenario runner."""
*# This string is what the user will have to specify in the task*
*# configuration file (in "runner": {"type": ...})*
# This string is what the user will have to specify in the task
# configuration file (in "runner": {"type": ...})
__execution_type__ = "my_scenario_runner"
*# CONFIG_SCHEMA is used to automatically validate the input*
*# config of the scenario runner, passed by the user in the task*
*# configuration file.*
# CONFIG_SCHEMA is used to automatically validate the input
# config of the scenario runner, passed by the user in the task
# configuration file.
CONFIG_SCHEMA = {
"type": "object",
@ -199,12 +201,12 @@ It is possible to extend Rally with new Scenario Runner types, if needed. Basica
}
def _run_scenario(self, cls, method_name, ctx, args):
*"""Run the scenario 'method_name' from scenario class 'cls'
"""Run the scenario 'method_name' from scenario class 'cls'
with arguments 'args', given a context 'ctx'.
This method should return the results dictionary wrapped in
a runner.ScenarioRunnerResult object (not plain JSON)
"""*
"""
results = ...
return runner.ScenarioRunnerResult(results)
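# The CONFIG_SCHEMA above is ordinary JSON Schema, which Rally applies to the
# "runner" section of the task file automatically. Purely as an illustration of
# what that validation means (the third-party ``jsonschema`` package below is an
# assumption; it is not used by the snippet itself):

import jsonschema

user_runner_config = {"type": "my_scenario_runner"}  # plus any runner-specific keys
jsonschema.validate(user_runner_config, MyScenarioRunner.CONFIG_SCHEMA)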
@ -228,7 +230,7 @@ From the user's prospective, contexts in Rally are manageable via the **task con
In the example below, the **"users" context** specifies that the *"NovaServers.boot_server"* scenario should be run from **1 tenant** having **3 users** in it. Bearing in mind that the default quota for the number of instances is 10 instances per tenant, it is also reasonable to extend it to, say, **20 instances** in the **"quotas" context**. Otherwise the scenario would eventually fail, since it tries to boot a server 15 times from a single tenant.
.. parsed-literal::
.. code-block:: json
{
"NovaServers.boot_server": [
@ -242,17 +244,17 @@ In the example below, the **"users" context** specifies that the *"NovaServers.b
"times": 15,
"concurrency": 2
},
**"context": {**
**"users": {**
**"tenants": 1,**
**"users_per_tenant": 3**
**},**
**"quotas": {**
**"nova": {**
**"instances": 20**
**}**
**}**
**}**
"context": {
"users": {
"tenants": 1,
"users_per_tenant": 3
},
"quotas": {
"nova": {
"instances": 20
}
}
}
}
]
}
@ -265,18 +267,18 @@ Developer's view
From the developer's view, context management is implemented via **Context classes**. Each context type that can be specified in the task configuration file corresponds to a certain subclass of the base `Context <https://github.com/openstack/rally/blob/master/rally/benchmark/context.py>`_ class. Every context class should implement a fairly simple **interface**:
.. parsed-literal::
.. code-block:: python
from rally.task import context
from rally import consts
@context.configure(name="your_context", *# Corresponds to the context field name in task configuration files*
order=100500, *# a number specifying the priority with which the context should be set up*
hidden=False) *# True if the context cannot be configured through the input task file*
@context.configure(name="your_context", # Corresponds to the context field name in task configuration files
order=100500, # a number specifying the priority with which the context should be set up
hidden=False) # True if the context cannot be configured through the input task file
class YourContext(context.Context):
*"""Yet another context class."""*
"""Yet another context class."""
*# The schema of the context configuration format*
# The schema of the context configuration format
CONFIG_SCHEMA = {
"type": "object",
"$schema": consts.JSON_SCHEMA,
@ -289,17 +291,17 @@ From the developer's view, contexts management is implemented via **Context clas
def __init__(self, context):
super(YourContext, self).__init__(context)
*# Initialize the necessary stuff*
# Initialize the necessary stuff
def setup(self):
*# Prepare the environment in the desired way*
# Prepare the environment in the desired way
def cleanup(self):
*# Cleanup the environment properly*
# Cleanup the environment properly
Consequently, the algorithm of initiating the contexts can be roughly seen as follows:
.. parsed-literal::
.. code-block:: python
context1 = Context1(ctx)
context2 = Context2(ctx)
@ -309,7 +311,7 @@ Consequently, the algorithm of initiating the contexts can be roughly seen as fo
context2.setup()
context3.setup()
*<Run benchmark scenarios in the prepared environment>*
<Run benchmark scenarios in the prepared environment>
context3.cleanup()
context2.cleanup()


@ -70,7 +70,7 @@ Creation
Inherit a class for your plugin from the base *Scenario* class and implement a scenario method inside it as usual. In our scenario, let us first list flavors as an ordinary user, and then repeat the same using admin clients:
.. code-block:: none
.. code-block:: python
from rally.task import atomic
from rally.task import scenario
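# The plugin body is truncated by this hunk; a sketch of the class described in
# the text above follows. The registration decorator ``@scenario.configure()``
# and the client helpers are assumptions based on the imports shown, not a
# verbatim copy of the upstream example.


class ScenarioPlugin(scenario.Scenario):
    """Sample plugin that lists flavors."""

    @atomic.action_timer("list_flavors")
    def _list_flavors(self):
        # List flavors as an ordinary user.
        self.clients("nova").flavors.list()

    @atomic.action_timer("list_flavors_as_admin")
    def _list_flavors_as_admin(self):
        # The same call, but with admin clients.
        self.admin_clients("nova").flavors.list()

    @scenario.configure()
    def list_flavors(self):
        """List flavors as an ordinary user, then as admin."""
        self._list_flavors()
        self._list_flavors_as_admin()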
@ -104,7 +104,7 @@ Usage
You can refer to your plugin scenario in the benchmark task configuration files in the same way as any other scenario:
.. code-block:: none
.. code-block:: json
{
"ScenarioPlugin.list_flavors": [
@ -135,7 +135,7 @@ Creation
Inherit a class for your plugin from the base *Context* class. Then, implement the Context API: the *setup()* method that creates a flavor and the *cleanup()* method that deletes it.
.. code-block:: none
.. code-block:: python
from rally.task import context
from rally.common import log as logging
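# The context example is truncated by this hunk; below is a sketch of a context
# that creates a flavor in setup() and deletes it in cleanup(), as described
# above. The ``osclients`` usage, the config keys and the context dict layout
# are assumptions for illustration only.

from rally import consts
from rally import osclients

LOG = logging.getLogger(__name__)


@context.configure(name="create_flavor", order=1000)
class CreateFlavorContext(context.Context):
    """Creates a test flavor on setup and deletes it on cleanup."""

    CONFIG_SCHEMA = {
        "type": "object",
        "$schema": consts.JSON_SCHEMA,
        "additionalProperties": False,
        "properties": {
            "flavor_name": {"type": "string"},
            "ram": {"type": "integer", "minimum": 1},
            "vcpus": {"type": "integer", "minimum": 1},
            "disk": {"type": "integer", "minimum": 1},
        },
    }

    def _admin_nova(self):
        # Assumes the admin endpoint is exposed in the context dict.
        return osclients.Clients(self.context["admin"]["endpoint"]).nova()

    def setup(self):
        flavor = self._admin_nova().flavors.create(
            self.config.get("flavor_name", "rally_test_flavor"),
            self.config.get("ram", 64),
            self.config.get("vcpus", 1),
            self.config.get("disk", 1))
        self.context["flavor"] = {"id": flavor.id}
        LOG.debug("Created flavor with id %s" % flavor.id)

    def cleanup(self):
        self._admin_nova().flavors.delete(self.context["flavor"]["id"])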
@ -216,7 +216,7 @@ Usage
You can refer to your plugin context in the benchmark task configuration files in the same way as any other context:
.. code-block:: none
.. code-block:: json
{
"Dummy.dummy": [
@ -252,7 +252,7 @@ Creation
Inherit a class for your plugin from the base *SLA* class and implement its API (the *add_iteration(iteration)* and *details()* methods):
.. code-block:: none
.. code-block:: python
from rally.task import sla
from rally.common.i18n import _
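# The SLA example is truncated by this hunk; below is a sketch of an SLA that
# fails when any iteration exceeds a maximum duration. The ``@sla.configure``
# name and the base-class attributes used (``self.success``, ``status()``) are
# assumptions for illustration, not the verbatim upstream example.

from rally import consts


@sla.configure(name="max_duration")
class MaxDuration(sla.SLA):
    """Maximum allowed duration of a single iteration, in seconds."""

    CONFIG_SCHEMA = {
        "type": "number",
        "$schema": consts.JSON_SCHEMA,
        "minimum": 0.0,
    }

    def __init__(self, criterion_value):
        super(MaxDuration, self).__init__(criterion_value)
        self.max_seen = 0.0

    def add_iteration(self, iteration):
        # Called once for every finished iteration with its result dict.
        self.max_seen = max(self.max_seen, iteration["duration"])
        self.success = self.max_seen <= self.criterion_value
        return self.success

    def details(self):
        return (_("Maximum duration %.2fs <= %.2fs - %s") %
                (self.max_seen, self.criterion_value, self.status()))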
@ -294,7 +294,7 @@ Usage
You can refer to your SLA in the benchmark task configuration files in the same way as any other SLA:
.. code-block:: none
.. code-block:: json
{
"Dummy.dummy": [
@ -331,7 +331,7 @@ Creation
Inherit a class for your plugin from the base *ScenarioRunner* class and implement its API (the *_run_scenario()* method):
.. code-block:: none
.. code-block:: python
import random
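# The runner example is truncated by this hunk; below is a sketch of a runner
# that executes a scenario a random number of times between the "min_times" and
# "max_times" values mentioned in the text. ``self.config`` is assumed to hold
# the validated runner section, and the actual iteration loop is elided with
# ``...`` in the same spirit as the other runner snippet in these docs.

from rally import consts
from rally.task import runner


class RandomTimesScenarioRunner(runner.ScenarioRunner):
    """Runs a scenario a random number of times."""

    # What the user puts in "runner": {"type": ...} in the task file.
    __execution_type__ = "random_times"

    CONFIG_SCHEMA = {
        "type": "object",
        "$schema": consts.JSON_SCHEMA,
        "properties": {
            "type": {"type": "string"},
            "min_times": {"type": "integer", "minimum": 1},
            "max_times": {"type": "integer", "minimum": 1},
        },
        "additionalProperties": True,
    }

    def _run_scenario(self, cls, method_name, ctx, args):
        times = random.randint(self.config["min_times"],
                               self.config["max_times"])
        results = ...  # run `times` iterations of cls.method_name(**args)
        return runner.ScenarioRunnerResult(results)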
@ -383,7 +383,7 @@ Usage
You can refer to your scenario runner in the benchmark task configuration files in the same way as any other runner. Don't forget to put your runner-specific parameters into the configuration as well (*"min_times"* and *"max_times"* in our example):
.. code-block:: none
.. code-block:: json
{
"Dummy.dummy": [


@ -21,7 +21,7 @@ Step 0. Installation
The easiest way to install Rally is by running its `installation script
<https://raw.githubusercontent.com/openstack/rally/master/install_rally.sh>`_:
.. code-block:: none
.. code-block:: bash
wget -q -O- https://raw.githubusercontent.com/openstack/rally/master/install_rally.sh | bash
# or using curl:


@ -31,10 +31,10 @@ Registering an OpenStack deployment in Rally
First, you have to provide Rally with an OpenStack deployment it is going to benchmark. This should be done either through `OpenRC files <http://docs.openstack.org/user-guide/content/cli_openrc.html>`_ or through deployment `configuration files <https://github.com/openstack/rally/tree/master/samples/deployments>`_. In case you already have an *OpenRC*, it is extremely simple to register a deployment with the *deployment create* command:
.. code-block:: none
.. code-block:: console
$ . openrc admin admin
$ rally deployment create --fromenv --name=existing
$ rally deployment create --fromenv --name=existing
+--------------------------------------+----------------------------+------------+------------------+--------+
| uuid | created_at | name | status | active |
+--------------------------------------+----------------------------+------------+------------------+--------+
@ -45,7 +45,7 @@ First, you have to provide Rally with an OpenStack deployment it is going to ben
Alternatively, you can put the information about your cloud credentials into a JSON configuration file (let's call it `existing.json <https://github.com/openstack/rally/blob/master/samples/deployments/existing.json>`_). The *deployment create* command has a slightly different syntax in this case:
.. code-block:: none
.. code-block:: console
$ rally deployment create --file=existing.json --name=existing
+--------------------------------------+----------------------------+------------+------------------+--------+
@ -61,7 +61,7 @@ Note the last line in the output. It says that the just created deployment is no
Finally, the *deployment check* command enables you to verify that your current deployment is healthy and ready to be benchmarked:
.. code-block:: none
.. code-block:: console
$ rally deployment check
keystone endpoints are valid and following services are available:
@ -87,7 +87,7 @@ Benchmarking
Now that we have a working and registered deployment, we can start benchmarking it. The sequence of benchmarks to be launched by Rally should be specified in a *benchmark task configuration file* (either in *JSON* or in *YAML* format). Let's try one of the sample benchmark tasks available in `samples/tasks/scenarios <https://github.com/openstack/rally/tree/master/samples/tasks/scenarios>`_, say, the one that boots and deletes multiple servers (*samples/tasks/scenarios/nova/boot-and-delete.json*):
.. code-block:: none
.. code-block:: json
{
"NovaServers.boot_and_delete_server": [
@ -119,7 +119,7 @@ Now that we have a working and registered deployment, we can start benchmarking
To start a benchmark task, run the task start command (you can also add the *-v* option to print more logging information):
.. code-block:: none
.. code-block:: console
$ rally task start samples/tasks/scenarios/nova/boot-and-delete.json
--------------------------------------------------------------------------------
@ -180,7 +180,7 @@ To start a benchmark task, run the task start command (you can also add the *-v*
Note that the Rally input task above uses *regular expressions* to specify the image and flavor name to be used for server creation, since concrete names might differ from installation to installation. If this benchmark task fails, then the reason for that might be a non-existing image/flavor specified in the task. To check what images/flavors are available in the deployment you are currently benchmarking, you might use the *rally show* command:
.. code-block:: none
.. code-block:: console
$ rally show images
+--------------------------------------+-----------------------+-----------+
@ -209,9 +209,9 @@ Report generation
One of the most beautiful things in Rally is its task report generation mechanism. It enables you to create illustrative and comprehensive HTML reports based on the benchmarking data. To create such a report for the last task you have launched and open it at once, call:
.. code-block:: none
.. code-block:: bash
$ rally task report --out=report1.html --open
rally task report --out=report1.html --open
This will produce an HTML page with the overview of all the scenarios that you've included into the last benchmark task completed in Rally (in our case, this is just one scenario, and we will cover the topic of multiple scenarios in one task in :ref:`the next step of our tutorial <tutorial_step_2_input_task_format>`):


@ -30,7 +30,7 @@ real-world cases you will use multiple plugins to test your OpenStack cloud.
Rally makes it very easy to run **different test cases defined in a single task**.
To do so, use the following syntax:
.. code-block:: none
.. code-block:: json
{
"<ScenarioName1>": [<benchmark_config>, <benchmark_config2>, ...]
@ -39,7 +39,7 @@ To do so, use the following syntax:
where *<benchmark_config>*, as before, is a dictionary:
.. code-block:: none
.. code-block:: json
{
"args": { <scenario-specific arguments> },
@ -55,7 +55,7 @@ As an example, let's edit our configuration file from :ref:`step 1 <tutorial_ste
*multiple-scenarios.json*
.. code-block:: none
.. code-block:: json
{
"NovaServers.boot_and_delete_server": [
@ -98,7 +98,7 @@ As an example, let's edit our configuration file from :ref:`step 1 <tutorial_ste
Now you can start this benchmark task as usual:
.. code-block:: none
.. code-block:: console
$ rally task start multiple-scenarios.json
...
@ -129,9 +129,9 @@ Now you can start this benchmark task as usually:
Note that the HTML reports you can generate by typing **rally task report --out=report_name.html** after your benchmark task has completed will get richer as your benchmark task configuration file includes more benchmark scenarios. Let's take a look at the report overview page for a task that covers all the scenarios available in Rally:
.. code-block:: none
.. code-block:: bash
$ rally task report --out=report_multiple_scenarios.html --open
rally task report --out=report_multiple_scenarios.html --open
.. image:: ../images/Report-Multiple-Overview.png
:align: center
@ -144,7 +144,7 @@ Yet another thing you can do in Rally is to launch **the same benchmark scenario
*multiple-configurations.json*
.. code-block:: none
.. code-block:: json
{
"NovaServers.boot_and_delete_server": [
@ -179,7 +179,7 @@ Yet another thing you can do in Rally is to launch **the same benchmark scenario
That's it! You will again get the results for each configuration separately:
.. code-block:: none
.. code-block:: console
$ rally task start --task=multiple-configurations.json
...
@ -209,9 +209,9 @@ That's it! You will get again the results for each configuration separately:
The HTML report will also look similar to what we have seen before:
.. code-block:: none
.. code-block:: bash
$ rally task report --out=report_multiple_configuraions.html --open
rally task report --out=report_multiple_configuraions.html --open
.. image:: ../images/Report-Multiple-Configurations-Overview.png
:align: center


@ -36,7 +36,7 @@ Registering existing users in Rally
The information about existing users in your OpenStack cloud should be passed to Rally at the :ref:`deployment initialization step <tutorial_step_1_setting_up_env_and_running_benchmark_from_samples>`. You have to use the **ExistingCloud** deployment plugin that just provides Rally with credentials of an already existing cloud. The difference from the deployment configuration we've seen previously is that you should set up the *"users"* section with the credentials of already existing users. Let's call this deployment configuration file *existing_users.json*:
.. code-block:: none
.. code-block:: json
{
"type": "ExistingCloud",
@ -64,7 +64,7 @@ The information about existing users in your OpenStack cloud should be passed to
This deployment configuration requires some basic information about the OpenStack cloud like the region name, auth URL, admin user credentials, and any number of users already existing in the system. Rally will use their credentials to generate load against this deployment as soon as we register it as usual:
.. code-block:: none
.. code-block:: console
$ rally deployment create --file existing_users.json --name our_cloud
+--------------------------------------+----------------------------+-----------+------------------+--------+
@ -78,7 +78,7 @@ This deployment configuration requires some basic information about the OpenStac
After that, the **rally show** command lists the resources for each user separately:
.. code-block:: none
.. code-block:: console
$ rally show images
@ -121,7 +121,7 @@ Running benchmark scenarios with existing users
After you have registered a deployment with existing users, don't forget to remove the *"users"* context from your benchmark task configuration if you want to use existing users, like in the following configuration file (*boot-and-delete.json*):
.. code-block:: none
.. code-block:: json
{
"NovaServers.boot_and_delete_server": [
@ -147,10 +147,9 @@ After you have registered a deployment with existing users, don't forget to remo
When you start this task, it will use the existing users *"b1"* and *"b2"* instead of creating the temporary ones:
.. code-block:: none
.. code-block:: bash
$ rally task start samples/tasks/scenarios/nova/boot-and-delete.json
...
rally task start samples/tasks/scenarios/nova/boot-and-delete.json
It goes without saying that support for benchmarking with predefined users simplifies using Rally to generate load against production clouds.


@ -28,7 +28,7 @@ Rally allows you to set success criteria (also called *SLA - Service-Level Agree
To configure the SLA, add the *"sla"* section to the configuration of the corresponding benchmark (the check name is a key associated with its target value). You can combine different success criteria:
.. code-block:: none
.. code-block:: json
{
"NovaServers.boot_and_delete_server": [
@ -59,7 +59,7 @@ Checking SLA
------------
Let us show you how Rally SLAs work using a simple example based on **Dummy benchmark scenarios**. These scenarios actually do not perform any OpenStack-related stuff but are very useful for testing the behavior of Rally. Let us put 2 scenarios in a new task, *test-sla.json* -- one that does nothing and another that just throws an exception:
.. code-block:: none
.. code-block:: json
{
"Dummy.dummy": [
@ -105,14 +105,13 @@ Let us show you how Rally SLA work using a simple example based on **Dummy bench
Note that both scenarios in this task have the **maximum failure rate of 0%** as their **success criterion**. We expect that the first scenario will pass this criterion while the second will fail it. Let's start the task:
.. code-block:: none
.. code-block:: bash
$ rally task start test-sla.json
...
rally task start test-sla.json
After the task completes, run *rally task sla_check* to check the results against the success criteria you defined in the task:
.. code-block:: none
.. code-block:: console
$ rally task sla_check
+-----------------------+-----+--------------+--------+-------------------------------------------------------------------------------------------------------+
@ -130,9 +129,9 @@ SLA in task report
SLA checks are nicely visualized in task reports. Generate one:
.. code-block:: none
.. code-block:: bash
$ rally task report --out=report_sla.html --open
rally task report --out=report_sla.html --open
Benchmark scenarios that have passed SLA have a green check on the overview page:


@ -26,10 +26,9 @@ Basic template syntax
A nice feature of the input task format used in Rally is that it supports the **template syntax** based on `Jinja2 <https://pypi.python.org/pypi/Jinja2>`_. This turns out to be extremely useful when, say, you have a fixed structure of your task but you want to parameterize this task in some way. For example, imagine your input task file (*task.yaml*) runs a set of Nova scenarios:
.. code-block:: none
.. code-block:: yaml
---
NovaServers.boot_and_delete_server:
-
args:
@ -66,10 +65,9 @@ A nice feature of the input task format used in Rally is that it supports the **
In all three scenarios above, the *"^cirros.*uec$"* image is passed to the scenario as an argument (so that these scenarios use an appropriate image while booting servers). Let's say you want to run the same set of scenarios with the same runner/context/sla, but you want to try another image while booting servers to compare the performance. The most elegant solution is then to turn the image name into a template variable:
.. code-block:: none
.. code-block:: yaml
---
NovaServers.boot_and_delete_server:
-
args:
@ -109,23 +107,23 @@ and then pass the argument value for **{{image_name}}** when starting a task wit
1. Pass the argument values directly in the command-line interface (with either a JSON or YAML dictionary):
.. code-block:: none
.. code-block:: bash
$ rally task start task.yaml --task-args '{"image_name": "^cirros.*uec$"}'
$ rally task start task.yaml --task-args 'image_name: "^cirros.*uec$"'
rally task start task.yaml --task-args '{"image_name": "^cirros.*uec$"}'
rally task start task.yaml --task-args 'image_name: "^cirros.*uec$"'
2. Refer to a file that specifies the argument values (JSON/YAML):
.. code-block:: none
.. code-block:: bash
$ rally task start task.yaml --task-args-file args.json
$ rally task start task.yaml --task-args-file args.yaml
rally task start task.yaml --task-args-file args.json
rally task start task.yaml --task-args-file args.yaml
where the files containing argument values should look as follows:
*args.json*:
.. code-block:: none
.. code-block:: json
{
"image_name": "^cirros.*uec$"
@ -133,15 +131,14 @@ where the files containing argument values should look as follows:
*args.yaml*:
.. code-block:: none
.. code-block:: yaml
---
image_name: "^cirros.*uec$"
Passed in either way, these parameter values will be substituted by Rally when starting a task:
.. code-block:: none
.. code-block:: console
$ rally task start task.yaml --task-args "image_name: "^cirros.*uec$""
--------------------------------------------------------------------------------
@ -197,7 +194,7 @@ Using the default values
Note that the Jinja2 template syntax allows you to set the default values for your parameters. With default values set, your task file will work even if you don't parameterize it explicitly while starting a task. The default values should be set using the *{% set ... %}* clause (*task.yaml*):
.. code-block:: none
.. code-block:: yaml
{% set image_name = image_name or "^cirros.*uec$" %}
---
@ -222,7 +219,7 @@ Note that the Jinja2 template syntax allows you to set the default values for yo
If you don't pass the value for *{{image_name}}* while starting a task, the default one will be used:
.. code-block:: none
.. code-block:: console
$ rally task start task.yaml
--------------------------------------------------------------------------------
@ -259,10 +256,9 @@ Rally makes it possible to use all the power of Jinja2 template syntax, includin
As an example, let us make up a task file that will create new users with increasing concurrency. The input task file (*task.yaml*) below uses the Jinja2 **for-endfor** construct to accomplish that:
.. code-block:: none
.. code-block:: yaml
---
KeystoneBasic.create_user:
{% for i in range(2, 11, 2) %}
-
@ -280,7 +276,7 @@ As an example, let us make up a task file that will create new users with increa
In this case, you don't need to pass any arguments via *--task-args/--task-args-file*, but as soon as you start this task, Rally will automatically unfold the for-loop for you:
.. code-block:: none
.. code-block:: console
$ rally task start task.yaml
--------------------------------------------------------------------------------
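To see what this unfolding amounts to, here is a small standalone sketch (plain jinja2 and PyYAML, not Rally's internal code; the runner values are illustrative) that renders a for-endfor template like the one above and counts the resulting benchmark configurations:

.. code-block:: python

   import textwrap

   import jinja2
   import yaml

   TEMPLATE = textwrap.dedent("""\
       ---
       KeystoneBasic.create_user:
       {% for i in range(2, 11, 2) %}
         -
           runner:
             type: "constant"
             times: 10
             concurrency: {{ i }}
       {% endfor %}
       """)

   task = yaml.safe_load(jinja2.Template(TEMPLATE).render())
   # One entry per loop step: concurrency 2, 4, 6, 8 and 10.
   print(len(task["KeystoneBasic.create_user"]))  # -> 5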


@ -24,7 +24,7 @@ With the **"stop on SLA failure"** feature, however, things are much better.
This feature can be easily tested in real life by running one of the most important and simplest benchmark scenarios, *"KeystoneBasic.authenticate"*. This scenario just tries to authenticate using users that were pre-created by Rally. The Rally input task looks as follows (*auth.yaml*):
.. code-block:: none
.. code-block:: yaml
---
Authenticate.keystone:
@ -46,7 +46,7 @@ In human-readable form this input task means: *Create 5 tenants with 10 users in
Let's run the Rally task with **an argument that prescribes Rally to stop load on SLA failure**:
.. code-block:: none
.. code-block:: console
$ rally task start --abort-on-sla-failure auth.yaml
@ -64,16 +64,16 @@ On the resulting table there are 2 interesting things:
To understand better what has happened, let's generate an HTML report:
.. code-block:: none
.. code-block:: bash
$ rally task report --out auth_report.html
rally task report --out auth_report.html
.. image:: ../images/Report-Abort-on-SLA-task-1.png
:align: center
On the chart with durations we can observe that the duration of the authentication requests reaches 65 seconds at the end of the load generation. **Rally stopped the load at the very last moment, just before things went really wrong. The reason it ran so many authentication attempts is that the success criteria were not strict enough.** We had to run a lot of iterations to make the average duration bigger than 5 seconds. Let's choose better success criteria for this task and run it one more time.
.. code-block:: none
.. code-block:: yaml
---
Authenticate.keystone:
@ -100,7 +100,7 @@ Now our task is going to be successful if the following three conditions hold:
Let's run it!
.. code-block:: none
.. code-block:: console
$ rally task start --abort-on-sla-failure auth.yaml


@ -20,7 +20,7 @@ Step 7. Working with multiple OpenStack clouds
Rally is an awesome tool that allows you to work with multiple clouds and can itself deploy them. We already know how to work with :ref:`a single cloud <tutorial_step_1_setting_up_env_and_running_benchmark_from_samples>`. Let us now register 2 clouds in Rally: one that we have access to and another that we know is registered with wrong credentials.
.. code-block:: none
.. code-block:: console
$ . openrc admin admin # openrc with correct credentials
$ rally deployment create --fromenv --name=cloud-1
@ -46,7 +46,7 @@ Rally is an awesome tool that allows you to work with multiple clouds and can it
Let us now list the deployments we have created:
.. code-block:: none
.. code-block:: console
$ rally deployment list
+--------------------------------------+----------------------------+------------+------------------+--------+
@ -58,7 +58,7 @@ Let us now list the deployments we have created:
Note that the second is marked as **"active"** because this is the deployment we have created most recently. This means that it will be used automatically by the commands that need a deployment, like *rally task start ...* or *rally deployment check*, unless a deployment's UUID or name is passed explicitly via the *--deployment* parameter:
.. code-block:: none
.. code-block:: console
$ rally deployment check
Authentication Issues: wrong keystone credentials specified in your endpoint properties. (HTTP 401).
@ -82,7 +82,7 @@ Note that the second is marked as **"active"** because this is the deployment we
You can also switch the active deployment using the **rally deployment use** command:
.. code-block:: none
.. code-block:: console
$ rally deployment use cloud-1
Using deployment: 658b9bae-1f9c-4036-9400-9e71e88864fc
@ -110,7 +110,7 @@ Note the first two lines of the CLI output for the *rally deployment use* comman
One last detail about managing different deployments in Rally is that the *rally task list* command outputs only those tasks that were run against the currently active deployment, and you have to provide the *--all-deployments* parameter to list all the tasks:
.. code-block:: none
.. code-block:: console
$ rally task list
+--------------------------------------+-----------------+----------------------------+----------------+----------+--------+-----+


@ -41,7 +41,7 @@ Rally plugin CLI command is much more convenient way to learn about different
plugins in Rally. This command allows you to list plugins and show detailed
information about them:
.. code-block:: none
.. code-block:: console
$ rally plugin show create_meter_and_get_stats
@ -65,7 +65,7 @@ information about them:
If multiple matching benchmarks are found, the command lists all of them:
.. code-block:: none
.. code-block:: console
$ rally plugin show NovaKeypair
@ -84,9 +84,9 @@ CLI: rally plugin list
This command can be used to list plugins, filtered by name.
.. code-block:: none
.. code-block:: console
rally plugin list --name Keystone
$ rally plugin list --name Keystone
+--------------------------------------------------+-----------+-----------------------------------------------------------------+
| name | namespace | title |


@ -20,7 +20,7 @@ Step 9. Deploying OpenStack from Rally
Along with supporting already existing OpenStack deployments, Rally itself can **deploy OpenStack automatically** by using one of its *deployment engines*. Take a look at the other `deployment configuration file samples <https://github.com/openstack/rally/tree/master/samples/deployments>`_. For example, *devstack-in-existing-servers.json* is a deployment configuration file that tells Rally to deploy OpenStack with **Devstack** on existing servers with the given credentials:
.. code-block:: none
.. code-block:: json
{
"type": "DevstackEngine",
@ -32,7 +32,7 @@ Along with supporting already existing OpenStack deployments, Rally itself can *
You can try to deploy OpenStack in your Virtual Machine using this script. Edit the configuration file with your IP address/user name and run, as usual:
.. code-block:: none
.. code-block:: console
$ rally deployment create --file=samples/deployments/for_deploying_openstack_with_rally/devstack-in-existing-servers.json --name=new-devstack
+---------------------------+----------------------------+--------------+------------------+


@ -45,7 +45,7 @@ Results
1. Concurrency = 4
.. code-block:: none
.. code-block:: json
{'context': {'users': {'concurrent': 30,
'tenants': 12,
@ -66,7 +66,7 @@ Results
2. Concurrency = 16
.. code-block:: none
.. code-block:: json
{'context': {'users': {'concurrent': 30,
'tenants': 12,
@ -86,7 +86,7 @@ Results
3. Concurrency = 32
.. code-block:: none
.. code-block:: json
{'context': {'users': {'concurrent': 30,
'tenants': 12,


@ -69,7 +69,9 @@ https://review.openstack.org/#/c/96300/
Rally was deployed for the cluster using the `ExistingCloud <https://github.com/openstack/rally/blob/master/samples/deployments/existing.json>`_ deployment type.
**Server flavor** ::
**Server flavor**
.. code-block:: console
$ nova flavor-show ram64
+----------------------------+--------------------------------------+
@ -88,7 +90,9 @@ Rally was deployed for cluster using `ExistingCloud <https://github.com/openstac
| vcpus | 1 |
+----------------------------+--------------------------------------+
**Server image** ::
**Server image**
.. code-block:: console
$ nova image-show TestVM
+----------------------------+-------------------------------------------------+
@ -107,7 +111,9 @@ Rally was deployed for cluster using `ExistingCloud <https://github.com/openstac
+----------------------------+-------------------------------------------------+
**Task configuration file (in JSON format):** ::
**Task configuration file (in JSON format):**
.. code-block:: json
{
"NovaServers.boot_server": [