[docs] apply sphinx-lint to docs

This change corrects the detected sphinx-lint issues in the existing
docs and updates the contributor devstack guide to call out which
steps are required and which are advanced.

Mostly the changes were simple fixes, like replacing the configurable
default role with explicit literal syntax: `term` -> ``term``.

Some inline Note: comments have been promoted to .. note:: blocks,
and literal blocks (::) have been promoted to .. code-block:: <language>
directives.

Change-Id: I6320c313d22bf542ad407169e6538dc6acf79901
This commit is contained in:
Sean Mooney 2024-11-08 01:37:58 +00:00
parent 5fadd0de57
commit 1f8d06e075
9 changed files with 215 additions and 157 deletions
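As a side note on the `term` -> ``term`` cleanups this commit makes: the promotion can be sketched with a small, hypothetical helper (not part of the commit, and deliberately naive; real role markup such as :ref:`target` needs care):

```python
import re


def promote_literals(line: str) -> str:
    """Promote lone single-backtick spans like `term` to RST ``term``.

    Naive illustration only: the lookarounds skip spans that already
    use double backticks or that are part of a role like :ref:`target`.
    """
    return re.sub(
        r"(?<![`:\w])`([^`]+)`(?!`)",
        r"``\1``",
        line,
    )


print(promote_literals("use the `local.conf` file"))
# -> use the ``local.conf`` file
```

In practice sphinx-lint only reports these spans; the actual edits in this commit were made by hand.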


@@ -44,12 +44,14 @@ repos:
     hooks:
       - id: codespell
         args: ['--ignore-words=doc/dictionary.txt']
-  # FIXME(sean-k-mooney): we have many sphinx issues fix them
-  # in a separate commit to make it easier to review
-  # - repo: https://github.com/sphinx-contrib/sphinx-lint
-  #   rev: v1.0.0
-  #   hooks:
-  #     - id: sphinx-lint
-  #       args: [--enable=default-role]
-  #       files: ^doc/|releasenotes|api-guide
-  #       types: [rst]
+  - repo: https://github.com/sphinx-contrib/sphinx-lint
+    rev: v1.0.0
+    hooks:
+      - id: sphinx-lint
+        args: [--enable=default-role]
+        files: ^doc/|releasenotes|api-guide
+        types: [rst]
+  - repo: https://github.com/PyCQA/doc8
+    rev: v1.1.2
+    hooks:
+      - id: doc8


@@ -1,7 +1,10 @@
-openstackdocstheme>=2.2.1 # Apache-2.0
-sphinx>=2.0.0,!=2.1.0 # BSD
-sphinxcontrib-pecanwsme>=0.8.0 # Apache-2.0
+sphinx>=2.1.1 # BSD
 sphinxcontrib-svg2pdfconverter>=0.1.0 # BSD
-reno>=3.1.0 # Apache-2.0
+sphinxcontrib-pecanwsme>=0.8.0 # Apache-2.0
 sphinxcontrib-apidoc>=0.2.0 # BSD
+# openstack
 os-api-ref>=1.4.0 # Apache-2.0
+openstackdocstheme>=2.2.1 # Apache-2.0
+# releasenotes
+reno>=3.1.0 # Apache-2.0


@@ -285,7 +285,7 @@ Audit and interval (in case of CONTINUOUS type). There is three types of Audit:
 ONESHOT, CONTINUOUS and EVENT. ONESHOT Audit is launched once and if it
 succeeded executed new action plan list will be provided; CONTINUOUS Audit
 creates action plans with specified interval (in seconds or cron format, cron
-interval can be used like: `*/5 * * * *`), if action plan
+interval can be used like: ``*/5 * * * *``), if action plan
 has been created, all previous action plans get CANCELLED state;
 EVENT audit is launched when receiving webhooks API.
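On the cron form mentioned in this hunk: as a rough illustration of what an interval like ``*/5 * * * *`` means, the following simplified check handles just the minute field (hypothetical code; a real deployment relies on a full cron library, not this sketch):

```python
def minute_matches(field: str, minute: int) -> bool:
    """Match a cron minute field against a minute value (0-59).

    Simplified sketch: supports only '*', '*/N' steps, and plain
    numbers, unlike a full cron implementation.
    """
    if field == "*":
        return True
    if field.startswith("*/"):
        return minute % int(field[2:]) == 0
    return minute == int(field)


# '*/5' in the minute position fires every five minutes
print([m for m in range(20) if minute_matches("*/5", m)])
# -> [0, 5, 10, 15]
```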


@@ -16,7 +16,7 @@ multinode environment to use.
 You can set up the Watcher services quickly and easily using a Watcher
 DevStack plugin. See `PluginModelDocs`_ for information on DevStack's plugin
 model. To enable the Watcher plugin with DevStack, add the following to the
-`[[local|localrc]]` section of your controller's `local.conf` to enable the
+``[[local|localrc]]`` section of your controller's ``local.conf`` to enable the
 Watcher plugin::

     enable_plugin watcher https://opendev.org/openstack/watcher
@@ -32,7 +32,7 @@ Quick Devstack Instructions with Datasources
 Watcher requires a datasource to collect metrics from compute nodes and
 instances in order to execute most strategies. To enable this a
-`[[local|localrc]]` to setup DevStack for some of the supported datasources
+``[[local|localrc]]`` to setup DevStack for some of the supported datasources
 is provided. These examples specify the minimal configuration parameters to
 get both Watcher and the datasource working but can be expanded is desired.
@@ -41,54 +41,60 @@ Gnocchi
 With the Gnocchi datasource most of the metrics for compute nodes and
 instances will work with the provided configuration but metrics that
-require Ironic such as `host_airflow` and `host_power` will still be
-unavailable as well as `instance_l3_cpu_cache`::
+require Ironic such as ``host_airflow`` and ``host_power`` will still be
+unavailable as well as ``instance_l3_cpu_cache``
+
+.. code-block:: ini

     [[local|localrc]]
     enable_plugin watcher https://opendev.org/openstack/watcher
     enable_plugin watcher-dashboard https://opendev.org/openstack/watcher-dashboard
     enable_plugin ceilometer https://opendev.org/openstack/ceilometer.git
-    CEILOMETER_BACKEND=gnocchi
     enable_plugin aodh https://opendev.org/openstack/aodh
     enable_plugin panko https://opendev.org/openstack/panko
+    CEILOMETER_BACKEND=gnocchi

     [[post-config|$NOVA_CONF]]
     [DEFAULT]
     compute_monitors=cpu.virt_driver

 Detailed DevStack Instructions
 ==============================

 #. Obtain N (where N >= 1) servers (virtual machines preferred for DevStack).
    One of these servers will be the controller node while the others will be
    compute nodes. N is preferably >= 3 so that you have at least 2 compute
    nodes, but in order to stand up the Watcher services only 1 server is
    needed (i.e., no computes are needed if you want to just experiment with
    the Watcher services). These servers can be VMs running on your local
    machine via VirtualBox if you prefer. DevStack currently recommends that
    you use Ubuntu 16.04 LTS. The servers should also have connections to the
    same network such that they are all able to communicate with one another.

-#. For each server, clone the DevStack repository and create the stack user::
+#. For each server, clone the DevStack repository and create the stack user
+
+   .. code-block:: bash

       sudo apt-get update
       sudo apt-get install git
       git clone https://opendev.org/openstack/devstack.git
       sudo ./devstack/tools/create-stack-user.sh

    Now you have a stack user that is used to run the DevStack processes. You
-   may want to give your stack user a password to allow SSH via a password::
+   may want to give your stack user a password to allow SSH via a password
+
+   .. code-block:: bash

       sudo passwd stack

-#. Switch to the stack user and clone the DevStack repo again::
+#. Switch to the stack user and clone the DevStack repo again
+
+   .. code-block:: bash

       sudo su stack
       cd ~
       git clone https://opendev.org/openstack/devstack.git

 #. For each compute node, copy the provided `local.conf.compute`_ example file
    to the compute node's system at ~/devstack/local.conf. Make sure the
@@ -111,24 +117,30 @@ Detailed DevStack Instructions
    the HOST_IP value is changed appropriately - i.e., HOST_IP is set to the IP
    address of the controller node.

-   Note: if you want to use another Watcher git repository (such as a local
-   one), then change the enable plugin line::
+   .. note::
+
+      If you want to use another Watcher git repository (such as a local
+      one), then change the enable plugin line
+
+      .. code-block:: bash

-     enable_plugin watcher <your_local_git_repo> [optional_branch]
+         enable_plugin watcher <your_local_git_repo> [optional_branch]

    If you do this, then the Watcher DevStack plugin will try to pull the
-   python-watcherclient repo from <your_local_git_repo>/../, so either make
-   sure that is also available or specify WATCHERCLIENT_REPO in the local.conf
+   python-watcherclient repo from ``<your_local_git_repo>/../``, so either make
+   sure that is also available or specify WATCHERCLIENT_REPO in the ``local.conf``
    file.

-   Note: if you want to use a specific branch, specify WATCHER_BRANCH in the
-   local.conf file. By default it will use the master branch.
+   .. note::
+
+      If you want to use a specific branch, specify WATCHER_BRANCH in the
+      local.conf file. By default it will use the master branch.

-   Note: watcher-api will default run under apache/httpd, set the variable
-   WATCHER_USE_MOD_WSGI=FALSE if you do not wish to run under apache/httpd.
-   For development environment it is suggested to set WATHCER_USE_MOD_WSGI
-   to FALSE. For Production environment it is suggested to keep it at the
-   default TRUE value.
+   .. note::
+
+      watcher-api runs under apache/httpd by default; set the variable
+      WATCHER_USE_MOD_WSGI=FALSE if you do not wish to run under apache/httpd.
+      For a development environment it is suggested to set WATCHER_USE_MOD_WSGI
+      to FALSE. For a production environment it is suggested to keep it at the
+      default TRUE value.

 #. Start stacking from the controller node::
@@ -136,8 +148,9 @@ Detailed DevStack Instructions

 #. Start stacking on each of the compute nodes using the same command.

-#. Configure the environment for live migration via NFS. See the
-   `Multi-Node DevStack Environment`_ section for more details.
+.. seealso::
+
+   Configure the environment for live migration via NFS. See the
+   `Multi-Node DevStack Environment`_ section for more details.

 .. _local.conf.controller: https://github.com/openstack/watcher/tree/master/devstack/local.conf.controller
 .. _local.conf.compute: https://github.com/openstack/watcher/tree/master/devstack/local.conf.compute
@@ -149,60 +162,19 @@ Since deploying Watcher with only a single compute node is not very useful, a
 few tips are given here for enabling a multi-node environment with live
 migration.

-Configuring NFS Server
-----------------------
-
-If you would like to use live migration for shared storage, then the controller
-can serve as the NFS server if needed::
-
-    sudo apt-get install nfs-kernel-server
-    sudo mkdir -p /nfs/instances
-    sudo chown stack:stack /nfs/instances
-
-Add an entry to `/etc/exports` with the appropriate gateway and netmask
-information::
-
-    /nfs/instances <gateway>/<netmask>(rw,fsid=0,insecure,no_subtree_check,async,no_root_squash)
-
-Export the NFS directories::
-
-    sudo exportfs -ra
-
-Make sure the NFS server is running::
-
-    sudo service nfs-kernel-server status
-
-If the server is not running, then start it::
-
-    sudo service nfs-kernel-server start
-
-Configuring NFS on Compute Node
--------------------------------
-
-Each compute node needs to use the NFS server to hold the instance data::
-
-    sudo apt-get install rpcbind nfs-common
-    mkdir -p /opt/stack/data/instances
-    sudo mount <nfs-server-ip>:/nfs/instances /opt/stack/data/instances
-
-If you would like to have the NFS directory automatically mounted on reboot,
-then add the following to `/etc/fstab`::
-
-    <nfs-server-ip>:/nfs/instances /opt/stack/data/instances nfs auto 0 0
-
-Edit `/etc/libvirt/libvirtd.conf` to make sure the following values are set::
-
-    listen_tls = 0
-    listen_tcp = 1
-    auth_tcp = "none"
-
-Edit `/etc/default/libvirt-bin`::
-
-    libvirtd_opts="-d -l"
-
-Restart the libvirt service::
-
-    sudo service libvirt-bin restart
+.. note::
+
+   Nova supports live migration with local block storage, so by default NFS
+   is not required and is considered an advanced configuration.
+
+   The minimum requirements for live migration are:
+
+   - all hostnames are resolvable on each host
+   - all hosts have a passwordless ssh key that is trusted by the other hosts
+   - all hosts have a known_hosts file that lists each host
+
+   If these requirements are met live migration will be possible.
+   Shared storage such as ceph, booting from a cinder volume, or nfs is
+   recommended when testing evacuate if you want to preserve vm data.

 Setting up SSH keys between compute nodes to enable live migration
 ------------------------------------------------------------------
@@ -231,22 +203,91 @@ must exist in every other compute node's stack user's authorized_keys file and
 every compute node's public ECDSA key needs to be in every other compute
 node's root user's known_hosts file.

-Disable serial console
-----------------------
-
-Serial console needs to be disabled for live migration to work.
-
-On both the controller and compute node, in /etc/nova/nova.conf
-
-    [serial_console]
-    enabled = False
-
-Alternatively, in devstack's local.conf:
-
-    [[post-config|$NOVA_CONF]]
-    [serial_console]
-    #enabled=false
+Configuring NFS Server (ADVANCED)
+---------------------------------
+
+If you would like to use live migration for shared storage, then the controller
+can serve as the NFS server if needed
+
+.. code-block:: bash
+
+   sudo apt-get install nfs-kernel-server
+   sudo mkdir -p /nfs/instances
+   sudo chown stack:stack /nfs/instances
+
+Add an entry to ``/etc/exports`` with the appropriate gateway and netmask
+information
+
+.. code-block:: bash
+
+   /nfs/instances <gateway>/<netmask>(rw,fsid=0,insecure,no_subtree_check,async,no_root_squash)
+
+Export the NFS directories
+
+.. code-block:: bash
+
+   sudo exportfs -ra
+
+Make sure the NFS server is running
+
+.. code-block:: bash
+
+   sudo service nfs-kernel-server status
+
+If the server is not running, then start it
+
+.. code-block:: bash
+
+   sudo service nfs-kernel-server start
+
+Configuring NFS on Compute Node (ADVANCED)
+------------------------------------------
+
+Each compute node needs to use the NFS server to hold the instance data
+
+.. code-block:: bash
+
+   sudo apt-get install rpcbind nfs-common
+   mkdir -p /opt/stack/data/instances
+   sudo mount <nfs-server-ip>:/nfs/instances /opt/stack/data/instances
+
+If you would like to have the NFS directory automatically mounted on reboot,
+then add the following to ``/etc/fstab``
+
+.. code-block:: bash
+
+   <nfs-server-ip>:/nfs/instances /opt/stack/data/instances nfs auto 0 0
+
+Configuring libvirt to listen on tcp (ADVANCED)
+-----------------------------------------------
+
+.. note::
+
+   By default nova will use ssh as a transport for live migration;
+   if you have a low bandwidth connection you can use tcp instead,
+   however this is generally not recommended.
+
+Edit ``/etc/libvirt/libvirtd.conf`` to make sure the following values are set
+
+.. code-block:: ini
+
+   listen_tls = 0
+   listen_tcp = 1
+   auth_tcp = "none"
+
+Edit ``/etc/default/libvirt-bin``
+
+.. code-block:: ini
+
+   libvirtd_opts="-d -l"
+
+Restart the libvirt service
+
+.. code-block:: bash
+
+   sudo service libvirt-bin restart

 VNC server configuration
 ------------------------
@@ -254,13 +295,18 @@ VNC server configuration
 The VNC server listening parameter needs to be set to any address so
 that the server can accept connections from all of the compute nodes.

-On both the controller and compute node, in /etc/nova/nova.conf
+On both the controller and compute node, in ``/etc/nova/nova.conf``

-    vncserver_listen = 0.0.0.0
+.. code-block:: ini
+
+   [vnc]
+   server_listen = "0.0.0.0"

-Alternatively, in devstack's local.conf:
+Alternatively, in devstack's ``local.conf``:

-    VNCSERVER_LISTEN=0.0.0.0
+.. code-block:: bash
+
+   VNCSERVER_LISTEN="0.0.0.0"

 Environment final checkup


@@ -43,7 +43,7 @@ different version of the above, please document your configuration here!
 Getting the latest code
 =======================

-Make a clone of the code from our `Git repository`:
+Make a clone of the code from our ``Git repository``:

 .. code-block:: bash
@@ -72,9 +72,9 @@ These dependencies can be installed from PyPi_ using the Python tool pip_.
 .. _PyPi: https://pypi.org/
 .. _pip: https://pypi.org/project/pip

-However, your system *may* need additional dependencies that `pip` (and by
+However, your system *may* need additional dependencies that ``pip`` (and by
 extension, PyPi) cannot satisfy. These dependencies should be installed
-prior to using `pip`, and the installation method may vary depending on
+prior to using ``pip``, and the installation method may vary depending on
 your platform.

 * Ubuntu 16.04::
@@ -141,7 +141,7 @@ forget to activate it:

   $ workon watcher

-You should then be able to `import watcher` using Python without issue:
+You should then be able to ``import watcher`` using Python without issue:

 .. code-block:: bash


@@ -90,15 +90,15 @@ parameter will need to specify the type of http protocol and the use of
 plain text http is strongly discouraged due to the transmission of the access
 token. Additionally the path to the proxy interface needs to be supplied as
 well in case Grafana is placed in a sub directory of the web server. An example
-would be: `https://mygrafana.org/api/datasource/proxy/` were
-`/api/datasource/proxy` is the default path without any subdirectories.
+would be: ``https://mygrafana.org/api/datasource/proxy/`` where
+``/api/datasource/proxy`` is the default path without any subdirectories.
 Likewise, this parameter can not be placed in the yaml.

 To prevent many errors from occurring and potentially filing the logs files it
 is advised to specify the desired datasource in the configuration as it would
 prevent the datasource manager from having to iterate and try possible
-datasources with the launch of each audit. To do this specify `datasources` in
-the `[watcher_datasources]` group.
+datasources with the launch of each audit. To do this specify
+``datasources`` in the ``[watcher_datasources]`` group.

 The current configuration that is required to be placed in the traditional
 configuration file would look like the following:
@@ -120,7 +120,7 @@ traditional configuration file or in the yaml, however, it is not advised to
 mix and match but in the case it does occur the yaml would override the
 settings from the traditional configuration file. All five of these parameters
 are dictionaries mapping specific metrics to a configuration parameter. For
-instance the `project_id_map` will specify the specific project id in Grafana
+instance the ``project_id_map`` will specify the specific project id in Grafana
 to be used. The parameters are named as follow:

 * project_id_map
@@ -149,10 +149,10 @@ project_id

 The project id's can only be determined by someone with the admin role in
 Grafana as that role is required to open the list of projects. The list of
-projects can be found on `/datasources` in the web interface but
+projects can be found on ``/datasources`` in the web interface but
 unfortunately it does not immediately display the project id. To display
 the id one can best hover the mouse over the projects and the url will show the
-project id's for example `/datasources/edit/7563`. Alternatively the entire
+project id's for example ``/datasources/edit/7563``. Alternatively the entire
 list of projects can be retrieved using the `REST api`_. To easily make
 requests to the REST api a tool such as Postman can be used.
@@ -239,18 +239,24 @@ conversion from bytes to megabytes.

   SELECT value/1000000 FROM memory...

-Queries will be formatted using the .format string method within Python. This
-format will currently have five attributes exposed to it labeled `{0}` to
-`{4}`. Every occurrence of these characters within the string will be replaced
+Queries will be formatted using the .format string method within Python.
+This format will currently have five attributes exposed to it labeled
+``{0}`` through ``{4}``.
+Every occurrence of these characters within the string will be replaced
 with the specific attribute.

-- {0} is the aggregate typically `mean`, `min`, `max` but `count` is also
-  supported.
-- {1} is the attribute as specified in the attribute parameter.
-- {2} is the period of time to aggregate data over in seconds.
-- {3} is the granularity or the interval between data points in seconds.
-- {4} is translator specific and in the case of InfluxDB it will be used for
-  retention_periods.
+{0}
+  is the aggregate typically ``mean``, ``min``, ``max`` but ``count``
+  is also supported.
+{1}
+  is the attribute as specified in the attribute parameter.
+{2}
+  is the period of time to aggregate data over in seconds.
+{3}
+  is the granularity or the interval between data points in seconds.
+{4}
+  is translator specific and in the case of InfluxDB it will be used for
+  retention_periods.
 **InfluxDB**
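To make the substitution in the hunk above concrete: the five attributes are applied with Python's ``str.format``. The template below is illustrative only (not Watcher's shipped query), showing where ``{0}`` through ``{4}`` land:

```python
# Hypothetical InfluxDB-style template; the placeholders are:
# {0} aggregate, {1} attribute, {2} period (s), {3} granularity (s),
# {4} translator-specific (e.g. an InfluxDB retention period prefix).
template = (
    'SELECT {0}("{1}") FROM {4}cpu '
    "WHERE time > now() - {2}s GROUP BY time({3}s)"
)

query = template.format("mean", "cpu_util", 300, 60, "one_week.")
print(query)
# -> SELECT mean("cpu_util") FROM one_week.cpu WHERE time > now() - 300s GROUP BY time(60s)
```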


@@ -9,7 +9,7 @@
       ...
       connection = mysql+pymysql://watcher:WATCHER_DBPASS@controller/watcher?charset=utf8

-* In the `[DEFAULT]` section, configure the transport url for RabbitMQ message broker.
+* In the ``[DEFAULT]`` section, configure the transport url for RabbitMQ message broker.

   .. code-block:: ini
@@ -20,7 +20,7 @@

   Replace the RABBIT_PASS with the password you chose for OpenStack user in RabbitMQ.

-* In the `[keystone_authtoken]` section, configure Identity service access.
+* In the ``[keystone_authtoken]`` section, configure Identity service access.

   .. code-block:: ini
@@ -39,7 +39,7 @@

   Replace WATCHER_PASS with the password you chose for the watcher user in the Identity service.

 * Watcher interacts with other OpenStack projects via project clients, in order to instantiate these
-  clients, Watcher requests new session from Identity service. In the `[watcher_clients_auth]` section,
+  clients, Watcher requests new session from Identity service. In the ``[watcher_clients_auth]`` section,
   configure the identity service access to interact with other OpenStack project clients.

   .. code-block:: ini
@@ -56,7 +56,7 @@

   Replace WATCHER_PASS with the password you chose for the watcher user in the Identity service.

-* In the `[api]` section, configure host option.
+* In the ``[api]`` section, configure host option.

   .. code-block:: ini
@@ -66,7 +66,7 @@

   Replace controller with the IP address of the management network interface on your controller node, typically 10.0.0.11 for the first node in the example architecture.

-* In the `[oslo_messaging_notifications]` section, configure the messaging driver.
+* In the ``[oslo_messaging_notifications]`` section, configure the messaging driver.

   .. code-block:: ini


@@ -132,8 +132,8 @@ audit) that you want to use.

   $ openstack optimize audit create -a <your_audit_template>

 If your_audit_template was created by --strategy <your_strategy>, and it
-defines some parameters (command `watcher strategy show` to check parameters
-format), your can append `-p` to input required parameters:
+defines some parameters (command ``watcher strategy show`` to check parameters
+format), you can append ``-p`` to input required parameters:

 .. code:: bash


@@ -1,7 +1,8 @@
 Rally job
 =========

-We provide, with Watcher, a Rally plugin you can use to benchmark the optimization service.
+We provide, with Watcher, a Rally plugin you can use to benchmark
+the optimization service.

 To launch this task with configured Rally you just need to run: