Rally provides a framework for performance analysis and benchmarking of individual OpenStack components as well as full production OpenStack cloud deployments.
Boris Pavlovic 64d34ae594 Refactor generic cleanup mechanism part 3
The current cleanup mechanism is awful. It's hardcoded, does not retry on
failures, contains a lot of mistakes, and in some places it is too fast (e.g. in
the case of deleting VMs) while in others it is too slow (e.g. deleting users).
As well, it mixes up the cleanup mechanism and the resource cleanup mechanism.
To resolve all these issues, this patch introduces a new cleanup engine.
It's resource based, so to add a new resource you should just make a subclass
of base.Resource and, if needed, override some of its methods (list, delete,
is_deleted), and that's all.
All the complexity of managing:
0) waiting until async deletion is finished
1) retry-on-failure logic
2) graceful failure handling
3) parallelization
4) plugin support
is hidden deep inside the cleanup engine.
+ bonus: we are now able to clean up a single resource (without cleaning up the
whole service)

PART 3:
-------

*) Refactor all contexts' cleanup methods to use the new generic cleanup
   engine instead of cleanup.utils
*) Remove the obsolete cleanup.utils
*) Fix all tests

bp benchmark-context-cleanup-refactor

Change-Id: I70557e6ebb56bbe565792d9ee854d3e78428a881
2014-11-14 16:44:00 +04:00
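Based on the description above, a resource plugin for the new engine might look
roughly like the sketch below. This is illustrative only: apart from
base.Resource and the list/delete/is_deleted hooks named in the commit message,
the module path, helper attributes and client calls are assumptions, not the
actual Rally API::

    from rally.benchmark.context.cleanup import base  # assumed module path


    class NovaServer(base.Resource):
        """Cleanup plugin for Nova servers (illustrative only)."""

        def list(self):
            # Enumerate the resources this plugin is responsible for;
            # the engine decides which of them to delete.
            return self._manager().list()  # _manager() is an assumed helper

        def delete(self):
            # Trigger the (possibly async) deletion; the engine handles
            # retries, parallelization and waiting for completion.
            self.raw_resource.delete()  # raw_resource is an assumed attribute

        def is_deleted(self):
            # Polled by the engine until the deletion has really finished.
            try:
                self._manager().get(self.raw_resource.id)
            except Exception:
                return True
            return False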

Rally

What is Rally

Rally is a Benchmark-as-a-Service project for OpenStack.

Rally is intended to provide the community with a benchmarking tool that is capable of performing specific, complex and reproducible test cases against real deployment scenarios.

If you are here, you are probably familiar with OpenStack and you also know that it's a really huge ecosystem of cooperating services. When something fails, performs slowly or doesn't scale, it's really hard to answer what happened, why it happened and where. Another reason you might be here is that you would like to build an OpenStack CI/CD system that allows you to continuously improve the SLA, performance and stability of OpenStack.

The OpenStack QA team mostly works on CI/CD that ensures that new patches don't break a specific single-node installation of OpenStack. On the other hand, it's clear that such CI/CD is only an indicator and does not cover all cases (e.g. a cloud that works well as a single-node installation will not necessarily continue to do so as a 1,000-server installation under high load). Rally aims to fix this and help answer the question "How does OpenStack work at scale?". To make that possible, we are going to automate and unify all the steps required for benchmarking OpenStack at scale: multi-node OpenStack deployment, verification, benchmarking & profiling.

Rally workflow can be visualized by the following diagram:

[Diagram: Rally architecture]

Architecture

In terms of software architecture, Rally consists of 4 main components:

  1. Server Providers - provide servers (virtual servers) with SSH access in a single L3 network.
  2. Deploy Engines - deploy an OpenStack cloud on the servers supplied by Server Providers.
  3. Verification - runs Tempest (or another specific set of tests) against the deployed cloud, collects the results & presents them in a human-readable form.
  4. Benchmark Engine - allows writing parameterized benchmark scenarios & running them against the cloud (see the sketch after this list).
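
A parameterized benchmark scenario might look roughly like the following
sketch. It is illustrative only: the module path and the registration
decorator are assumptions rather than the exact Rally API; the point is that
scenario arguments come from the task file, so one scenario definition can be
exercised under many different loads::

    import time

    from rally.benchmark.scenarios import base  # assumed module path


    class DummyScenarios(base.Scenario):

        @base.scenario()  # assumed registration decorator
        def sleep_and_return(self, sleep=0.5):
            # ``sleep`` is supplied by the task file rather than hardcoded
            # here, which is what makes the scenario parameterized.
            time.sleep(sleep)

A task file then binds concrete argument values to the scenario together with
a load definition (e.g. run it N times with concurrency M) and a context
(e.g. temporary users/tenants to run under).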

Use Cases

There are 3 major high-level Rally use cases:

[Diagram: Rally use cases]

Typical cases where Rally aims to help are:

  • Automate measuring & profiling focused on how new code changes affect OpenStack performance;
  • Use the Rally profiler to detect scaling & performance issues;
  • Investigate how different deployments affect OpenStack performance:
    • Find the set of suitable OpenStack deployment architectures;
    • Create deployment specifications for different loads (number of controllers, Swift nodes, etc.);
  • Automate the search for the hardware best suited to a particular OpenStack cloud;
  • Automate the generation of production cloud specifications:
    • Determine the terminal loads for basic cloud operations: VM start & stop, block device create/destroy & various OpenStack API methods;
    • Check the performance of basic cloud operations under different loads.

Wiki page:

https://wiki.openstack.org/wiki/Rally

Rally/HowTo:

https://wiki.openstack.org/wiki/Rally/HowTo

Launchpad page:

https://launchpad.net/rally

Code is hosted on GitHub:

https://github.com/stackforge/rally

Trello board:

https://trello.com/b/DoD8aeZy/rally