openstack-manuals/doc/common/tables/ceilometer-common.xml
<?xml version='1.0' encoding='UTF-8'?>
<para xmlns="http://docbook.org/ns/docbook" version="5.0">
<!-- Warning: Do not edit this file. It is automatically
generated and your changes will be overwritten.
The tool to do so lives in openstack-doc-tools repository. -->
<table rules="all" xml:id="config_table_ceilometer_common">
<caption>Description of common configuration options</caption>
<col width="50%"/>
<col width="50%"/>
<thead>
<tr>
<th>Configuration option = Default value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<th colspan="2">[DEFAULT]</th>
</tr>
<tr>
<td>backdoor_port = None</td>
<td>(StrOpt) Enable eventlet backdoor. Acceptable values are 0, &lt;port&gt;, and &lt;start&gt;:&lt;end&gt;, where 0 results in listening on a random TCP port number; &lt;port&gt; results in listening on the specified port number (and not enabling backdoor if that port is in use); and &lt;start&gt;:&lt;end&gt; results in listening on the smallest unused port number within the specified range of port numbers. The chosen port is displayed in the service's log file.</td>
</tr>
<tr>
<td>disable_process_locking = False</td>
<td>(BoolOpt) Enables or disables inter-process locks.</td>
</tr>
<tr>
<td>host = localhost</td>
<td>(StrOpt) Name of this node, which must be valid in an AMQP key. Can be an opaque identifier. For ZeroMQ only, must be a valid host name, FQDN, or IP address.</td>
</tr>
<tr>
<td>lock_path = None</td>
<td>(StrOpt) Directory to use for lock files.</td>
</tr>
<tr>
<td>memcached_servers = None</td>
<td>(ListOpt) Memcached servers or None for in-process cache.</td>
</tr>
<tr>
<td>notification_workers = 1</td>
<td>(IntOpt) Number of workers for notification service. A single notification agent is enabled by default.</td>
</tr>
<tr>
<th colspan="2">[central]</th>
</tr>
<tr>
<td>partitioning_group_prefix = None</td>
<td>(StrOpt) Work-load partitioning group prefix. Use only if you want to run multiple central agents with different config files. For each sub-group of the central agent pool with the same partitioning_group_prefix, a disjoint subset of pollsters should be loaded.</td>
</tr>
<tr>
<th colspan="2">[compute]</th>
</tr>
<tr>
<td>workload_partitioning = False</td>
<td>(BoolOpt) Enable work-load partitioning, allowing multiple compute agents to be run simultaneously.</td>
</tr>
<tr>
<th colspan="2">[coordination]</th>
</tr>
<tr>
<td>backend_url = None</td>
<td>(StrOpt) The backend URL to use for distributed coordination. If left empty, the per-deployment central agent and per-host compute agent won't do workload partitioning and will only function correctly if a single instance of that service is running.</td>
</tr>
<tr>
<td>heartbeat = 1.0</td>
<td>(FloatOpt) Number of seconds between heartbeats for distributed coordination.</td>
</tr>
</tbody>
</table>
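The table above documents each option individually; the fragment below is a minimal, illustrative <filename>ceilometer.conf</filename> sketch showing how the partitioning-related options might be combined. All values shown (the memcached coordination URL, the <literal>region-one</literal> group prefix, and the worker count) are placeholders chosen for illustration, not recommended defaults.
<programlisting language="ini">[DEFAULT]
# Number of notification agent workers (default is 1).
notification_workers = 2

[coordination]
# tooz coordination backend; a memcached URL is shown here purely as an example.
backend_url = memcached://controller:11211
heartbeat = 1.0

[central]
# Only needed when running multiple central agents with different
# configuration files; "region-one" is a placeholder prefix.
partitioning_group_prefix = region-one

[compute]
# Allow multiple compute agents to share the polling workload.
workload_partitioning = True</programlisting>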
</para>