[config-ref] Update nova tables

Change-Id: I7b5338c441217822b0dfe2a9a4522ab190801cc7
Closes-Bug: #1539901
Closes-Bug: #1541181
Closes-Bug: #1534872
Closes-Bug: #1542421
Closes-Bug: #1531025
Closes-Bug: #1536840
Partial-Bug: #1487685
Partial-Bug: #1532210
Partial-Bug: #1532971
venkatamahesh 2016-02-22 21:07:27 +05:30
parent b5f606adb1
commit 2cae8a1c7d
38 changed files with 275 additions and 417 deletions

View File

@@ -20,6 +20,8 @@ OpenStack Compute service, run
.. include:: ../tables/nova-ca.rst
.. include:: ../tables/nova-cache.rst
.. include:: ../tables/nova-cells.rst
.. include:: ../tables/nova-common.rst
@@ -38,8 +40,6 @@ OpenStack Compute service, run
.. include:: ../tables/nova-debug.rst
.. include:: ../tables/nova-ec2.rst
.. include:: ../tables/nova-ephemeral_storage_encryption.rst
.. include:: ../tables/nova-fping.rst
@@ -107,7 +107,3 @@ OpenStack Compute service, run
.. include:: ../tables/nova-vpn.rst
.. include:: ../tables/nova-xen.rst
.. include:: ../tables/nova-xvpvncproxy.rst
.. include:: ../tables/nova-zookeeper.rst

View File

@@ -82,10 +82,6 @@ To enable SSL, set the ``qpid_protocol`` option:
qpid_protocol=ssl
This table lists additional options that can be used to configure the Qpid
messaging driver for OpenStack Oslo RPC. These options are used infrequently.
.. include:: ../tables/nova-qpid.rst
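Putting the SSL instruction above together with the usual transport settings, an SSL-enabled Qpid transport might look like the following sketch (broker address is illustrative, and depending on the release these options may live in ``[DEFAULT]`` or ``[oslo_messaging_qpid]``):

```ini
[DEFAULT]
rpc_backend = qpid
# Illustrative broker hostname
qpid_hostname = broker.example.org
# 5671 is the conventional AMQPS port
qpid_port = 5671
qpid_protocol = ssl
```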
Configure ZeroMQ
~~~~~~~~~~~~~~~~

View File

@@ -22,11 +22,5 @@
- (StrOpt) The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option.
* - ``default_publisher_id`` = ``None``
- (StrOpt) Default publisher_id for outgoing notifications
* - ``notification_driver`` = ``[]``
- (MultiStrOpt) The driver(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop
* - ``notification_topics`` = ``notifications``
- (ListOpt) AMQP topic used for OpenStack notifications.
* - ``notification_transport_url`` = ``None``
- (StrOpt) A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC.
* - ``transport_url`` = ``None``
- (StrOpt) A URL representing the messaging driver to use and its full configuration. If not set, we fall back to the rpc_backend option and driver specific configuration.
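A minimal, illustrative sketch combining the notification options above in ``nova.conf`` (broker URL and credentials are assumptions, not values from this change):

```ini
[DEFAULT]
# Emit notifications through the oslo.messaging v2 driver
notification_driver = messagingv2
# AMQP topic for OpenStack notifications (the default)
notification_topics = notifications
# Leave notification_transport_url unset to reuse the RPC transport,
# or point notifications at a dedicated broker (illustrative URL):
notification_transport_url = rabbit://nova:secret@broker.example.org:5672/
```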

View File

@@ -24,3 +24,5 @@
- (ListOpt) DEPRECATED: A list of v2.1 API extensions to never load. Specify the extension aliases here. This option will be removed in the near future. After that point you have to run all of the API.
* - ``extensions_whitelist`` =
- (ListOpt) DEPRECATED: If the list is not empty then a v2.1 API extension will only be loaded if it exists in this list. Specify the extension aliases here. This option will be removed in the near future. After that point you have to run all of the API.
* - ``project_id_regex`` = ``None``
- (StrOpt) DEPRECATED: The validation regex for project_ids used in urls. This defaults to [0-9a-f\-]+ if not set, which matches normal uuids created by keystone.

View File

@@ -30,14 +30,14 @@
- (StrOpt) Prefix to prepend at the beginning of the path. Deprecated, use identity_uri.
* - ``auth_host`` = ``127.0.0.1``
- (StrOpt) Host providing the admin Identity API endpoint. Deprecated, use identity_uri.
* - ``auth_plugin`` = ``None``
- (StrOpt) Name of the plugin to load
* - ``auth_port`` = ``35357``
- (IntOpt) Port of the admin Identity API endpoint. Deprecated, use identity_uri.
* - ``auth_protocol`` = ``https``
- (StrOpt) Protocol of the admin Identity API endpoint (http or https). Deprecated, use identity_uri.
- (StrOpt) Protocol of the admin Identity API endpoint. Deprecated, use identity_uri.
* - ``auth_section`` = ``None``
- (StrOpt) Config Section from which to load plugin specific options
- (Opt) Config Section from which to load plugin specific options
* - ``auth_type`` = ``None``
- (Opt) Authentication type to load
* - ``auth_uri`` = ``None``
- (StrOpt) Complete public Identity API endpoint.
* - ``auth_version`` = ``None``
@@ -81,7 +81,7 @@
* - ``memcache_secret_key`` = ``None``
- (StrOpt) (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation.
* - ``memcache_security_strategy`` = ``None``
- (StrOpt) (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. Acceptable values are MAC or ENCRYPT. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization.
- (StrOpt) (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization.
* - ``memcache_use_advanced_pool`` = ``False``
- (BoolOpt) (Optional) Use the advanced (eventlet safe) memcached client pool. The advanced pool will only work under python 2.x.
* - ``region_name`` = ``None``
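The memcache security options above work together; a minimal sketch of an encrypted token cache in ``nova.conf`` (the ``[keystone_authtoken]`` section name follows typical keystonemiddleware usage, and the key is illustrative):

```ini
[keystone_authtoken]
# Encrypt and authenticate cached token data
memcache_security_strategy = ENCRYPT
# Mandatory once memcache_security_strategy is defined (illustrative key)
memcache_secret_key = 0penst4ck-s3cret
# Eventlet-safe pool; per the option help, only works under Python 2.x
memcache_use_advanced_pool = True
```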

View File

@@ -27,7 +27,7 @@
* - ``cert_manager`` = ``nova.cert.manager.CertManager``
- (StrOpt) Full class name for the Manager for cert
* - ``cert_topic`` = ``cert``
- (StrOpt) The topic cert nodes listen on
- (StrOpt) Determines the RPC topic that the cert nodes listen on. The default is 'cert', and for most deployments there is no need to ever change it. Possible values: Any string. * Services which consume this: ``nova-cert`` * Related options: None
* - ``crl_file`` = ``crl.pem``
- (StrOpt) Filename of root Certificate Revocation List
* - ``key_file`` = ``private/cakey.pem``
@@ -46,15 +46,3 @@
- (BoolOpt) Should we use a CA for each project?
* - ``user_cert_subject`` = ``/C=US/ST=California/O=OpenStack/OU=NovaDev/CN=%.16s-%.16s-%s``
- (StrOpt) Subject for certificate for users, %s for project, user, timestamp
* - **[ssl]**
-
* - ``ca_file`` = ``None``
- (StrOpt) CA certificate file to use to verify connecting clients.
* - ``cert_file`` = ``None``
- (StrOpt) Certificate file to use when starting the server securely.
* - ``ciphers`` = ``None``
- (StrOpt) Sets the list of available ciphers. The value should be a string in the OpenSSL cipher list format.
* - ``key_file`` = ``None``
- (StrOpt) Private key file to use when starting the server securely.
* - ``version`` = ``None``
- (StrOpt) SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions.

View File

@@ -0,0 +1,46 @@
..
Warning: Do not edit this file. It is automatically generated from the
software project's code and your changes will be overwritten.
The tool to generate this file lives in openstack-doc-tools repository.
Please make any changes needed in the code, then run the
autogenerate-config-doc tool from the openstack-doc-tools repository, or
ask for help on the documentation mailing list, IRC channel or meeting.
.. _nova-cache:
.. list-table:: Description of Cache configuration options
:header-rows: 1
:class: config-ref-table
* - Configuration option = Default value
- Description
* - **[cache]**
-
* - ``backend`` = ``dogpile.cache.null``
- (StrOpt) Dogpile.cache backend module. It is recommended that Memcache with pooling (oslo_cache.memcache_pool) or Redis (dogpile.cache.redis) be used in production deployments. Small workloads (single process) like devstack can use the dogpile.cache.memory backend.
* - ``backend_argument`` = ``[]``
- (MultiStrOpt) Arguments supplied to the backend module. Specify this option once per argument to be passed to the dogpile.cache backend. Example format: "<argname>:<value>".
* - ``config_prefix`` = ``cache.oslo``
- (StrOpt) Prefix for building the configuration dictionary for the cache region. This should not need to be changed unless there is another dogpile.cache region with the same configuration name.
* - ``debug_cache_backend`` = ``False``
- (BoolOpt) Extra debugging from the cache backend (cache keys, get/set/delete/etc calls). This is only really useful if you need to see the specific cache-backend get/set/delete calls with the keys/values. Typically this should be left set to false.
* - ``enabled`` = ``False``
- (BoolOpt) Global toggle for caching.
* - ``expiration_time`` = ``600``
- (IntOpt) Default TTL, in seconds, for any cached item in the dogpile.cache region. This applies to any cached method that doesn't have an explicit cache expiration time defined for it.
* - ``memcache_dead_retry`` = ``300``
- (IntOpt) Number of seconds memcached server is considered dead before it is tried again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
* - ``memcache_pool_connection_get_timeout`` = ``10``
- (IntOpt) Number of seconds that an operation will wait to get a memcache client connection.
* - ``memcache_pool_maxsize`` = ``10``
- (IntOpt) Max total number of open connections to every memcached server. (oslo_cache.memcache_pool backend only).
* - ``memcache_pool_unused_timeout`` = ``60``
- (IntOpt) Number of seconds a connection to memcached is held unused in the pool before it is closed. (oslo_cache.memcache_pool backend only).
* - ``memcache_servers`` = ``localhost:11211``
- (ListOpt) Memcache servers in the format of "host:port". (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
* - ``memcache_socket_timeout`` = ``3``
- (IntOpt) Timeout in seconds for every call to a server. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
* - ``proxies`` =
- (ListOpt) Proxy classes to import that will affect the way the dogpile.cache backend functions. See the dogpile.cache documentation on changing-backend-behavior.
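Taken together, the ``[cache]`` options above might be combined as follows to enable a pooled memcache backend (server addresses are illustrative):

```ini
[cache]
# Global toggle for caching
enabled = True
# Pooled memcache backend, recommended for production deployments
backend = oslo_cache.memcache_pool
memcache_servers = memcache1.example.org:11211,memcache2.example.org:11211
# Default TTL, in seconds, for cached items
expiration_time = 600
# Max open connections per memcached server
memcache_pool_maxsize = 10
```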

View File

@@ -19,38 +19,38 @@
* - **[cells]**
-
* - ``call_timeout`` = ``60``
- (IntOpt) Seconds to wait for response from a call to a cell.
- (IntOpt) Call timeout The cell messaging module waits for response(s) to be put into the eventlet queue. This option defines the number of seconds to wait for a response from a call to a cell. Possible values: * Time in seconds. Services which consume this: * nova-cells Related options: * None
* - ``capabilities`` = ``hypervisor=xenserver;kvm, os=linux;windows``
- (ListOpt) Key/Multi-value list with the capabilities of the cell
- (ListOpt) Cell capabilities List of arbitrary key=value pairs defining capabilities of the current cell to be sent to the parent cells. These capabilities are intended to be used in cells scheduler filters/weighers. Possible values: * key=value pairs list, for example: ``hypervisor=xenserver;kvm,os=linux;windows`` Services which consume this: * nova-cells Related options: * None
* - ``cell_type`` = ``compute``
- (StrOpt) Type of cell
- (StrOpt) Type of cell When cells feature is enabled the hosts in the OpenStack Compute cloud are partitioned into groups. Cells are configured as a tree. The top-level cell's cell_type must be set to ``api``. All other cells are defined as a ``compute cell`` by default. Possible values: * api: Cell type of top-level cell. * compute: Cell type of all child cells. (Default) Services which consume this: * nova-cells Related options: * compute_api_class: This option must be set to cells api driver for the top-level cell (nova.compute.cells_api.ComputeCellsAPI) * quota_driver: Disable quota checking for the child cells. (nova.quota.NoopQuotaDriver)
* - ``cells_config`` = ``None``
- (StrOpt) Configuration file from which to read cells configuration. If given, overrides reading cells from the database.
* - ``db_check_interval`` = ``60``
- (IntOpt) Interval, in seconds, for getting fresh cell information from the database.
* - ``driver`` = ``nova.cells.rpc_driver.CellsRPCDriver``
- (StrOpt) Cells communication driver to use
- (StrOpt) Cells communication driver Driver for cell<->cell communication via RPC. This is used to setup the RPC consumers as well as to send a message to another cell. 'nova.cells.rpc_driver.CellsRPCDriver' starts up 2 separate servers for handling inter-cell communication via RPC. Possible values: * 'nova.cells.rpc_driver.CellsRPCDriver' is the default driver * Otherwise it should be the full Python path to the class to be used Services which consume this: * nova-cells Related options: * None
* - ``enable`` = ``False``
- (BoolOpt) Enable cell functionality
- (BoolOpt) Enable cell functionality When this functionality is enabled, it lets you scale an OpenStack Compute cloud in a more distributed fashion without having to use complicated technologies like database and message queue clustering. Cells are configured as a tree. The top-level cell should have a host that runs a nova-api service, but no nova-compute services. Each child cell should run all of the typical nova-* services in a regular Compute cloud except for nova-api. You can think of cells as a normal Compute deployment in that each cell has its own database server and message queue broker. Possible values: * True: Enables the feature * False: Disables the feature Services which consume this: * nova-api * nova-cells * nova-compute Related options: * name: A unique cell name must be given when this functionality is enabled. * cell_type: Cell type should be defined for all cells.
* - ``instance_update_num_instances`` = ``1``
- (IntOpt) Number of instances to update per periodic task run
- (IntOpt) Instance update num instances On every run of the periodic task, the nova cells manager will attempt to sync ``instance_update_num_instances`` instances. When the manager gets the list of instances, it shuffles them so that multiple nova-cells services do not attempt to sync the same instances in lockstep. Possible values: * Positive integer number Services which consume this: * nova-cells Related options: * This value is used with the ``instance_updated_at_threshold`` value in a periodic task run.
* - ``instance_update_sync_database_limit`` = ``100``
- (IntOpt) Number of instances to pull from the database at one time for a sync. If there are more instances to update the results will be paged through
- (IntOpt) Instance update sync database limit Number of instances to pull from the database at one time for a sync. If there are more instances to update the results will be paged through. Possible values: * Number of instances. Services which consume this: * nova-cells Related options: * None
* - ``instance_updated_at_threshold`` = ``3600``
- (IntOpt) Number of seconds after an instance was updated or deleted to continue to update cells
- (IntOpt) Instance updated at threshold Number of seconds after an instance was updated or deleted to continue to update cells. This option lets the cells manager only attempt to sync instances that have been updated recently, i.e., a threshold of 3600 means to only update instances that were modified in the last hour. Possible values: * Threshold in seconds Services which consume this: * nova-cells Related options: * This value is used with the ``instance_update_num_instances`` value in a periodic task run.
* - ``manager`` = ``nova.cells.manager.CellsManager``
- (StrOpt) Manager for cells
- (StrOpt) Manager for cells The nova-cells manager class. This class defines RPC methods that the local cell may call. This class is NOT used for messages coming from other cells. That communication is driver-specific. Communication to other cells happens via the nova.cells.messaging module. The MessageRunner from that module will handle routing the message to the correct cell via the communication driver. Most methods below create 'targeted' (where we want to route a message to a specific cell) or 'broadcast' (where we want a message to go to multiple cells) messages. Scheduling requests get passed to the scheduler class. Possible values: * 'nova.cells.manager.CellsManager' is the only possible value for this option as of the Mitaka release Services which consume this: * nova-cells Related options: * None
* - ``max_hop_count`` = ``10``
- (IntOpt) Maximum number of hops for cells routing.
- (IntOpt) Maximum hop count When processing a targeted message, if the local cell is not the target, a route is defined between neighbouring cells, and the message is processed across the whole routing path. This option defines the maximum number of hops allowed before reaching the target. Possible values: * Positive integer value Services which consume this: * nova-cells Related options: * None
* - ``mute_child_interval`` = ``300``
- (IntOpt) Number of seconds after which a lack of capability and capacity updates signals the child cell is to be treated as a mute.
- (IntOpt) Mute child interval Number of seconds after which a lack of capability and capacity updates signals that the child cell is to be treated as mute. A muted child cell is then weighted so that it is strongly recommended to be skipped. Possible values: * Time in seconds. Services which consume this: * nova-cells Related options: * None
* - ``mute_weight_multiplier`` = ``-10000.0``
- (FloatOpt) Multiplier used to weigh mute children. (The value should be negative.)
- (FloatOpt) Mute weight multiplier Multiplier used to weigh mute children. Mute children cells are recommended to be skipped, so their weight is multiplied by this negative value. Possible values: * A negative numeric value Services which consume this: * nova-cells Related options: * None
* - ``name`` = ``nova``
- (StrOpt) Name of this cell
- (StrOpt) Name of the current cell This value must be unique for each cell. The name of a cell is used as its id; leaving this option unset or setting the same name for two or more cells may cause unexpected behaviour. Possible values: * Unique name string Services which consume this: * nova-cells Related options: * enable: This option is meaningful only when the cells service is enabled
* - ``offset_weight_multiplier`` = ``1.0``
- (FloatOpt) Multiplier used to weigh offset weigher.
- (FloatOpt) Offset weight multiplier Multiplier used to weigh offset weigher. Cells with higher weight_offsets in the DB will be preferred. The weight_offset is a property of a cell stored in the database. It can be used by a deployer to have scheduling decisions favor or disfavor cells based on the setting. Possible values: * Numeric multiplier Services which consume this: * nova-cells Related options: * None
* - ``reserve_percent`` = ``10.0``
- (FloatOpt) Percentage of cell capacity to hold in reserve. Affects both memory and disk utilization
- (FloatOpt) Reserve percentage Percentage of cell capacity to hold in reserve, so the minimum amount of free resource is considered to be: min_free = total * (reserve_percent / 100.0) This option affects both memory and disk utilization. The primary purpose of this reserve is to ensure some space is available for users who want to resize their instance to be larger. Note that currently once the capacity expands into this reserve space this option is ignored. Possible values: * Float percentage value Services which consume this: * nova-cells Related options: * None
* - ``topic`` = ``cells``
- (StrOpt) The topic cells nodes listen on
- (StrOpt) Topic This is the message queue topic that cells nodes listen on. It is used when the cells service is started up to configure the queue, and whenever an RPC call to the scheduler is made. Possible values: * cells: This is the recommended and the default value. Services which consume this: * nova-cells Related options: * None
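Following the option descriptions above, a sketch of a top-level API cell in this (cells v1) scheme might look like the following; the cell name is illustrative:

```ini
# Top-level cell of the tree: runs nova-api, no nova-compute
[DEFAULT]
# Per the cell_type description, the top-level cell needs the cells API driver
compute_api_class = nova.compute.cells_api.ComputeCellsAPI

[cells]
enable = True
# Unique cell name (illustrative)
name = api-cell
cell_type = api
```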

View File

@@ -31,7 +31,7 @@
* - ``host`` = ``localhost``
- (StrOpt) Name of this node. This can be an opaque identifier. It is not necessarily a hostname, FQDN, or IP address. However, the node name must be valid within an AMQP key, and if using ZeroMQ, a valid hostname, FQDN, or IP address
* - ``memcached_servers`` = ``None``
- (ListOpt) Memcached servers or None for in process cache.
- (_DeprecatedListOpt) DEPRECATED: Memcached servers or None for in process cache. "memcached_servers" opt is deprecated in Mitaka. In Newton release oslo.cache config options should be used as this option will be removed. Please add a [cache] group in your nova.conf file and add "enable" and "memcache_servers" option in this section.
* - ``my_ip`` = ``10.0.0.1``
- (StrOpt) IP address of this host
* - ``notify_api_faults`` = ``False``

View File

@@ -21,7 +21,7 @@
* - ``compute_available_monitors`` = ``None``
- (MultiStrOpt) Monitor classes available to the compute which may be specified more than once. This option is DEPRECATED and no longer used. Use setuptools entry points to list available monitor plugins.
* - ``compute_driver`` = ``None``
- (StrOpt) Driver to use for controlling virtualization. Options include: libvirt.LibvirtDriver, xenapi.XenAPIDriver, fake.FakeDriver, ironic.IronicDriver, vmwareapi.VMwareVCDriver, hyperv.HyperVDriver
- (StrOpt) Defines which driver to use for controlling virtualization. Possible values: * ``libvirt.LibvirtDriver`` * ``xenapi.XenAPIDriver`` * ``fake.FakeDriver`` * ``ironic.IronicDriver`` * ``vmwareapi.VMwareVCDriver`` * ``hyperv.HyperVDriver`` Services which consume this: * ``nova-compute`` Interdependencies to other options: * None
* - ``compute_manager`` = ``nova.compute.manager.ComputeManager``
- (StrOpt) Full class name for the Manager for compute
* - ``compute_monitors`` =
@@ -63,7 +63,7 @@
* - ``reboot_timeout`` = ``0``
- (IntOpt) Automatically hard reboot an instance if it has been stuck in a rebooting state longer than N seconds. Set to 0 to disable.
* - ``reclaim_instance_interval`` = ``0``
- (IntOpt) Interval in seconds for reclaiming deleted instances
- (IntOpt) Interval in seconds for reclaiming deleted instances. It takes effect only when value is greater than 0.
* - ``rescue_timeout`` = ``0``
- (IntOpt) Automatically unrescue an instance after N seconds. Set to 0 to disable.
* - ``resize_confirm_window`` = ``0``
@@ -87,6 +87,6 @@
* - ``update_resources_interval`` = ``0``
- (IntOpt) Interval in seconds for updating compute resources. A number less than 0 means to disable the task completely. Leaving this at the default of 0 will cause this to run at the default periodic interval. Setting it to any positive value will cause it to run at approximately that number of seconds.
* - ``vif_plugging_is_fatal`` = ``True``
- (BoolOpt) Fail instance boot if vif plugging fails
- (BoolOpt) Determine if instance should boot or fail on VIF plugging timeout. Nova sends a port update to Neutron after an instance has been scheduled, providing Neutron with the necessary information to finish setup of the port. Once completed, Neutron notifies Nova that it has finished setting up the port, at which point Nova resumes the boot of the instance since network connectivity is now supposed to be present. A timeout will occur if the reply is not received after a given interval. This option determines what Nova does when the VIF plugging timeout event happens. When enabled, the instance will error out. When disabled, the instance will continue to boot on the assumption that the port is ready. Possible values: * True: Instances should fail after VIF plugging timeout * False: Instances should continue booting after VIF plugging timeout Services which consume this: * ``nova-compute`` Interdependencies to other options: * None
* - ``vif_plugging_timeout`` = ``300``
- (IntOpt) Number of seconds to wait for neutron vif plugging events to arrive before continuing or failing (see vif_plugging_is_fatal). If this is set to zero and vif_plugging_is_fatal is False, events should not be expected to arrive at all.
- (IntOpt) Timeout for Neutron VIF plugging event message arrival. Number of seconds to wait for Neutron vif plugging events to arrive before continuing or failing (see 'vif_plugging_is_fatal'). If this is set to zero and 'vif_plugging_is_fatal' is False, events should not be expected to arrive at all. Possible values: * A time interval in seconds Services which consume this: * ``nova-compute`` Interdependencies to other options: * None
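The two VIF-plugging options above act as a pair; a minimal sketch of the strict (fail-fast) behaviour described in the help text:

```ini
[DEFAULT]
# Error out the instance if Neutron does not confirm VIF plugging...
vif_plugging_is_fatal = True
# ...within this many seconds (300 is the default)
vif_plugging_timeout = 300
```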

View File

@@ -1,54 +0,0 @@
..
Warning: Do not edit this file. It is automatically generated from the
software project's code and your changes will be overwritten.
The tool to generate this file lives in openstack-doc-tools repository.
Please make any changes needed in the code, then run the
autogenerate-config-doc tool from the openstack-doc-tools repository, or
ask for help on the documentation mailing list, IRC channel or meeting.
.. _nova-ec2:
.. list-table:: Description of EC2 configuration options
:header-rows: 1
:class: config-ref-table
* - Configuration option = Default value
- Description
* - **[DEFAULT]**
-
* - ``ec2_dmz_host`` = ``$my_ip``
- (StrOpt) The internal IP address of the EC2 API server
* - ``ec2_host`` = ``$my_ip``
- (StrOpt) The IP address of the EC2 API server
* - ``ec2_listen`` = ``0.0.0.0``
- (StrOpt) The IP address on which the EC2 API will listen.
* - ``ec2_listen_port`` = ``8773``
- (IntOpt) The port on which the EC2 API will listen.
* - ``ec2_path`` = ``/``
- (StrOpt) The path prefix used to call the ec2 API server
* - ``ec2_port`` = ``8773``
- (IntOpt) The port of the EC2 API server
* - ``ec2_private_dns_show_ip`` = ``False``
- (BoolOpt) Return the IP address as private dns hostname in describe instances
* - ``ec2_scheme`` = ``http``
- (StrOpt) The protocol to use when connecting to the EC2 API server
* - ``ec2_strict_validation`` = ``True``
- (BoolOpt) Validate security group names according to EC2 specification
* - ``ec2_timestamp_expiry`` = ``300``
- (IntOpt) Time in seconds before ec2 timestamp expires
* - ``ec2_workers`` = ``None``
- (IntOpt) Number of workers for EC2 API service. The default will be equal to the number of CPUs available.
* - ``keystone_ec2_insecure`` = ``False``
- (BoolOpt) Disable SSL certificate verification.
* - ``keystone_ec2_url`` = ``http://localhost:5000/v2.0/ec2tokens``
- (StrOpt) URL to get token from ec2 request.
* - ``lockout_attempts`` = ``5``
- (IntOpt) Number of failed auths before lockout.
* - ``lockout_minutes`` = ``15``
- (IntOpt) Number of minutes to lockout if triggered.
* - ``lockout_window`` = ``15``
- (IntOpt) Number of minutes for lockout window.
* - ``region_list`` =
- (ListOpt) List of region=fqdn pairs separated by commas

View File

@@ -27,15 +27,17 @@
* - ``api_insecure`` = ``False``
- (BoolOpt) Allow to perform insecure SSL (https) requests to glance
* - ``api_servers`` = ``None``
- (ListOpt) A list of the glance api servers available to nova. Prefix with https:// for ssl-based glance api servers. ([hostname|ip]:port)
- (ListOpt) A list of the glance API server endpoints available to nova. These should be fully qualified URLs of the form "scheme://hostname:port[/path]" (e.g. "http://10.0.1.0:9292" or "https://my.glance.server/image")
* - ``host`` = ``$my_ip``
- (StrOpt) Default glance hostname or IP address
- (StrOpt) Glance server hostname or IP address
* - ``num_retries`` = ``0``
- (IntOpt) Number of retries when uploading / downloading an image to / from glance.
* - ``port`` = ``9292``
- (IntOpt) Default glance port
- (IntOpt) Glance server port
* - ``protocol`` = ``http``
- (StrOpt) Default protocol to use when connecting to glance. Set to https for SSL.
- (StrOpt) Protocol to use when connecting to glance. Set to https for SSL.
* - ``verify_glance_signatures`` = ``False``
- (BoolOpt) Require Nova to perform signature verification on each image downloaded from Glance.
* - **[image_file_url]**
-
* - ``filesystems`` =

View File

@@ -19,16 +19,16 @@
* - **[DEFAULT]**
-
* - ``default_ephemeral_format`` = ``None``
- (StrOpt) The default format an ephemeral_volume will be formatted with on creation.
- (StrOpt) The default format an ephemeral_volume will be formatted with on creation. Possible values: * ``ext2`` * ``ext3`` * ``ext4`` * ``xfs`` * ``ntfs`` (only for Windows guests) Services which consume this: * ``nova-compute`` Interdependencies to other options: * None
* - ``force_raw_images`` = ``True``
- (BoolOpt) Force backing images to raw format
* - ``preallocate_images`` = ``none``
- (StrOpt) VM image preallocation mode: "none" => no storage provisioning is done up front, "space" => storage is fully allocated at instance start
- (StrOpt) The image preallocation mode to use. Image preallocation allows storage for instance images to be allocated up front when the instance is initially provisioned. This ensures immediate feedback is given if enough space isn't available. In addition, it should significantly improve performance on writes to new blocks and may even improve I/O performance to prewritten blocks due to reduced fragmentation. Possible values: * "none" => no storage provisioning is done up front * "space" => storage is fully allocated at instance start Services which consume this: * ``nova-compute`` Interdependencies to other options: * None
* - ``timeout_nbd`` = ``10``
- (IntOpt) Amount of time, in seconds, to wait for NBD device start up.
* - ``use_cow_images`` = ``True``
- (BoolOpt) Whether to use cow images
- (BoolOpt) Enable use of copy-on-write (cow) images. QEMU/KVM allow the use of qcow2 as backing files. By disabling this, backing files will not be used. Possible values: * True: Enable use of cow images * False: Disable use of cow images Services which consume this: * ``nova-compute`` Interdependencies to other options: * None
* - ``vcpu_pin_set`` = ``None``
- (StrOpt) Defines which pcpus that instance vcpus can use. For example, "4-12,^8,15"
- (StrOpt) Defines which physical CPUs (pCPUs) can be used by instance virtual CPUs (vCPUs). Possible values: * A comma-separated list of physical CPU numbers that virtual CPUs can be allocated to by default. Each element should be either a single CPU number, a range of CPU numbers, or a caret followed by a CPU number to be excluded from a previous range. For example: vcpu_pin_set = "4-12,^8,15" Services which consume this: * ``nova-scheduler`` * ``nova-compute`` Interdependencies to other options: * None
* - ``virt_mkfs`` = ``[]``
- (MultiStrOpt) Name of the mkfs commands for ephemeral device. The format is <os_type>=<mkfs command>
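A sketch combining the image and CPU-pinning options described above (values are illustrative, taken from the examples in the option help):

```ini
[DEFAULT]
# Instance vCPUs may use pCPUs 4-12 (excluding 8) and 15
vcpu_pin_set = 4-12,^8,15
# Fully allocate instance storage up front for immediate feedback
preallocate_images = space
# qcow2 backing files for QEMU/KVM guests (the default)
use_cow_images = True
# Format for ephemeral volumes on creation
default_ephemeral_format = ext4
```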

View File

@@ -35,6 +35,6 @@
* - ``api_retry_interval`` = ``2``
- (IntOpt) How often to retry in seconds when a request does conflict
* - ``api_version`` = ``1``
- (IntOpt) Version of Ironic API service endpoint.
- (IntOpt) Version of Ironic API service endpoint. DEPRECATED: Setting the API version is not possible anymore.
* - ``client_log_level`` = ``None``
- (StrOpt) Log level override for ironicclient. Set this in order to override the global "default_log_levels", "verbose", and "debug" settings. DEPRECATED: use standard logging configuration.

View File

@@ -65,11 +65,13 @@
* - ``iscsi_iface`` = ``None``
- (StrOpt) The iSCSI transport iface to use to connect to target in case offload support is desired. Default format is of the form <transport_name>.<hwaddress> where <transport_name> is one of (be2iscsi, bnx2i, cxgb3i, cxgb4i, qla4xxx, ocs) and <hwaddress> is the MAC address of the interface and can be generated via the iscsiadm -m iface command. Do not confuse the iscsi_iface parameter to be provided here with the actual transport name.
* - ``iscsi_use_multipath`` = ``False``
- (BoolOpt) Use multipath connection of the iSCSI volume
- (BoolOpt) Use multipath connection of the iSCSI or FC volume
* - ``iser_use_multipath`` = ``False``
- (BoolOpt) Use multipath connection of the iSER volume
* - ``mem_stats_period_seconds`` = ``10``
- (IntOpt) Number of seconds in the memory usage statistics period. A zero or negative value disables memory usage statistics.
* - ``realtime_scheduler_priority`` = ``1``
- (IntOpt) In a realtime host context, vCPUs of the guest will run at this scheduling priority. The priority depends on the host kernel (usually 1-99)
* - ``remove_unused_kernels`` = ``True``
- (BoolOpt) DEPRECATED: Should unused kernel images be removed? This is only safe to enable if all compute nodes have been updated to support this option (running Grizzly or newer level compute). This will be the default behavior in the 13.0.0 release.
* - ``remove_unused_resized_minimum_age_seconds`` = ``3600``
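For illustration only, a ``nova.conf`` fragment exercising a few of the options above might look like this (section placement and values are assumptions, not taken from the generated tables):

```ini
# Hypothetical fragment: enable multipath for iSCSI/FC volume
# connections and keep the default 10-second memory statistics period.
[libvirt]
iscsi_use_multipath = True
mem_stats_period_seconds = 10
```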

View File

@ -36,7 +36,11 @@
- (IntOpt) Number of incremental steps to reach max downtime value. Will be rounded up to a minimum of 3 steps
* - ``live_migration_flag`` = ``VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_LIVE, VIR_MIGRATE_TUNNELLED``
- (StrOpt) Migration flags to be set for live migration
* - ``live_migration_inbound_addr`` = ``None``
- (StrOpt) Live migration target IP or hostname (if this option is set to None, the hostname of the migration target compute node will be used)
* - ``live_migration_progress_timeout`` = ``150``
- (IntOpt) Time to wait, in seconds, for migration to make forward progress in transferring data before aborting the operation. Set to 0 to disable timeouts.
* - ``live_migration_tunnelled`` = ``None``
- (BoolOpt) Whether to use tunnelled migration, where migration data is transported over the libvirtd connection. If True, we use the VIR_MIGRATE_TUNNELLED migration flag, avoiding the need to configure the network to allow direct hypervisor to hypervisor communication. If False, use the native transport. If not set, Nova will choose a sensible default based on, for example, the availability of native encryption support in the hypervisor.
* - ``live_migration_uri`` = ``None``
- (StrOpt) Override the default libvirt live migration target URI (which is dependent on virt_type) (any included "%s" is replaced with the migration target hostname)
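As a hedged sketch of how the live migration options above fit together (the address below is a placeholder, not a documented value):

```ini
# Hypothetical [libvirt] fragment for live migration tuning.
[libvirt]
# Placeholder address; use the real migration network address.
live_migration_inbound_addr = 192.0.2.10
# Tunnel migration traffic over the libvirtd connection.
live_migration_tunnelled = True
# Abort if no forward progress is made for 150 seconds.
live_migration_progress_timeout = 150
```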

View File

@ -19,9 +19,9 @@
* - **[DEFAULT]**
-
* - ``debug`` = ``False``
- (BoolOpt) If set to true, the logging level will be set to DEBUG instead of the default INFO level.
* - ``default_log_levels`` = ``amqp=WARN, amqplib=WARN, boto=WARN, qpid=WARN, sqlalchemy=WARN, suds=INFO, oslo.messaging=INFO, iso8601=WARN, requests.packages.urllib3.connectionpool=WARN, urllib3.connectionpool=WARN, websocket=WARN, requests.packages.urllib3.util.retry=WARN, urllib3.util.retry=WARN, keystonemiddleware=WARN, routes.middleware=WARN, stevedore=WARN, taskflow=WARN, keystoneauth=WARN, oslo.cache=INFO, dogpile.core.dogpile=INFO``
- (ListOpt) List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set.
* - ``fatal_deprecations`` = ``False``
- (BoolOpt) Enables or disables fatal status of deprecations.
* - ``fatal_exception_format_errors`` = ``False``
@ -31,25 +31,23 @@
* - ``instance_uuid_format`` = ``"[instance: %(uuid)s] "``
- (StrOpt) The format for an instance UUID that is passed with the log message.
* - ``log_config_append`` = ``None``
- (StrOpt) The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, logging_context_format_string).
* - ``log_date_format`` = ``%Y-%m-%d %H:%M:%S``
- (StrOpt) Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set.
* - ``log_dir`` = ``None``
- (StrOpt) (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set.
* - ``log_file`` = ``None``
- (StrOpt) (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set.
* - ``logging_context_format_string`` = ``%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s``
- (StrOpt) Format string to use for log messages with context.
* - ``logging_debug_format_suffix`` = ``%(funcName)s %(pathname)s:%(lineno)d``
- (StrOpt) Additional data to append to log message when logging level for the message is DEBUG.
* - ``logging_default_format_string`` = ``%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s``
- (StrOpt) Format string to use for log messages when context is undefined.
* - ``logging_exception_prefix`` = ``%(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s``
- (StrOpt) Prefix each line of exception output with this format.
* - ``logging_user_identity_format`` = ``%(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s``
- (StrOpt) Defines the format string for %(user_identity)s that is used in logging_context_format_string.
* - ``publish_errors`` = ``False``
- (BoolOpt) Enables or disables publication of error events.
* - ``syslog_log_facility`` = ``LOG_USER``
@ -59,8 +57,8 @@
* - ``use_syslog`` = ``False``
- (BoolOpt) Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set.
* - ``use_syslog_rfc_format`` = ``True``
- (BoolOpt) Enables or disables syslog rfc5424 format for logging. If enabled, prefixes the MSG part of the syslog message with APP-NAME (RFC5424). This option is ignored if log_config_append is set.
* - ``verbose`` = ``True``
- (BoolOpt) If set to false, the logging level will be set to WARNING instead of the default INFO level.
* - ``watch_log_file`` = ``False``
- (BoolOpt) Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set.
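A minimal logging sketch using the options above (the log directory path is a placeholder; other values are the documented defaults):

```ini
# Hypothetical logging fragment for nova.conf.
[DEFAULT]
debug = False
verbose = True
log_dir = /var/log/nova
log_date_format = %Y-%m-%d %H:%M:%S
```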

View File

@ -67,7 +67,7 @@
* - ``force_dhcp_release`` = ``True``
- (BoolOpt) If True, send a dhcp release on instance termination
* - ``force_snat_range`` = ``[]``
- (MultiStrOpt) Traffic to this range will always be snatted to the fallback IP, even if it would normally be bridged out of the node. Can be specified multiple times.
* - ``forward_bridge_interface`` = ``['all']``
- (MultiStrOpt) An interface that bridges can forward to. If this is set to all then all traffic will be forwarded. Can be specified multiple times.
* - ``gateway`` = ``None``
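Since ``force_snat_range`` is multi-valued, it may be repeated; a hedged sketch (the CIDRs are placeholders):

```ini
# Hypothetical nova-network fragment; CIDRs are placeholders.
[DEFAULT]
force_dhcp_release = True
force_snat_range = 203.0.113.0/24
force_snat_range = 198.51.100.0/24
```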

View File

@ -22,24 +22,10 @@
- (StrOpt) Default tenant id when creating neutron networks
* - **[neutron]**
-
* - ``auth_section`` = ``None``
- (Opt) Config Section from which to load plugin specific options
* - ``auth_type`` = ``None``
- (Opt) Authentication type to load
* - ``cafile`` = ``None``
- (StrOpt) PEM encoded Certificate Authority to use when verifying HTTPs connections.
* - ``certfile`` = ``None``
@ -53,7 +39,7 @@
* - ``metadata_proxy_shared_secret`` =
- (StrOpt) Shared secret to validate proxies Neutron metadata requests
* - ``ovs_bridge`` = ``br-int``
- (StrOpt) Default OVS bridge name to use if not specified by Neutron
* - ``region_name`` = ``None``
- (StrOpt) Region name for connecting to neutron in admin context
* - ``service_metadata_proxy`` = ``False``
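The new ``auth_type``/``auth_section`` pair replaces the deprecated ``admin_*`` options; a hedged sketch (section and region names are placeholders):

```ini
# Hypothetical [neutron] fragment using the keystoneauth-style options.
[neutron]
auth_type = password
auth_section = keystone_authtoken
region_name = RegionOne
ovs_bridge = br-int
```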

View File

@ -19,6 +19,6 @@
* - **[DEFAULT]**
-
* - ``pci_alias`` = ``[]``
- (MultiStrOpt) An alias for a PCI passthrough device requirement. This allows users to specify the alias in the extra_spec for a flavor, without needing to repeat all the PCI property requirements. For example: pci_alias = { "name": "QuickAssist", "product_id": "0443", "vendor_id": "8086", "device_type": "type-PCI" } defines an alias for the Intel QuickAssist card. (multi valued).
* - ``pci_passthrough_whitelist`` = ``[]``
- (MultiStrOpt) White list of PCI devices available to VMs. For example: pci_passthrough_whitelist = [{"vendor_id": "8086", "product_id": "0443"}]
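Putting the two examples from the descriptions above into a configuration file:

```ini
# Values adapted from the option descriptions above.
[DEFAULT]
pci_alias = { "name": "QuickAssist", "product_id": "0443", "vendor_id": "8086", "device_type": "type-PCI" }
pci_passthrough_whitelist = [{"vendor_id": "8086", "product_id": "0443"}]
```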

View File

@ -32,15 +32,17 @@
- (IntOpt) The maximum number of items returned in a single response from a collection resource
* - ``password_length`` = ``12``
- (IntOpt) Length of generated instance admin passwords
* - ``reservation_expire`` = ``86400``
- (IntOpt) Number of seconds until a reservation expires
* - ``resize_fs_using_block_device`` = ``False``
- (BoolOpt) Attempt to resize the filesystem by accessing the image over a block device. This is done by the host and may not be necessary if the image contains a recent version of cloud-init. Possible mechanisms require the nbd driver (for qcow and raw), or loop (for raw).
* - ``until_refresh`` = ``0``
- (IntOpt) Count of reservations until usage is refreshed. This defaults to 0 (off) to avoid additional load, but it is useful to turn on to help keep quota usage up to date and reduce the impact of out-of-sync usage issues.
* - **[oslo_policy]**
-
* - ``policy_default_rule`` = ``default``
- (StrOpt) Default rule. Enforced when a requested rule is not found.
* - ``policy_dirs`` = ``['policy.d']``
- (MultiStrOpt) Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored.
* - ``policy_file`` = ``policy.json``
- (StrOpt) The JSON file that defines policies.
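The policy options now live under ``[oslo_policy]``; a sketch using the documented defaults:

```ini
# Hypothetical fragment; values shown are the documented defaults.
[oslo_policy]
policy_file = policy.json
policy_default_rule = default
policy_dirs = policy.d
```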

View File

@ -1,48 +0,0 @@
..
Warning: Do not edit this file. It is automatically generated from the
software project's code and your changes will be overwritten.
The tool to generate this file lives in openstack-doc-tools repository.
Please make any changes needed in the code, then run the
autogenerate-config-doc tool from the openstack-doc-tools repository, or
ask for help on the documentation mailing list, IRC channel or meeting.
.. _nova-qpid:
.. list-table:: Description of Qpid configuration options
:header-rows: 1
:class: config-ref-table
* - Configuration option = Default value
- Description
* - **[oslo_messaging_qpid]**
-
* - ``amqp_auto_delete`` = ``False``
- (BoolOpt) Auto-delete queues in AMQP.
* - ``amqp_durable_queues`` = ``False``
- (BoolOpt) Use durable queues in AMQP.
* - ``qpid_heartbeat`` = ``60``
- (IntOpt) Seconds between connection keepalive heartbeats.
* - ``qpid_hostname`` = ``localhost``
- (StrOpt) Qpid broker hostname.
* - ``qpid_hosts`` = ``$qpid_hostname:$qpid_port``
- (ListOpt) Qpid HA cluster host:port pairs.
* - ``qpid_password`` =
- (StrOpt) Password for Qpid connection.
* - ``qpid_port`` = ``5672``
- (IntOpt) Qpid broker port.
* - ``qpid_protocol`` = ``tcp``
- (StrOpt) Transport to use, either 'tcp' or 'ssl'.
* - ``qpid_receiver_capacity`` = ``1``
- (IntOpt) The number of prefetched messages held by receiver.
* - ``qpid_sasl_mechanisms`` =
- (StrOpt) Space separated list of SASL mechanisms to use for auth.
* - ``qpid_tcp_nodelay`` = ``True``
- (BoolOpt) Whether to disable the Nagle algorithm.
* - ``qpid_topology_version`` = ``1``
- (IntOpt) The qpid topology version to use. Version 1 is what was originally used by impl_qpid. Version 2 includes some backwards-incompatible changes that allow broker federation to work. Users should update to version 2 when they are able to take everything down, as it requires a clean break.
* - ``qpid_username`` =
- (StrOpt) Username for Qpid connection.
* - ``send_single_reply`` = ``False``
- (BoolOpt) Send a single AMQP reply to call message. The current behaviour since oslo-incubator is to send two AMQP replies - first one with the payload, a second one to ensure the other side has finished sending the payload. We are going to remove it in the N release, but we must keep backward compatibility at the same time. This option provides such compatibility - it defaults to False in Liberty and can be turned on for early adopters with new installations or for testing. Please note that this option will be removed in the Mitaka release.

View File

@ -57,4 +57,4 @@
* - **[cells]**
-
* - ``bandwidth_update_interval`` = ``600``
- (IntOpt) Bandwidth update interval. Seconds between bandwidth usage cache updates for cells. Possible values: * Time in seconds. Services which consume this: * nova-compute. Related options: * None

View File

@ -28,10 +28,14 @@
- (IntOpt) How often times during the heartbeat_timeout_threshold we check the heartbeat.
* - ``heartbeat_timeout_threshold`` = ``60``
- (IntOpt) Number of seconds after which the Rabbit broker is considered down if heartbeat's keep-alive fails (0 disable the heartbeat). EXPERIMENTAL
* - ``kombu_compression`` = ``None``
- (StrOpt) EXPERIMENTAL: Possible values are: gzip, bz2. If not set, compression will not be used. This option may not be available in future versions.
* - ``kombu_failover_strategy`` = ``round-robin``
- (StrOpt) Determines how the next RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config.
* - ``kombu_missing_consumer_retry_timeout`` = ``60``
- (IntOpt) How long to wait for a missing client before abandoning sending it its replies. This value should not be longer than rpc_response_timeout.
* - ``kombu_reconnect_delay`` = ``1.0``
- (FloatOpt) How long to wait before reconnecting in response to an AMQP consumer cancel notification.
* - ``kombu_reconnect_timeout`` = ``60``
- (IntOpt) How long to wait before considering a reconnect attempt to have failed. This value should not be longer than rpc_response_timeout.
* - ``kombu_ssl_ca_certs`` =
- (StrOpt) SSL certification authority file (valid only if SSL enabled).
* - ``kombu_ssl_certfile`` =
@ -46,6 +50,8 @@
- (StrOpt) The RabbitMQ broker address where a single node is used.
* - ``rabbit_hosts`` = ``$rabbit_host:$rabbit_port``
- (ListOpt) RabbitMQ HA cluster host:port pairs.
* - ``rabbit_interval_max`` = ``30``
- (IntOpt) Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
* - ``rabbit_login_method`` = ``AMQPLAIN``
- (StrOpt) The RabbitMQ login method.
* - ``rabbit_max_retries`` = ``0``
@ -53,16 +59,18 @@
* - ``rabbit_password`` = ``guest``
- (StrOpt) The RabbitMQ password.
* - ``rabbit_port`` = ``5672``
- (PortOpt) The RabbitMQ broker port where a single node is used.
* - ``rabbit_qos_prefetch_count`` = ``0``
- (IntOpt) Specifies the number of messages to prefetch. Setting to zero allows unlimited messages.
* - ``rabbit_retry_backoff`` = ``2``
- (IntOpt) How long to backoff for between retries when connecting to RabbitMQ.
* - ``rabbit_retry_interval`` = ``1``
- (IntOpt) How frequently to retry connecting with RabbitMQ.
* - ``rabbit_transient_queues_ttl`` = ``600``
- (IntOpt) Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues.
* - ``rabbit_use_ssl`` = ``False``
- (BoolOpt) Connect over SSL for RabbitMQ.
* - ``rabbit_userid`` = ``guest``
- (StrOpt) The RabbitMQ userid.
* - ``rabbit_virtual_host`` = ``/``
- (StrOpt) The RabbitMQ virtual host.
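A hedged RabbitMQ sketch using the options above (host names and credentials are placeholders):

```ini
# Hypothetical RabbitMQ fragment; hosts and credentials are placeholders.
[oslo_messaging_rabbit]
rabbit_hosts = rabbit1.example.org:5672,rabbit2.example.org:5672
rabbit_userid = nova
rabbit_password = RABBIT_PASS
rabbit_use_ssl = False
rabbit_interval_max = 30
```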

View File

@ -16,17 +16,21 @@
* - Configuration option = Default value
- Description
* - **[matchmaker_redis]**
-
* - ``check_timeout`` = ``20000``
- (IntOpt) Time in ms to wait before the transaction is killed.
* - ``host`` = ``127.0.0.1``
- (StrOpt) Host to locate redis.
* - ``password`` =
- (StrOpt) Password for Redis server (optional).
* - ``port`` = ``6379``
- (PortOpt) Use this port to connect to redis host.
* - ``sentinel_group_name`` = ``oslo-messaging-zeromq``
- (StrOpt) Redis replica set name.
* - ``sentinel_hosts`` =
- (ListOpt) List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ]
* - ``socket_timeout`` = ``1000``
- (IntOpt) Timeout in ms on blocking socket operations
* - ``wait_timeout`` = ``500``
- (IntOpt) Time in ms to wait between connection attempts.
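A sketch of the Redis matchmaker options above (the sentinel addresses are placeholders; other values are the documented defaults):

```ini
# Hypothetical matchmaker fragment; sentinel addresses are placeholders.
[matchmaker_redis]
host = 127.0.0.1
port = 6379
sentinel_group_name = oslo-messaging-zeromq
sentinel_hosts = 192.0.2.11:26379,192.0.2.12:26379
wait_timeout = 500
```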

View File

@ -18,10 +18,12 @@
- Description
* - **[DEFAULT]**
-
* - ``notification_format`` = ``both``
- (StrOpt) Specifies which notification format shall be used by nova.
* - ``rpc_backend`` = ``rabbit``
- (StrOpt) The messaging driver to use, defaults to rabbit. Other drivers include amqp and zmq.
* - ``rpc_cast_timeout`` = ``-1``
- (IntOpt) Seconds to wait before a cast expires (TTL). The default value of -1 specifies an infinite linger period. The value of 0 specifies no linger period. Pending messages shall be discarded immediately when the socket is closed. Only supported by impl_zmq.
* - ``rpc_conn_pool_size`` = ``30``
- (IntOpt) Size of RPC connection pool.
* - ``rpc_poll_timeout`` = ``1``
@ -31,7 +33,7 @@
* - **[cells]**
-
* - ``rpc_driver_queue_base`` = ``cells.intercell``
- (StrOpt) RPC driver queue base. When sending a message to another cell by JSON-ifying the message and making an RPC cast to 'process_message', a base queue is used. This option defines the base queue name to be used when communicating between cells. Various topics by message type will be appended to this. Possible values: * The base queue name to be used when communicating between cells. Services which consume this: * nova-cells. Related options: * None
* - **[oslo_concurrency]**
-
* - ``disable_process_locking`` = ``False``
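A minimal RPC sketch using the options above (values shown are the documented defaults):

```ini
# Hypothetical RPC fragment; values shown are the documented defaults.
[DEFAULT]
rpc_backend = rabbit
rpc_conn_pool_size = 30
notification_format = both
```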

View File

@ -18,8 +18,6 @@
- Description
* - **[DEFAULT]**
-
* - ``image_decryption_dir`` = ``/tmp``
- (StrOpt) Parent directory for tempdir used for image decryption
* - ``s3_access_key`` = ``notchecked``
@ -28,10 +26,6 @@
- (BoolOpt) Whether to affix the tenant id to the access key when downloading from S3
* - ``s3_host`` = ``$my_ip``
- (StrOpt) Hostname or IP for OpenStack to use when accessing the S3 api
* - ``s3_port`` = ``3333``
- (IntOpt) Port used when accessing the S3 api
* - ``s3_secret_key`` = ``notchecked``

View File

@ -19,82 +19,88 @@
* - **[DEFAULT]**
-
* - ``aggregate_image_properties_isolation_namespace`` = ``None``
- (StrOpt) Images and hosts can be configured so that certain images can only be scheduled to hosts in a particular aggregate. This is done with metadata values set on the host aggregate that are identified by beginning with the value of this option. If the host is part of an aggregate with such a metadata key, the image in the request spec must have the value of that metadata in its properties in order for the scheduler to consider the host as acceptable. Valid values are strings. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'aggregate_image_properties_isolation' filter is enabled. * Services that use this: ``nova-scheduler`` * Related options: aggregate_image_properties_isolation_separator
* - ``aggregate_image_properties_isolation_separator`` = ``.``
- (StrOpt) When using the aggregate_image_properties_isolation filter, the relevant metadata keys are prefixed with the namespace defined in the aggregate_image_properties_isolation_namespace configuration option plus a separator. This option defines the separator to be used. It defaults to a period ('.'). Valid values are strings. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'aggregate_image_properties_isolation' filter is enabled. * Services that use this: ``nova-scheduler`` * Related options: aggregate_image_properties_isolation_namespace
* - ``baremetal_scheduler_default_filters`` = ``RetryFilter, AvailabilityZoneFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ExactRamFilter, ExactDiskFilter, ExactCoreFilter``
- (ListOpt) This option specifies the filters used for filtering baremetal hosts. The value should be a list of strings, with each string being the name of a filter class to be used. When used, they will be applied in order, so place your most restrictive filters first to make the filtering process more efficient. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. * Services that use this: ``nova-scheduler`` * Related options: If the 'scheduler_use_baremetal_filters' option is False, this option has no effect.
* - ``cpu_allocation_ratio`` = ``0.0``
- (FloatOpt) Virtual CPU to physical CPU allocation ratio which affects all CPU filters. This configuration specifies a global ratio for CoreFilter. For AggregateCoreFilter, it will fall back to this configuration value if no per-aggregate setting found. NOTE: This can be set per-compute, or if set to 0.0, the value set on the scheduler node(s) will be used and defaulted to 16.0
* - ``disk_allocation_ratio`` = ``1.0``
- (FloatOpt) This is the virtual disk to physical disk allocation ratio used by the disk_filter.py script to determine if a host has sufficient disk space to fit a requested instance. A ratio greater than 1.0 will result in over-subscription of the available physical disk, which can be useful for more efficiently packing instances created with images that do not use the entire virtual disk, such as sparse or compressed images. It can be set to a value between 0.0 and 1.0 in order to preserve a percentage of the disk for uses other than instances.
* - ``disk_weight_multiplier`` = ``1.0``
- (FloatOpt) Multiplier used for weighing free disk space. Negative numbers mean to stack vs spread.
* - ``io_ops_weight_multiplier`` = ``-1.0``
- (FloatOpt) This option determines how hosts with differing workloads are weighed. Negative values, such as the default, will result in the scheduler preferring hosts with lighter workloads whereas positive values will prefer hosts with heavier workloads. Another way to look at it is that positive values for this option will tend to schedule instances onto hosts that are already busy, while negative values will tend to distribute the workload across more hosts. The absolute value, whether positive or negative, controls how strong the io_ops weigher is relative to other weighers. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'io_ops' weigher is enabled. Valid values are numeric, either integer or float. * Services that use this: ``nova-scheduler`` * Related options: None
* - ``isolated_hosts`` =
- (ListOpt) If there is a need to restrict some images to only run on certain designated hosts, list those host names here. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'IsolatedHostsFilter' filter is enabled. * Services that use this: ``nova-scheduler`` * Related options: scheduler/isolated_images scheduler/restrict_isolated_hosts_to_isolated_images
* - ``isolated_images`` =
- (ListOpt) If there is a need to restrict some images to only run on certain designated hosts, list those image UUIDs here. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'IsolatedHostsFilter' filter is enabled. * Services that use this: ``nova-scheduler`` * Related options: scheduler/isolated_hosts scheduler/restrict_isolated_hosts_to_isolated_images
* - ``max_instances_per_host`` = ``50``
- (IntOpt) If you need to limit the number of instances on any given host, set this option to the maximum number of instances you want to allow. The num_instances_filter will reject any host that has at least as many instances as this option's value. Valid values are positive integers; setting it to zero will cause all hosts to be rejected if the num_instances_filter is active. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'num_instances_filter' filter is enabled. * Services that use this: ``nova-scheduler`` * Related options: None
* - ``max_io_ops_per_host`` = ``8``
- (IntOpt) This setting caps the number of instances on a host that can be actively performing IO (in a build, resize, snapshot, migrate, rescue, or unshelve task state) before that host becomes ineligible to build new instances. Valid values are positive integers: 1 or greater. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'io_ops_filter' filter is enabled. * Services that use this: ``nova-scheduler`` * Related options: None
* - ``ram_allocation_ratio`` = ``0.0``
- (FloatOpt) Virtual ram to physical ram allocation ratio which affects all ram filters. This configuration specifies a global ratio for RamFilter. For AggregateRamFilter, it will fall back to this configuration value if no per-aggregate setting found. NOTE: This can be set per-compute, or if set to 0.0, the value set on the scheduler node(s) will be used and defaulted to 1.5
* - ``ram_weight_multiplier`` = ``1.0``
- (FloatOpt) This option determines how hosts with more or less available RAM are weighed. A positive value will result in the scheduler preferring hosts with more available RAM, and a negative number will result in the scheduler preferring hosts with less available RAM. Another way to look at it is that positive values for this option will tend to spread instances across many hosts, while negative values will tend to fill up (stack) hosts as much as possible before scheduling to a less-used host. The absolute value, whether positive or negative, controls how strong the RAM weigher is relative to other weighers. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'ram' weigher is enabled. Valid values are numeric, either integer or float. * Services that use this: ``nova-scheduler`` * Related options: None
* - ``reserved_host_disk_mb`` = ``0``
- (IntOpt) Amount of disk in MB to reserve for the host
* - ``reserved_host_memory_mb`` = ``512``
- (IntOpt) Amount of memory in MB to reserve for the host
* - ``restrict_isolated_hosts_to_isolated_images`` = ``True``
- (BoolOpt) This setting determines if the scheduler's isolated_hosts filter will allow non-isolated images on a host designated as an isolated host. When set to True (the default), non-isolated images will not be allowed to be built on isolated hosts. When False, non-isolated images can be built on both isolated and non-isolated hosts alike. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'IsolatedHostsFilter' filter is enabled. Even then, this option doesn't affect the behavior of requests for isolated images, which will *always* be restricted to isolated hosts. * Services that use this: ``nova-scheduler`` * Related options: scheduler/isolated_images scheduler/isolated_hosts
* - ``scheduler_available_filters`` = ``['nova.scheduler.filters.all_filters']``
- (MultiStrOpt) This is an unordered list of the filter classes the Nova scheduler may apply. Only the filters specified in the 'scheduler_default_filters' option will be used, but any filter appearing in that option must also be included in this list. By default, this is set to all filters that are included with Nova. If you wish to change this, replace this with a list of strings, where each element is the path to a filter. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. * Services that use this: ``nova-scheduler`` * Related options: scheduler_default_filters
* - ``scheduler_default_filters`` = ``RetryFilter, AvailabilityZoneFilter, RamFilter, DiskFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter``
- (ListOpt) This option is the list of filter class names that will be used for filtering hosts. The use of 'default' in the name of this option implies that other filters may sometimes be used, but that is not the case. These filters will be applied in the order they are listed, so place your most restrictive filters first to make the filtering process more efficient. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. * Services that use this: ``nova-scheduler`` * Related options: All of the filters in this option *must* be present in the 'scheduler_available_filters' option, or a SchedulerHostFilterNotFound exception will be raised.
* - ``scheduler_driver`` = ``filter_scheduler``
- (StrOpt) The class of the driver used by the scheduler. This should be chosen from one of the entrypoints under the namespace 'nova.scheduler.driver' of file 'setup.cfg'. If nothing is specified in this option, the 'filter_scheduler' is used. This option also supports deprecated full Python path to the class to be used. For example, "nova.scheduler.filter_scheduler.FilterScheduler". But note: this support will be dropped in the N Release. Other options are: * 'caching_scheduler' which aggressively caches the system state for better individual scheduler performance at the risk of more retries when running multiple schedulers. * 'chance_scheduler' which simply picks a host at random. * 'fake_scheduler' which is used for testing. * Services that use this: ``nova-scheduler`` * Related options: None
* - ``scheduler_driver_task_period`` = ``60``
- (IntOpt) This value controls how often (in seconds) to run periodic tasks in the scheduler. The specific tasks that are run for each period are determined by the particular scheduler being used. If this is larger than the nova-service 'service_down_time' setting, Nova may report the scheduler service as down. This is because the scheduler driver is responsible for sending a heartbeat and it will only do that as often as this option allows. As each scheduler can work a little differently than the others, be sure to test this with your selected scheduler. * Services that use this: ``nova-scheduler`` * Related options: ``nova-service service_down_time``
* - ``scheduler_host_manager`` = ``host_manager``
- (StrOpt) The scheduler host manager to use, which manages the in-memory picture of the hosts that the scheduler uses. The option value should be chosen from one of the entrypoints under the namespace 'nova.scheduler.host_manager' of file 'setup.cfg'. For example, 'host_manager' is the default setting. Aside from the default, the only other option as of the Mitaka release is 'ironic_host_manager', which should be used if you're using Ironic to provision bare-metal instances. This option also supports a full class path style, for example "nova.scheduler.host_manager.HostManager", but note this support is deprecated and will be dropped in the N release. * Services that use this: ``nova-scheduler`` * Related options: None
* - ``scheduler_host_subset_size`` = ``1``
- (IntOpt) New instances will be scheduled on a host chosen randomly from a subset of the N best hosts, where N is the value set by this option. Valid values are 1 or greater. Any value less than one will be treated as 1. Setting this to a value greater than 1 will reduce the chance that multiple scheduler processes handling similar requests will select the same host, creating a potential race condition. By selecting a host randomly from the N hosts that best fit the request, the chance of a conflict is reduced. However, the higher you set this value, the less optimal the chosen host may be for a given request. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. * Services that use this: ``nova-scheduler`` * Related options: None
* - ``scheduler_instance_sync_interval`` = ``120``
- (IntOpt) Waiting time interval (seconds) between sending the scheduler a list of current instance UUIDs to verify that its view of instances is in sync with nova. If the CONF option `scheduler_tracks_instance_changes` is False, changing this option will have no effect.
* - ``scheduler_json_config_location`` =
- (StrOpt) The absolute path to the scheduler configuration JSON file, if any. This file location is monitored by the scheduler for changes and reloads it if needed. It is converted from JSON to a Python data structure, and passed into the filtering and weighing functions of the scheduler, which can use it for dynamic configuration. * Services that use this: ``nova-scheduler`` * Related options: None
* - ``scheduler_manager`` = ``nova.scheduler.manager.SchedulerManager``
- (StrOpt) Full class name for the Manager for scheduler
* - ``scheduler_max_attempts`` = ``3``
- (IntOpt) This is the maximum number of attempts that will be made to schedule an instance before it is assumed that the failures aren't due to normal occasional race conflicts, but rather some other problem. When this is reached a MaxRetriesExceeded exception is raised, and the instance is set to an error state. Valid values are positive integers (1 or greater). * Services that use this: ``nova-scheduler`` * Related options: None
* - ``scheduler_topic`` = ``scheduler``
- (StrOpt) This is the message queue topic that the scheduler 'listens' on. It is used when the scheduler service is started up to configure the queue, and whenever an RPC call to the scheduler is made. There is almost never any reason to ever change this value. * Services that use this: ``nova-scheduler`` * Related options: None
* - ``scheduler_tracks_instance_changes`` = ``True``
- (BoolOpt) The scheduler may need information about the instances on a host in order to evaluate its filters and weighers. The most common need for this information is for the (anti-)affinity filters, which need to choose a host based on the instances already running on a host. If the configured filters and weighers do not need this information, disabling this option will improve performance. It may also be disabled when the tracking overhead proves too heavy, although this will cause classes requiring host usage data to query the database on each request instead. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. * Services that use this: ``nova-scheduler`` * Related options: None
* - ``scheduler_use_baremetal_filters`` = ``False``
- (BoolOpt) Set this to True to tell the nova scheduler that it should use the filters specified in the 'baremetal_scheduler_default_filters' option. If you are not scheduling baremetal nodes, leave this at the default setting of False. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. * Services that use this: ``nova-scheduler`` * Related options: If this option is set to True, then the filters specified in the 'baremetal_scheduler_default_filters' are used instead of the filters specified in 'scheduler_default_filters'.
* - ``scheduler_weight_classes`` = ``nova.scheduler.weights.all_weighers``
- (ListOpt) This is a list of weigher class names. Only hosts which pass the filters are weighed. The weight for any host starts at 0, and the weighers order these hosts by adding to or subtracting from the weight assigned by the previous weigher. Weights may become negative. An instance will be scheduled to one of the N most-weighted hosts, where N is 'scheduler_host_subset_size'. By default, this is set to all weighers that are included with Nova. If you wish to change this, replace this with a list of strings, where each element is the path to a weigher. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. * Services that use this: ``nova-scheduler`` * Related options: None
* - ``soft_affinity_weight_multiplier`` = ``1.0``
- (FloatOpt) Multiplier used for weighing hosts for group soft-affinity. Only a positive value is meaningful. Negative means that the behavior will change to the opposite, which is soft-anti-affinity.
* - ``soft_anti_affinity_weight_multiplier`` = ``1.0``
- (FloatOpt) Multiplier used for weighing hosts for group soft-anti-affinity. Only a positive value is meaningful. Negative means that the behavior will change to the opposite, which is soft-affinity.
* - **[cells]**
-
* - ``ram_weight_multiplier`` = ``10.0``
- (FloatOpt) Ram weight multiplier. Multiplier used for weighing ram. Negative numbers indicate that Compute should stack VMs on one host instead of spreading out new VMs to more hosts in the cell. Possible values: * Numeric multiplier Services which consume this: * nova-cells Related options: * None
* - ``scheduler_filter_classes`` = ``nova.cells.filters.all_filters``
- (ListOpt) Scheduler filter classes. Filter classes the cells scheduler should use. An entry of "nova.cells.filters.all_filters" maps to all cells filters included with nova. As of the Mitaka release the following filter classes are available: Different cell filter: A scheduler hint of 'different_cell' with a value of a full cell name may be specified to route a build away from a particular cell. Image properties filter: Image metadata named 'hypervisor_version_requires' with a version specification may be specified to ensure the build goes to a cell which has hypervisors of the required version. If either the version requirement on the image or the hypervisor capability of the cell is not present, this filter returns without filtering out the cells. Target cell filter: A scheduler hint of 'target_cell' with a value of a full cell name may be specified to route a build to a particular cell. No error handling is done, as there is no way to know whether the full path is valid. As an admin user, you can also add a filter that directs builds to a particular cell. Possible values: * 'nova.cells.filters.all_filters' is the default option * Otherwise it should be the full Python path to the class to be used Services which consume this: * nova-cells Related options: * None
* - ``scheduler_retries`` = ``10``
- (IntOpt) Scheduler retries. Specifies how many times the scheduler tries to launch a new instance when no cells are available. Possible values: * Positive integer value Services which consume this: * nova-cells Related options: * This value is used with the ``scheduler_retry_delay`` value while retrying to find a suitable cell.
* - ``scheduler_retry_delay`` = ``2``
- (IntOpt) Scheduler retry delay. Specifies the delay (in seconds) between scheduling retries when no cell can be found to place the new instance on. When the instance could not be scheduled to a cell after ``scheduler_retries`` in combination with ``scheduler_retry_delay``, the scheduling of the instance fails. Possible values: * Time in seconds. Services which consume this: * nova-cells Related options: * This value is used with the ``scheduler_retries`` value while retrying to find a suitable cell.
* - ``scheduler_weight_classes`` = ``nova.cells.weights.all_weighers``
- (ListOpt) Scheduler weight classes. Weigher classes the cells scheduler should use. An entry of "nova.cells.weights.all_weighers" maps to all cell weighers included with nova. As of the Mitaka release the following weight classes are available: mute_child: Downgrades the likelihood of child cells being chosen for scheduling requests, which haven't sent capacity or capability updates in a while. Options include mute_weight_multiplier (multiplier for mute children; value should be negative). ram_by_instance_type: Select cells with the most RAM capacity for the instance type being requested. Because higher weights win, Compute returns the number of available units for the instance type requested. The ram_weight_multiplier option defaults to 10.0, which adds to the weight by a factor of 10. Use a negative number to stack VMs on one host instead of spreading out new VMs to more hosts in the cell. weight_offset: Allows modifying the database to weight a particular cell. The highest weight will be the first cell to be scheduled for launching an instance. When the weight_offset of a cell is set to 0, it is unlikely to be picked but it could be picked if other cells have a lower weight, like if they're full. And when the weight_offset is set to a very high value (for example, '999999999999999'), it is likely to be picked if other cells do not have a higher weight. Possible values: * 'nova.cells.weights.all_weighers' is the default option * Otherwise it should be the full Python path to the class to be used Services which consume this: * nova-cells Related options: * None
* - **[metrics]**
-
* - ``required`` = ``True``
- (BoolOpt) This setting determines how any unavailable metrics are treated. If this option is set to True, any hosts for which a metric is unavailable will raise an exception, so it is recommended to also use the MetricFilter to filter out those hosts before weighing. When this option is False, any metric being unavailable for a host will set the host weight to 'weight_of_unavailable'. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. * Services that use this: ``nova-scheduler`` * Related options: weight_of_unavailable
* - ``weight_multiplier`` = ``1.0``
- (FloatOpt) When using metrics to weight the suitability of a host, you can use this option to change how the calculated weight influences the weight assigned to a host as follows: * Greater than 1.0: increases the effect of the metric on overall weight. * Equal to 1.0: No change to the calculated weight. * Less than 1.0, greater than 0: reduces the effect of the metric on overall weight. * 0: The metric value is ignored, and the value of the 'weight_of_unavailable' option is returned instead. * Greater than -1.0, less than 0: the effect is reduced and reversed. * -1.0: the effect is reversed * Less than -1.0: the effect is increased proportionally and reversed. Valid values are numeric, either integer or float. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. * Services that use this: ``nova-scheduler`` * Related options: weight_of_unavailable
* - ``weight_of_unavailable`` = ``-10000.0``
- (FloatOpt) When any of the following conditions are met, this value will be used in place of any actual metric value: * One of the metrics named in 'weight_setting' is not available for a host, and the value of 'required' is False. * The ratio specified for a metric in 'weight_setting' is 0. * The 'weight_multiplier' option is set to 0. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. * Services that use this: ``nova-scheduler`` * Related options: weight_setting required weight_multiplier
* - ``weight_setting`` =
- (ListOpt) This setting specifies the metrics to be weighed and the relative ratios for each metric. This should be a single string value, consisting of a series of one or more 'name=ratio' pairs, separated by commas, where 'name' is the name of the metric to be weighed, and 'ratio' is the relative weight for that metric. Note that if the ratio is set to 0, the metric value is ignored, and instead the weight will be set to the value of the 'weight_of_unavailable' option. As an example, let's consider the case where this option is set to: ``name1=1.0, name2=-1.3`` The final weight will be: ``(name1.value * 1.0) + (name2.value * -1.3)`` This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. * Services that use this: ``nova-scheduler`` * Related options: weight_of_unavailable
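Taken together, the filter-scheduler options above might appear in ``nova.conf`` roughly as follows. This is a minimal sketch: the filter list, multipliers, and metric name are illustrative choices, not recommendations.

```ini
[DEFAULT]
# Use the filter scheduler and its in-memory host manager (Mitaka defaults)
scheduler_driver = filter_scheduler
scheduler_host_manager = host_manager

# Filters are applied in order; most restrictive first is most efficient
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, ComputeFilter, ComputeCapabilitiesFilter

# Prefer hosts with more free RAM (positive spreads, negative stacks)
ram_weight_multiplier = 1.0

# Pick randomly among the 3 best hosts to reduce scheduler races
scheduler_host_subset_size = 3

[metrics]
# Hypothetical metric: weigh hosts down as CPU utilization rises
weight_setting = cpu.percent=-1.0
required = False
weight_of_unavailable = -10000.0
```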


@@ -24,5 +24,5 @@
- (BoolOpt) If passed, use fake network devices and addresses
* - ``monkey_patch`` = ``False``
- (BoolOpt) Whether to apply monkey patching
* - ``monkey_patch_modules`` = ``nova.compute.api:nova.notifications.notify_decorator``
- (ListOpt) List of modules/decorators to monkey patch
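For example, enabling monkey patching so that compute API calls emit notifications could look like the following ``nova.conf`` fragment (a sketch based on the default decorator shown above):

```ini
[DEFAULT]
# Apply the listed decorators at service start-up
monkey_patch = True

# Wrap nova.compute.api calls with the notification decorator
monkey_patch_modules = nova.compute.api:nova.notifications.notify_decorator
```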


@@ -19,16 +19,16 @@
* - **[trusted_computing]**
-
* - ``attestation_api_url`` = ``/OpenAttestationWebServices/V1.0``
- (StrOpt) The URL on the attestation server to use. See the `attestation_server` help text for more information about host verification. This value must be just the path portion of the full URL, as it will be joined to the host specified in the attestation_server option. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'TrustedFilter' filter is enabled. * Services that use this: ``nova-scheduler`` * Related options: attestation_server attestation_server_ca_file attestation_port attestation_auth_blob attestation_auth_timeout attestation_insecure_ssl
* - ``attestation_auth_blob`` = ``None``
- (StrOpt) Attestation servers require a specific blob that is used to authenticate. The content and format of the blob are determined by the particular attestation server being used. There is no default value; you must supply the value as specified by your attestation service. See the `attestation_server` help text for more information about host verification. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'TrustedFilter' filter is enabled. * Services that use this: ``nova-scheduler`` * Related options: attestation_server attestation_server_ca_file attestation_port attestation_api_url attestation_auth_timeout attestation_insecure_ssl
* - ``attestation_auth_timeout`` = ``60``
- (IntOpt) This value controls how long a successful attestation is cached. Once this period has elapsed, a new attestation request will be made. See the `attestation_server` help text for more information about host verification. The value is in seconds. Valid values must be positive integers for any caching; setting this to zero or a negative value will result in calls to the attestation_server for every request, which may impact performance. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'TrustedFilter' filter is enabled. * Services that use this: ``nova-scheduler`` * Related options: attestation_server attestation_server_ca_file attestation_port attestation_api_url attestation_auth_blob attestation_insecure_ssl
* - ``attestation_insecure_ssl`` = ``False``
- (BoolOpt) When set to True, the SSL certificate verification is skipped for the attestation service. See the `attestation_server` help text for more information about host verification. Valid values are True or False. The default is False. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'TrustedFilter' filter is enabled. * Services that use this: ``nova-scheduler`` * Related options: attestation_server attestation_server_ca_file attestation_port attestation_api_url attestation_auth_blob attestation_auth_timeout
* - ``attestation_port`` = ``8443``
- (StrOpt) The port to use when connecting to the attestation server. See the `attestation_server` help text for more information about host verification. Valid values are strings, not integers, but must be digits only. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'TrustedFilter' filter is enabled. * Services that use this: ``nova-scheduler`` * Related options: attestation_server attestation_server_ca_file attestation_api_url attestation_auth_blob attestation_auth_timeout attestation_insecure_ssl
* - ``attestation_server`` = ``None``
- (StrOpt) The host to use as the attestation server. Cloud computing pools can involve thousands of compute nodes located at different geographical locations, making it difficult for cloud providers to identify a node's trustworthiness. When using the Trusted filter, users can request that their VMs only be placed on nodes that have been verified by the attestation server specified in this option. The value is a string, and can be either an IP address or FQDN. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'TrustedFilter' filter is enabled. * Services that use this: ``nova-scheduler`` * Related options: attestation_server_ca_file attestation_port attestation_api_url attestation_auth_blob attestation_auth_timeout attestation_insecure_ssl
* - ``attestation_server_ca_file`` = ``None``
- (StrOpt) The absolute path to the certificate to use for authentication when connecting to the attestation server. See the `attestation_server` help text for more information about host verification. The value is a string, and must point to a file that is readable by the scheduler. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the 'TrustedFilter' filter is enabled. * Services that use this: ``nova-scheduler`` * Related options: attestation_server attestation_port attestation_api_url attestation_auth_blob attestation_auth_timeout attestation_insecure_ssl
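A ``[trusted_computing]`` section wiring the TrustedFilter to an attestation server might look like the sketch below. The hostname, CA path, and auth blob are placeholders; use the values supplied by your attestation service.

```ini
[trusted_computing]
# Hypothetical attestation server (IP address or FQDN)
attestation_server = attest.example.com
attestation_port = 8443
attestation_api_url = /OpenAttestationWebServices/V1.0

# Illustrative CA file path; must be readable by nova-scheduler
attestation_server_ca_file = /etc/nova/ssl/attestation-ca.pem

# Cache successful attestations for 60 seconds
attestation_auth_timeout = 60
attestation_insecure_ssl = False
```

Remember that these options only take effect when 'TrustedFilter' appears in the scheduler's enabled filters.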


@@ -19,15 +19,15 @@
* - **[cells]**
-
* - ``scheduler`` = ``nova.cells.scheduler.CellsScheduler``
- (StrOpt) Cells scheduler. The class of the driver used by the cells scheduler. This should be the full Python path to the class to be used. If nothing is specified in this option, the CellsScheduler is used. Possible values: * 'nova.cells.scheduler.CellsScheduler' is the default option * Otherwise it should be the full Python path to the class to be used Services which consume this: * nova-cells Related options: * None
* - **[upgrade_levels]**
-
* - ``cells`` = ``None``
- (StrOpt) Set a version cap for messages sent to local cells services
* - ``cert`` = ``None``
- (StrOpt) Specifies the maximum version for messages sent from cert services. This should be the minimum value that is supported by all of the deployed cert services. Possible values: Any valid OpenStack release name, in lower case, such as 'mitaka' or 'liberty'. Alternatively, it can be any string representing a version number in the format 'N.N'; for example, possible values might be '1.12' or '2.0'. * Services which consume this: ``nova-cert`` * Related options: None
* - ``compute`` = ``None``
- (StrOpt) Set a version cap for messages sent to compute services. Set this option to "auto" if you want to let the compute RPC module automatically determine what version to use based on the service versions in the deployment. Otherwise, you can set this to a specific version to pin this service to messages at a particular level. All services of a single type (i.e. compute) should be configured to use the same version, and it should be set to the minimum commonly-supported version of all those services in the deployment.
* - ``conductor`` = ``None``
- (StrOpt) Set a version cap for messages sent to conductor services
* - ``console`` = ``None``
@@ -39,4 +39,4 @@
* - ``network`` = ``None``
- (StrOpt) Set a version cap for messages sent to network services
* - ``scheduler`` = ``None``
- (StrOpt) Sets a version cap (limit) for messages sent to scheduler services. In the situation where there were multiple scheduler services running, and they were not being upgraded together, you would set this to the lowest deployed version to guarantee that other services never send messages that any of your running schedulers cannot understand. This is rarely needed in practice as most deployments run a single scheduler. It exists mainly for design compatibility with the other services, such as compute, which are routinely upgraded in a rolling fashion. * Services that use this: ``nova-compute, nova-conductor`` * Related options: None
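During a rolling upgrade, the ``[upgrade_levels]`` section is typically set before upgrading the first node and removed once all services run the new release. A sketch (release names illustrative):

```ini
[upgrade_levels]
# Let the compute RPC layer pick the lowest commonly supported version
compute = auto

# Pin other services to the old release until their upgrade completes
conductor = liberty
cert = liberty
```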

View File

@ -22,10 +22,6 @@
- (BoolOpt) Become a daemon (background process)
* - ``key`` = ``None``
- (StrOpt) SSL key file (if separate from cert)
* - ``novncproxy_host`` = ``0.0.0.0``
- (StrOpt) Host on which to listen for incoming requests
* - ``novncproxy_port`` = ``6080``
- (IntOpt) Port on which to listen for incoming requests
* - ``record`` = ``False``
- (BoolOpt) Record sessions to FILE.[session_number]
* - ``source_is_ipv6`` = ``False``
@ -43,14 +39,22 @@
* - **[vnc]**
-
* - ``enabled`` = ``True``
- (BoolOpt) Enable VNC related features
- (BoolOpt) Enable VNC related features. Guests will get created with graphical devices to support this. Clients (for example Horizon) can then establish a VNC connection to the guest. Possible values: * True: Enables the feature * False: Disables the feature Services which consume this: * ``nova-compute`` Related options: * None
* - ``keymap`` = ``en-us``
- (StrOpt) Keymap for VNC
- (StrOpt) Keymap for VNC. The keyboard mapping (keymap) determines which keyboard layout a VNC session should use by default. Possible values: * A keyboard layout which is supported by the underlying hypervisor on this node. This is usually an 'IETF language tag' (for example 'en-us'). If you use QEMU as the hypervisor, you can find the list of supported keyboard layouts in ``/usr/share/qemu/keymaps``. Services which consume this: * ``nova-compute`` Related options: * None
* - ``novncproxy_base_url`` = ``http://127.0.0.1:6080/vnc_auto.html``
- (StrOpt) Location of VNC console proxy, in the form "http://127.0.0.1:6080/vnc_auto.html"
- (StrOpt) Public address of noVNC VNC console proxy. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client. This option sets the public base URL to which client systems will connect. noVNC clients can use this address to connect to the noVNC instance and, by extension, the VNC sessions. Possible values: * A URL Services which consume this: * ``nova-compute`` Related options: * novncproxy_host * novncproxy_port
* - ``novncproxy_host`` = ``0.0.0.0``
- (StrOpt) IP address that the noVNC console proxy should bind to. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client. This option sets the private address to which the noVNC console proxy service should bind. Possible values: * An IP address Services which consume this: * ``nova-compute`` Related options: * novncproxy_port * novncproxy_base_url
* - ``novncproxy_port`` = ``6080``
- (IntOpt) Port that the noVNC console proxy should bind to. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client. This option sets the private port to which the noVNC console proxy service should bind. Possible values: * A port number Services which consume this: * ``nova-compute`` Related options: * novncproxy_host * novncproxy_base_url
* - ``vncserver_listen`` = ``127.0.0.1``
- (StrOpt) IP address on which instance vncservers should listen
- (StrOpt) The IP address on which an instance should listen for incoming VNC connection requests on this node. Possible values: * An IP address Services which consume this: * ``nova-compute`` Related options: * None
* - ``vncserver_proxyclient_address`` = ``127.0.0.1``
- (StrOpt) The address to which proxy clients (like nova-xvpvncproxy) should connect
- (StrOpt) Private, internal address of VNC console proxy. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. This option sets the private address to which proxy clients, such as ``nova-xvpvncproxy``, should connect. Possible values: * An IP address Services which consume this: * ``nova-compute`` Related options: * None
* - ``xvpvncproxy_base_url`` = ``http://127.0.0.1:6081/console``
- (StrOpt) Location of nova xvp VNC console proxy, in the form "http://127.0.0.1:6081/console"
- (StrOpt) Public address of XVP VNC console proxy. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. Xen provides the Xenserver VNC Proxy, or XVP, as an alternative to the websocket-based noVNC proxy used by Libvirt. In contrast to noVNC, XVP clients are Java-based. This option sets the public base URL to which client systems will connect. XVP clients can use this address to connect to the XVP instance and, by extension, the VNC sessions. Possible values: * A URL Services which consume this: * ``nova-compute`` Related options: * xvpvncproxy_host * xvpvncproxy_port
* - ``xvpvncproxy_host`` = ``0.0.0.0``
- (StrOpt) IP address that the XVP VNC console proxy should bind to. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. Xen provides the Xenserver VNC Proxy, or XVP, as an alternative to the websocket-based noVNC proxy used by Libvirt. In contrast to noVNC, XVP clients are Java-based. This option sets the private address to which the XVP VNC console proxy service should bind. Possible values: * An IP address Services which consume this: * ``nova-compute`` Related options: * xvpvncproxy_port * xvpvncproxy_base_url
* - ``xvpvncproxy_port`` = ``6081``
- (IntOpt) Port that the XVP VNC console proxy should bind to. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. Xen provides the Xenserver VNC Proxy, or XVP, as an alternative to the websocket-based noVNC proxy used by Libvirt. In contrast to noVNC, XVP clients are Java-based. This option sets the private port to which the XVP VNC console proxy service should bind. Possible values: * A port number Services which consume this: * ``nova-compute`` Related options: * xvpvncproxy_host * xvpvncproxy_base_url
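Putting the options above together, a compute node's VNC configuration might look like the following sketch; the addresses and hostname are illustrative only:

```ini
# nova.conf on a compute node -- addresses are examples only
[vnc]
enabled = True
keymap = en-us
# Address the instance's VNC servers bind to on this host
vncserver_listen = 0.0.0.0
# Address the console proxies use to reach this host's VNC servers
vncserver_proxyclient_address = 192.0.2.10
# Public URL handed to clients, pointing at the noVNC proxy
novncproxy_base_url = http://cloud.example.com:6080/vnc_auto.html
```

Note that ``novncproxy_base_url`` must be reachable by end users' browsers, while ``vncserver_proxyclient_address`` only needs to be reachable from the proxy hosts.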

View File

@ -19,7 +19,7 @@
* - **[DEFAULT]**
-
* - ``block_device_allocate_retries`` = ``60``
- (IntOpt) Number of times to retry block device allocation on failures
- (IntOpt) Number of times to retry block device allocation on failures. Starting with Liberty, Cinder can use an image volume cache, which may improve block device allocation performance. See the Cinder ``image_volume_cache_enabled`` configuration option.
* - ``block_device_allocate_retries_interval`` = ``3``
- (IntOpt) Waiting time interval (seconds) between block device allocation retries on failures
* - ``my_block_storage_ip`` = ``$my_ip``

View File

@ -1,24 +0,0 @@
..
Warning: Do not edit this file. It is automatically generated from the
software project's code and your changes will be overwritten.
The tool to generate this file lives in openstack-doc-tools repository.
Please make any changes needed in the code, then run the
autogenerate-config-doc tool from the openstack-doc-tools repository, or
ask for help on the documentation mailing list, IRC channel or meeting.
.. _nova-xvpvncproxy:
.. list-table:: Description of XCP VNC proxy configuration options
:header-rows: 1
:class: config-ref-table
* - Configuration option = Default value
- Description
* - **[DEFAULT]**
-
* - ``xvpvncproxy_host`` = ``0.0.0.0``
- (StrOpt) Address that the XCP VNC proxy should bind to
* - ``xvpvncproxy_port`` = ``6081``
- (IntOpt) Port that the XCP VNC proxy should bind to

View File

@ -18,8 +18,6 @@
- Description
* - **[DEFAULT]**
-
* - ``rpc_zmq_all_req_rep`` = ``True``
- (BoolOpt) Use REQ/REP pattern for all methods CALL/CAST/FANOUT.
* - ``rpc_zmq_bind_address`` = ``*``
- (StrOpt) ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP. The "host" option should point or resolve to this address.
* - ``rpc_zmq_bind_port_retries`` = ``100``
@ -37,8 +35,10 @@
* - ``rpc_zmq_max_port`` = ``65536``
- (IntOpt) Maximal port number for random ports range.
* - ``rpc_zmq_min_port`` = ``49152``
- (IntOpt) Minimal port number for random ports range.
- (PortOpt) Minimal port number for random ports range.
* - ``rpc_zmq_topic_backlog`` = ``None``
- (IntOpt) Maximum number of ingress messages to locally buffer per topic. Default is unlimited.
* - ``zmq_use_broker`` = ``True``
- (BoolOpt) Shows whether zmq-messaging uses broker or not.
* - ``use_pub_sub`` = ``True``
- (BoolOpt) Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy.
* - ``zmq_target_expire`` = ``120``
- (IntOpt) Expiration timeout in seconds of a name service record about existing target ( < 0 means no timeout).
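The ZeroMQ options above are set in the ``[DEFAULT]`` section. A minimal sketch, assuming the deployment selects the ZeroMQ driver via ``rpc_backend`` and accepts the default port range:

```ini
# nova.conf -- illustrative ZeroMQ driver settings; values mirror the defaults
[DEFAULT]
rpc_backend = zmq
# Wildcard bind; the "host" option should resolve to this address
rpc_zmq_bind_address = *
rpc_zmq_min_port = 49152
rpc_zmq_max_port = 65536
# Fanout methods use PUB/SUB, which always goes through the zmq proxy
use_pub_sub = True
```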

View File

@ -1,28 +0,0 @@
..
Warning: Do not edit this file. It is automatically generated from the
software project's code and your changes will be overwritten.
The tool to generate this file lives in openstack-doc-tools repository.
Please make any changes needed in the code, then run the
autogenerate-config-doc tool from the openstack-doc-tools repository, or
ask for help on the documentation mailing list, IRC channel or meeting.
.. _nova-zookeeper:
.. list-table:: Description of Zookeeper configuration options
:header-rows: 1
:class: config-ref-table
* - Configuration option = Default value
- Description
* - **[zookeeper]**
-
* - ``address`` = ``None``
- (StrOpt) The ZooKeeper addresses for servicegroup service in the format of host1:port,host2:port,host3:port
* - ``recv_timeout`` = ``4000``
- (IntOpt) The recv_timeout parameter for the zk session
* - ``sg_prefix`` = ``/servicegroups``
- (StrOpt) The prefix used in ZooKeeper to store ephemeral nodes
* - ``sg_retry_interval`` = ``5``
- (IntOpt) Number of seconds to wait until retrying to join the session

View File

@ -13,7 +13,6 @@ bindir common
block_device_allocate_retries volumes
block_device_allocate_retries_interval volumes
boot_script_template vpn
buckets_path s3
ca_file ca
ca_path ca
cert ca
@ -66,6 +65,7 @@ dhcp_lease_time network
dhcpbridge network
dhcpbridge_flagfile network
disk_allocation_ratio scheduler
disk_weight_multiplier scheduler
dmz_cidr vpn
dmz_mask vpn
dmz_net vpn
@ -74,17 +74,6 @@ dns_update_periodic_interval network
dnsmasq_config_file network
ebtables_exec_attempts network
ebtables_retry_interval network
ec2_dmz_host ec2
ec2_host ec2
ec2_listen ec2
ec2_listen_port ec2
ec2_path ec2
ec2_port ec2
ec2_private_dns_show_ip ec2
ec2_scheme ec2
ec2_strict_validation ec2
ec2_timestamp_expiry ec2
ec2_workers ec2
enable_instance_password compute
enable_network_quota quota
enable_new_services api
@ -138,8 +127,6 @@ isolated_images scheduler
key vnc
key_file ca
keys_path ca
keystone_ec2_insecure ec2
keystone_ec2_url ec2
l3_lib network
ldap_dns_base_dn ldap
ldap_dns_password ldap
@ -154,14 +141,10 @@ ldap_dns_user ldap
linuxnet_interface_driver network
linuxnet_ovs_integration_bridge network
live_migration_retry_count livemigration
lockout_attempts ec2
lockout_minutes ec2
lockout_window ec2
log_config_append logging
log_date_format logging
log_dir logging
log_file logging
log_format logging
logging_context_format_string logging
logging_debug_format_suffix logging
logging_default_format_string logging
@ -201,13 +184,9 @@ network_topic network
networks_path network
neutron_default_tenant_id neutron
non_inheritable_image_properties api
notification_driver amqp
notification_topics amqp
notification_transport_url amqp
notification_format rpc
notify_api_faults common
notify_on_state_change common
novncproxy_host vnc
novncproxy_port vnc
null_kernel api
num_networks network
osapi_compute_ext_list api
@ -221,16 +200,11 @@ osapi_glance_link_prefix glance
osapi_hide_server_address_states api
osapi_max_limit policy
ovs_vsctl_timeout network
password redis
password_length policy
pci_alias pci
pci_passthrough_whitelist pci
periodic_enable periodic
periodic_fuzzy_delay periodic
policy_default_rule policy
policy_dirs policy
policy_file policy
port redis
preallocate_images hypervisor
project_cert_subject ca
public_interface network
@ -257,7 +231,6 @@ ram_weight_multiplier scheduler
reboot_timeout compute
reclaim_instance_interval compute
record vnc
region_list ec2
remove_unused_base_images libvirt
remove_unused_original_minimum_age_seconds libvirt
report_interval common
@ -276,7 +249,6 @@ rpc_cast_timeout rpc
rpc_conn_pool_size rpc
rpc_poll_timeout rpc
rpc_response_timeout rpc
rpc_zmq_all_req_rep zeromq
rpc_zmq_bind_address zeromq
rpc_zmq_bind_port_retries zeromq
rpc_zmq_concurrency zeromq
@ -293,8 +265,6 @@ running_deleted_instance_timeout compute
s3_access_key s3
s3_affix_tenant s3
s3_host s3
s3_listen s3
s3_listen_port s3
s3_port s3
s3_secret_key s3
s3_use_ssl s3
@ -323,6 +293,8 @@ shelved_offload_time compute
shelved_poll_interval compute
shutdown_timeout compute
snapshot_name_template api
soft_affinity_weight_multiplier scheduler
soft_anti_affinity_weight_multiplier scheduler
source_is_ipv6 vnc
ssl_ca_file ca
ssl_cert_file ca
@ -346,6 +318,7 @@ use_ipv6 ipv6
use_network_dns_servers network
use_neutron_default_nets network
use_project_ca ca
use_pub_sub zeromq
use_rootwrap_daemon common
use_single_default_gateway network
use_stderr logging
@ -373,9 +346,7 @@ web vnc
wsgi_default_pool_size api
wsgi_keep_alive api
wsgi_log_format api
xvpvncproxy_host xvpvncproxy
xvpvncproxy_port xvpvncproxy
zmq_use_broker zeromq
zmq_target_expire zeromq
api_database/connection database
api_database/connection_debug database
api_database/connection_trace database
@ -396,6 +367,19 @@ barbican/insecure barbican
barbican/keyfile barbican
barbican/os_region_name barbican
barbican/timeout barbican
cache/backend cache
cache/backend_argument cache
cache/config_prefix cache
cache/debug_cache_backend cache
cache/enabled cache
cache/expiration_time cache
cache/memcache_dead_retry cache
cache/memcache_pool_connection_get_timeout cache
cache/memcache_pool_maxsize cache
cache/memcache_pool_unused_timeout cache
cache/memcache_servers cache
cache/memcache_socket_timeout cache
cache/proxies cache
cells/bandwidth_update_interval quota
cells/call_timeout cells
cells/capabilities cells
@ -479,6 +463,7 @@ glance/host glance
glance/num_retries glance
glance/port glance
glance/protocol glance
glance/verify_glance_signatures glance
guestfs/debug debug
hyperv/config_drive_cdrom configdrive
hyperv/config_drive_inject_password configdrive
@ -515,10 +500,10 @@ keystone_authtoken/admin_token auth_token
keystone_authtoken/admin_user auth_token
keystone_authtoken/auth_admin_prefix auth_token
keystone_authtoken/auth_host auth_token
keystone_authtoken/auth_plugin auth_token
keystone_authtoken/auth_port auth_token
keystone_authtoken/auth_protocol auth_token
keystone_authtoken/auth_section auth_token
keystone_authtoken/auth_type auth_token
keystone_authtoken/auth_uri auth_token
keystone_authtoken/auth_version auth_token
keystone_authtoken/cache auth_token
@ -576,7 +561,9 @@ libvirt/live_migration_downtime livemigration
libvirt/live_migration_downtime_delay livemigration
libvirt/live_migration_downtime_steps livemigration
libvirt/live_migration_flag livemigration
libvirt/live_migration_inbound_addr livemigration
libvirt/live_migration_progress_timeout livemigration
libvirt/live_migration_tunnelled livemigration
libvirt/live_migration_uri livemigration
libvirt/mem_stats_period_seconds libvirt
libvirt/nfs_mount_options volumes
@ -589,6 +576,7 @@ libvirt/quobyte_client_cfg quobyte
libvirt/quobyte_mount_point_base quobyte
libvirt/rbd_secret_uuid volumes
libvirt/rbd_user volumes
libvirt/realtime_scheduler_priority libvirt
libvirt/remote_filesystem_transport network
libvirt/remove_unused_kernels libvirt
libvirt/remove_unused_resized_minimum_age_seconds libvirt
@ -613,24 +601,22 @@ libvirt/volume_clear libvirt
libvirt/volume_clear_size libvirt
libvirt/wait_soft_reboot_seconds libvirt
libvirt/xen_hvmloader_path xen
matchmaker_redis/check_timeout redis
matchmaker_redis/host redis
matchmaker_redis/password redis
matchmaker_redis/port redis
matchmaker_redis/sentinel_group_name redis
matchmaker_redis/sentinel_hosts redis
matchmaker_redis/socket_timeout redis
matchmaker_redis/wait_timeout redis
metrics/required scheduler
metrics/weight_multiplier scheduler
metrics/weight_of_unavailable scheduler
metrics/weight_setting scheduler
mks/enabled console
mks/mksproxy_base_url console
neutron/admin_auth_url neutron
neutron/admin_password neutron
neutron/admin_tenant_id neutron
neutron/admin_tenant_name neutron
neutron/admin_user_id neutron
neutron/admin_username neutron
neutron/auth_plugin neutron
neutron/auth_section neutron
neutron/auth_strategy neutron
neutron/auth_type neutron
neutron/cafile neutron
neutron/certfile neutron
neutron/extension_sync_interval neutron
@ -645,6 +631,7 @@ neutron/url neutron
osapi_v21/enabled apiv21
osapi_v21/extensions_blacklist apiv21
osapi_v21/extensions_whitelist apiv21
osapi_v21/project_id_regex apiv21
oslo_concurrency/disable_process_locking rpc
oslo_concurrency/lock_path rpc
oslo_messaging_amqp/allow_insecure_clients rpc
@ -663,27 +650,18 @@ oslo_messaging_amqp/ssl_key_file rpc
oslo_messaging_amqp/ssl_key_password rpc
oslo_messaging_amqp/trace rpc
oslo_messaging_amqp/username rpc
oslo_messaging_qpid/amqp_auto_delete qpid
oslo_messaging_qpid/amqp_durable_queues qpid
oslo_messaging_qpid/qpid_heartbeat qpid
oslo_messaging_qpid/qpid_hostname qpid
oslo_messaging_qpid/qpid_hosts qpid
oslo_messaging_qpid/qpid_password qpid
oslo_messaging_qpid/qpid_port qpid
oslo_messaging_qpid/qpid_protocol qpid
oslo_messaging_qpid/qpid_receiver_capacity qpid
oslo_messaging_qpid/qpid_sasl_mechanisms qpid
oslo_messaging_qpid/qpid_tcp_nodelay qpid
oslo_messaging_qpid/qpid_topology_version qpid
oslo_messaging_qpid/qpid_username qpid
oslo_messaging_qpid/send_single_reply qpid
oslo_messaging_notifications/driver amqp
oslo_messaging_notifications/topics amqp
oslo_messaging_notifications/transport_url amqp
oslo_messaging_rabbit/amqp_auto_delete rabbitmq
oslo_messaging_rabbit/amqp_durable_queues rabbitmq
oslo_messaging_rabbit/fake_rabbit rabbitmq
oslo_messaging_rabbit/heartbeat_rate rabbitmq
oslo_messaging_rabbit/heartbeat_timeout_threshold rabbitmq
oslo_messaging_rabbit/kombu_compression rabbitmq
oslo_messaging_rabbit/kombu_failover_strategy rabbitmq
oslo_messaging_rabbit/kombu_missing_consumer_retry_timeout rabbitmq
oslo_messaging_rabbit/kombu_reconnect_delay rabbitmq
oslo_messaging_rabbit/kombu_reconnect_timeout rabbitmq
oslo_messaging_rabbit/kombu_ssl_ca_certs rabbitmq
oslo_messaging_rabbit/kombu_ssl_certfile rabbitmq
oslo_messaging_rabbit/kombu_ssl_keyfile rabbitmq
@ -691,18 +669,23 @@ oslo_messaging_rabbit/kombu_ssl_version rabbitmq
oslo_messaging_rabbit/rabbit_ha_queues rabbitmq
oslo_messaging_rabbit/rabbit_host rabbitmq
oslo_messaging_rabbit/rabbit_hosts rabbitmq
oslo_messaging_rabbit/rabbit_interval_max rabbitmq
oslo_messaging_rabbit/rabbit_login_method rabbitmq
oslo_messaging_rabbit/rabbit_max_retries rabbitmq
oslo_messaging_rabbit/rabbit_password rabbitmq
oslo_messaging_rabbit/rabbit_port rabbitmq
oslo_messaging_rabbit/rabbit_qos_prefetch_count rabbitmq
oslo_messaging_rabbit/rabbit_retry_backoff rabbitmq
oslo_messaging_rabbit/rabbit_retry_interval rabbitmq
oslo_messaging_rabbit/rabbit_transient_queues_ttl rabbitmq
oslo_messaging_rabbit/rabbit_use_ssl rabbitmq
oslo_messaging_rabbit/rabbit_userid rabbitmq
oslo_messaging_rabbit/rabbit_virtual_host rabbitmq
oslo_messaging_rabbit/send_single_reply rabbitmq
oslo_middleware/max_request_body_size api
oslo_middleware/secure_proxy_ssl_header api
oslo_policy/policy_default_rule policy
oslo_policy/policy_dirs policy
oslo_policy/policy_file policy
oslo_versionedobjects/fatal_exception_format_errors api
rdp/enabled rdp
rdp/html5_proxy_base_url rdp
@ -721,11 +704,6 @@ spice/html5proxy_port spice
spice/keymap spice
spice/server_listen spice
spice/server_proxyclient_address spice
ssl/ca_file ca
ssl/cert_file ca
ssl/ciphers ca
ssl/key_file ca
ssl/version ca
trusted_computing/attestation_api_url trustedcomputing
trusted_computing/attestation_auth_blob trustedcomputing
trusted_computing/attestation_auth_timeout trustedcomputing
@ -770,9 +748,13 @@ vmware/wsdl_location vmware
vnc/enabled vnc
vnc/keymap vnc
vnc/novncproxy_base_url vnc
vnc/novncproxy_host vnc
vnc/novncproxy_port vnc
vnc/vncserver_listen vnc
vnc/vncserver_proxyclient_address vnc
vnc/xvpvncproxy_base_url vnc
vnc/xvpvncproxy_host vnc
vnc/xvpvncproxy_port vnc
workarounds/destroy_after_evacuate common
workarounds/disable_libvirt_livesnapshot common
workarounds/disable_rootwrap common
@ -823,7 +805,3 @@ xenserver/use_join_force xen
xenserver/vhd_coalesce_max_attempts xen
xenserver/vhd_coalesce_poll_interval xen
xenserver/vif_driver xen
zookeeper/address zookeeper
zookeeper/recv_timeout zookeeper
zookeeper/sg_prefix zookeeper
zookeeper/sg_retry_interval zookeeper

View File

@ -2,6 +2,7 @@ apiv21 API v2.1
authentication authentication
availabilityzones availability zones
barbican Barbican
cache Cache
cells cell
configdrive config drive
ephemeral_storage_encryption ephemeral storage encryption
@ -25,4 +26,3 @@ vnc VNC
volumes volumes
xen Xen
xvpvncproxy XCP VNC proxy
zookeeper Zookeeper