<?xml version="1.0" encoding="utf-8"?>
|
|
<!DOCTYPE section [
|
|
<!ENTITY % openstack SYSTEM "../../common/entities/openstack.ent">
|
|
%openstack;
|
|
]>
|
|
<section xmlns="http://docbook.org/ns/docbook"
|
|
xmlns:xi="http://www.w3.org/2001/XInclude"
|
|
xmlns:xlink="http://www.w3.org/1999/xlink"
|
|
version="5.0"
|
|
xml:id="section_compute-scheduler">
|
|
<?dbhtml stop-chunking?>
|
|
<title>Scheduling</title>
|
|
<para>Compute uses the <systemitem class="service"
|
|
>nova-scheduler</systemitem> service to determine how to
|
|
dispatch compute requests. For example, the
|
|
<systemitem class="service">nova-scheduler</systemitem>
|
|
service determines on which host a VM should launch. In the
|
|
context of filters, the term <firstterm>host</firstterm> means
|
|
a physical node that has a <systemitem class="service"
|
|
>nova-compute</systemitem> service running on it. You can
|
|
configure the scheduler through a variety of options.</para>
|
|
<para>Compute is configured with the following default scheduler
|
|
options in the <filename>/etc/nova/nova.conf</filename>
|
|
file:</para>
|
|
<programlisting language="ini">
|
|
scheduler_driver_task_period = 60
|
|
scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
|
|
scheduler_available_filters = nova.scheduler.filters.all_filters
|
|
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, DiskFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter</programlisting>
|
|
<para>By default, the <option>scheduler_driver</option> is
|
|
configured as a filter scheduler, as described in the next
|
|
section. In the default configuration, this scheduler
|
|
considers hosts that meet all the following criteria:</para>
|
|
<itemizedlist>
|
|
<listitem>
|
|
<para>Have not been attempted for scheduling purposes
|
|
(<literal>RetryFilter</literal>).</para>
|
|
</listitem>
|
|
<listitem>
|
|
<para>Are in the requested availability zone
|
|
(<literal>AvailabilityZoneFilter</literal>).</para>
|
|
</listitem>
|
|
<listitem>
|
|
<para>Have sufficient RAM available
|
|
(<literal>RamFilter</literal>).</para>
|
|
</listitem>
|
|
<listitem>
|
|
<para>Have sufficient disk space available for root and ephemeral storage
|
|
(<literal>DiskFilter</literal>).</para>
|
|
</listitem>
|
|
<listitem>
|
|
<para>Can service the request
|
|
(<literal>ComputeFilter</literal>).</para>
|
|
</listitem>
|
|
<listitem>
|
|
<para>Satisfy the extra specs associated with the instance
|
|
type
|
|
(<literal>ComputeCapabilitiesFilter</literal>).</para>
|
|
</listitem>
|
|
<listitem>
|
|
<para>Satisfy any architecture, hypervisor type, or
|
|
virtual machine mode properties specified on the
|
|
instance's image properties
|
|
(<literal>ImagePropertiesFilter</literal>).</para>
|
|
</listitem>
|
|
<listitem>
|
|
<para>
|
|
Are on a different host than other instances of a group
|
|
(if requested)
|
|
(<literal>ServerGroupAntiAffinityFilter</literal>).</para>
|
|
</listitem>
|
|
<listitem>
|
|
<para>
|
|
Are in a set of group hosts (if requested)
|
|
(<literal>ServerGroupAffinityFilter</literal>).</para>
|
|
</listitem>
|
|
</itemizedlist>
|
|
<para>The scheduler caches its list of available hosts; use the
|
|
<option>scheduler_driver_task_period</option> option to
|
|
specify how often the list is updated.</para>
|
|
<note>
|
|
<para>Do not configure <option>service_down_time</option> to
|
|
be much smaller than
|
|
<option>scheduler_driver_task_period</option>;
|
|
otherwise, hosts appear to be dead while the host list is
|
|
being cached.</para>
|
|
</note>
|
|
<para>For information about the volume scheduler, see the Block
|
|
Storage section of <link
|
|
xlink:href="http://docs.openstack.org/admin-guide-cloud/blockstorage-manage-volumes.html">
|
|
<citetitle>OpenStack Cloud Administrator
|
|
Guide</citetitle></link>.</para>
|
|
<para>The scheduler chooses a new host when an instance is
|
|
migrated.</para>
|
|
<para>When evacuating instances from a host, the scheduler service
|
|
honors the target host defined by the administrator on the evacuate
|
|
command. If a target is not defined by the administrator, the
|
|
scheduler determines the target host. For information about
|
|
instance evacuation, see <link
|
|
xlink:href="http://docs.openstack.org/admin-guide-cloud/compute-node-down.html#evacuate-instances"
|
|
>Evacuate instances</link> section of the
|
|
<citetitle>OpenStack Cloud Administrator
|
|
Guide</citetitle>.</para>
|
|
<section xml:id="filter-scheduler">
|
|
<title>Filter scheduler</title>
|
|
<para>The filter scheduler
|
|
(<literal>nova.scheduler.filter_scheduler.FilterScheduler</literal>)
|
|
is the default scheduler for scheduling virtual machine
|
|
instances. It supports filtering and weighting to make
|
|
informed decisions on where a new instance should be
|
|
created.</para>
|
|
</section>
|
|
<section xml:id="scheduler-filters">
|
|
<?dbhtml stop-chunking?>
|
|
<title>Filters</title>
|
|
<para>When the filter scheduler receives a request for a
|
|
resource, it first applies filters to determine which
|
|
hosts are eligible for consideration when dispatching a
|
|
resource. Filters are binary: either a host is accepted by
|
|
the filter, or it is rejected. Hosts that are accepted by
|
|
the filter are then processed by a different algorithm to
|
|
decide which hosts to use for that request, described in
|
|
the <link linkend="weights">Weights</link> section.</para>
|
|
<figure xml:id="filter-figure">
|
|
<title>Filtering</title>
|
|
<mediaobject>
|
|
<imageobject>
|
|
<imagedata
|
|
fileref="../../common/figures/filteringWorkflow1.png"
|
|
scale="80"/>
|
|
</imageobject>
|
|
</mediaobject>
|
|
</figure>
|
|
<para>The <option>scheduler_available_filters</option>
|
|
configuration option in <filename>nova.conf</filename>
|
|
provides the Compute service with the list of the filters
|
|
that are used by the scheduler. The default setting
|
|
specifies all of the filter that are included with the
|
|
Compute service:</para>
|
|
<programlisting language="ini">scheduler_available_filters = nova.scheduler.filters.all_filters</programlisting>
|
|
<para>This configuration option can be specified multiple
|
|
times. For example, if you implemented your own custom
|
|
filter in Python called
|
|
<literal>myfilter.MyFilter</literal> and you wanted to
|
|
use both the built-in filters and your custom filter, your
|
|
<filename>nova.conf</filename> file would
|
|
contain:</para>
|
|
<programlisting language="ini">scheduler_available_filters = nova.scheduler.filters.all_filters
|
|
scheduler_available_filters = myfilter.MyFilter</programlisting>
|
|
<para>The <literal>scheduler_default_filters</literal>
|
|
configuration option in <filename>nova.conf</filename>
|
|
defines the list of filters that are applied by the
|
|
<systemitem class="service"
|
|
>nova-scheduler</systemitem> service. The default
|
|
filters are:</para>
|
|
<programlisting language="ini">scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter</programlisting>
|
|
<para>The following sections describe the available
|
|
filters.</para>
|
|
<section xml:id="aggregate-corefilter">
|
|
<title>AggregateCoreFilter</title>
|
|
<para>
|
|
Filters host by CPU core numbers with a per-aggregate
|
|
<literal>cpu_allocation_ratio</literal> value. If the
|
|
per-aggregate value is not found, the value falls back
|
|
to the global setting. If the host is in more than one
|
|
aggregate and more than one value is found, the minimum
|
|
value will be used. For information about how to use
|
|
this filter, see <xref linkend="host-aggregates"/>. See
|
|
also <xref linkend="corefilter"/>.
|
|
</para>
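      <para>As an illustration (the aggregate ID and ratio below are example values), you could override the global ratio for the hosts in one aggregate by setting its metadata:</para>
      <screen><prompt>$</prompt> <userinput>nova aggregate-set-metadata 1 cpu_allocation_ratio=2.0</userinput></screen>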
    </section>
    <section xml:id="aggregate-diskfilter">
      <title>AggregateDiskFilter</title>
      <para>Filters hosts by disk allocation with a per-aggregate <literal>disk_allocation_ratio</literal> value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and more than one value is found, the minimum value is used. For information about how to use this filter, see <xref linkend="host-aggregates"/>. See also <xref linkend="diskfilter"/>.</para>
    </section>
    <section xml:id="aggregate-imagepropertiesisolationfilter">
      <title>AggregateImagePropertiesIsolation</title>
      <para>Matches properties defined in an image's metadata against those of aggregates to determine host matches:</para>
      <itemizedlist>
        <listitem><para>If a host belongs to an aggregate and the aggregate defines one or more metadata items that match an image's properties, that host is a candidate to boot the image's instance.</para></listitem>
        <listitem><para>If a host does not belong to any aggregate, it can boot instances from all images.</para></listitem>
      </itemizedlist>
      <para>For example, the following aggregate <systemitem>MyWinAgg</systemitem> has the Windows operating system as metadata (key <literal>os</literal>, value <literal>windows</literal>):</para>
      <screen><prompt>$</prompt> <userinput>nova aggregate-details MyWinAgg</userinput>
<computeroutput>+----+----------+-------------------+------------+---------------+
| Id | Name     | Availability Zone | Hosts      | Metadata      |
+----+----------+-------------------+------------+---------------+
| 1  | MyWinAgg | None              | 'sf-devel' | 'os=windows'  |
+----+----------+-------------------+------------+---------------+</computeroutput></screen>
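      <para>As a sketch of how such metadata could be set (using the aggregate ID from the output above):</para>
      <screen><prompt>$</prompt> <userinput>nova aggregate-set-metadata 1 os=windows</userinput></screen>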
      <para>In this example, because the following Win-2012 image has <literal>windows</literal> as the value of its <literal>os</literal> property, it boots on the <systemitem>sf-devel</systemitem> host (all other filters being equal):</para>
      <screen><prompt>$</prompt> <userinput>glance image-show Win-2012</userinput>
<computeroutput>+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| Property 'os'    | windows                              |
| checksum         | f8a2eeee2dc65b3d9b6e63678955bd83     |
| container_format | ami                                  |
| created_at       | 2013-11-14T13:24:25                  |
| ...</computeroutput></screen>
      <para>You can configure the <systemitem>AggregateImagePropertiesIsolation</systemitem> filter by using the following options in the <filename>nova.conf</filename> file:</para>
      <programlisting language="ini"># Considers only keys matching the given namespace (string).
# Multiple values can be given, as a comma-separated list.
aggregate_image_properties_isolation_namespace = &lt;None&gt;

# Separator used between the namespace and keys (string).
aggregate_image_properties_isolation_separator = .</programlisting>
    </section>
    <section xml:id="aggregate-instanceextraspecsfilter">
      <title>AggregateInstanceExtraSpecsFilter</title>
      <para>Matches properties defined in extra specs for an instance type against admin-defined properties on a host aggregate. Works with specifications that are scoped with <literal>aggregate_instance_extra_specs</literal>. Multiple values can be given, as a comma-separated list. For backward compatibility, the filter also works with non-scoped specifications; this is highly discouraged because it conflicts with the <link linkend="computecapabilitiesfilter">ComputeCapabilitiesFilter</link> filter when you enable both filters. For information about how to use this filter, see the <link linkend="host-aggregates">host aggregates</link> section.</para>
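      <para>A minimal sketch (the aggregate ID, flavor name, and metadata key are example values): tag an aggregate and a flavor with a matching scoped key so that instances of that flavor land only on hosts in that aggregate:</para>
      <screen><prompt>$</prompt> <userinput>nova aggregate-set-metadata 1 ssd=true</userinput>
<prompt>$</prompt> <userinput>nova flavor-key ssd.large set aggregate_instance_extra_specs:ssd=true</userinput></screen>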
    </section>
    <section xml:id="aggregate-ioopsfilter">
      <title>AggregateIoOpsFilter</title>
      <para>Filters hosts by I/O operations with a per-aggregate <literal>max_io_ops_per_host</literal> value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and more than one value is found, the minimum value is used. For information about how to use this filter, see <xref linkend="host-aggregates"/>. See also <xref linkend="ioopsfilter"/>.</para>
    </section>
    <section xml:id="aggregate-multi-tenancy-isolation">
      <title>AggregateMultiTenancyIsolation</title>
      <para>Isolates tenants to specific <link linkend="host-aggregates">host aggregates</link>. If a host is in an aggregate that has the <literal>filter_tenant_id</literal> metadata key, the host creates instances from only that tenant or list of tenants. A host can be in different aggregates. If a host does not belong to an aggregate with the metadata key, the host can create instances from all tenants.</para>
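      <para>For example, assuming an aggregate with ID 1 and a tenant ID of <replaceable>TENANT_ID</replaceable>, the isolation metadata could be set as follows:</para>
      <screen><prompt>$</prompt> <userinput>nova aggregate-set-metadata 1 filter_tenant_id=<replaceable>TENANT_ID</replaceable></userinput></screen>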
    </section>
    <section xml:id="aggregate-numinstances-filter">
      <title>AggregateNumInstancesFilter</title>
      <para>Filters hosts by number of instances with a per-aggregate <literal>max_instances_per_host</literal> value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and thus more than one value is found, the minimum value is used. For information about how to use this filter, see <xref linkend="host-aggregates"/>. See also <xref linkend="numinstancesfilter"/>.</para>
    </section>
    <section xml:id="aggregate-ram-filter">
      <title>AggregateRamFilter</title>
      <para>Filters hosts by RAM allocation with a per-aggregate <literal>ram_allocation_ratio</literal> value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and thus more than one value is found, the minimum value is used. For information about how to use this filter, see <xref linkend="host-aggregates"/>. See also <xref linkend="ramfilter"/>.</para>
    </section>
    <section xml:id="aggregate-typeaffinityfilter">
      <title>AggregateTypeAffinityFilter</title>
      <para>This filter passes hosts if no <literal>instance_type</literal> key is set or the <literal>instance_type</literal> aggregate metadata value contains the name of the <literal>instance_type</literal> requested. The value of the <literal>instance_type</literal> metadata entry is a string that may contain either a single <literal>instance_type</literal> name or a comma-separated list of <literal>instance_type</literal> names, such as <literal>m1.nano</literal> or <literal>m1.nano,m1.small</literal>. For information about how to use this filter, see <xref linkend="host-aggregates"/>. See also <xref linkend="typeaffinityfilter"/>.</para>
    </section>
    <section xml:id="allhostsfilter">
      <title>AllHostsFilter</title>
      <para>This is a no-op filter. It does not eliminate any of the available hosts.</para>
    </section>
    <section xml:id="availabilityzonefilter">
      <title>AvailabilityZoneFilter</title>
      <para>Filters hosts by availability zone. You must enable this filter for the scheduler to respect availability zones in requests.</para>
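      <para>For example (the zone name here is an assumption), a user can request a specific zone at boot time, and this filter restricts the candidate hosts to that zone:</para>
      <screen><prompt>$</prompt> <userinput>nova boot --image <replaceable>IMAGE_ID</replaceable> --flavor 1 --availability-zone az1 server-1</userinput></screen>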
    </section>
    <section xml:id="computecapabilitiesfilter">
      <title>ComputeCapabilitiesFilter</title>
      <para>Matches properties defined in extra specs for an instance type against compute capabilities.</para>
      <para>If an extra specs key contains a colon (<literal>:</literal>), anything before the colon is treated as a namespace and anything after the colon is treated as the key to be matched. If a namespace is present and is not <literal>capabilities</literal>, the filter ignores the namespace. For backward compatibility, the filter also treats the entire extra specs key as the key to be matched if no namespace is present; this is highly discouraged because it conflicts with the <link linkend="aggregate-instanceextraspecsfilter">AggregateInstanceExtraSpecsFilter</link> filter when you enable both filters.</para>
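      <para>A sketch of a scoped extra spec (the flavor name and the particular capability key are assumptions; the filter matches whatever capabilities the host reports):</para>
      <screen><prompt>$</prompt> <userinput>nova flavor-key m1.large set capabilities:hypervisor_type=QEMU</userinput></screen>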
    </section>
    <section xml:id="computefilter">
      <title>ComputeFilter</title>
      <para>Passes all hosts that are operational and enabled.</para>
      <para>In general, you should always enable this filter.</para>
    </section>
    <section xml:id="corefilter">
      <title>CoreFilter</title>
      <para>Only schedules instances on hosts if sufficient CPU cores are available. If this filter is not set, the scheduler might over-provision a host based on cores. For example, the virtual cores running on an instance may exceed the physical cores.</para>
      <para>You can configure this filter to enable a fixed amount of vCPU overcommitment by using the <option>cpu_allocation_ratio</option> configuration option in <filename>nova.conf</filename>. The default setting is:</para>
      <programlisting language="ini">cpu_allocation_ratio = 16.0</programlisting>
      <para>With this setting, if 8 vCPUs are on a node, the scheduler allows instances with up to 128 vCPUs in total to be run on that node.</para>
      <para>To disallow vCPU overcommitment, set:</para>
      <programlisting language="ini">cpu_allocation_ratio = 1.0</programlisting>
      <note>
        <para>The Compute API always returns the actual number of CPU cores available on a compute node regardless of the value of the <option>cpu_allocation_ratio</option> configuration key. As a result, changes to <option>cpu_allocation_ratio</option> are not reflected via the command-line clients or the dashboard. Changes to this configuration key are only taken into account internally in the scheduler.</para>
      </note>
    </section>
    <section xml:id="numatopologyfilter">
      <title>NUMATopologyFilter</title>
      <para>Filters hosts based on the NUMA topology that was specified for the instance through the use of flavor <literal>extra_specs</literal> in combination with the image properties, as described in detail in the related nova-spec document, <link xlink:href="http://specs.openstack.org/openstack/nova-specs/specs/juno/implemented/virt-driver-numa-placement.html">virt-driver-numa-placement</link>. The filter tries to match the exact NUMA cells of the instance to those of the host. It considers the standard over-subscription limits for each cell, and provides limits to the compute host accordingly.</para>
      <note>
        <para>If the instance has no topology defined, it is considered for any host. If the instance has a topology defined, it is considered only for NUMA-capable hosts.</para>
      </note>
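      <para>A sketch of how such a topology can be requested through flavor extra specs (the flavor name is an assumption); the <literal>hw:numa_nodes</literal> key asks for the instance to be spread across two NUMA cells:</para>
      <screen><prompt>$</prompt> <userinput>nova flavor-key m1.large set hw:numa_nodes=2</userinput></screen>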
    </section>
    <section xml:id="differenthostfilter">
      <title>DifferentHostFilter</title>
      <para>Schedules the instance on a different host from a set of instances. To take advantage of this filter, the requester must pass a scheduler hint, using <literal>different_host</literal> as the key and a list of instance UUIDs as the value. This filter is the opposite of the <literal>SameHostFilter</literal>. Using the <command>nova</command> command-line tool, use the <literal>--hint</literal> flag. For example:</para>
      <screen><prompt>$</prompt> <userinput>nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 \
  --hint different_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 \
  --hint different_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1</userinput></screen>
      <para>With the API, use the <literal>os:scheduler_hints</literal> key. For example:</para>
      <programlisting language="json"><xi:include href="../../common/samples/server-scheduler-hints.json" parse="text"/></programlisting>
    </section>
    <section xml:id="diskfilter">
      <title>DiskFilter</title>
      <para>Only schedules instances on hosts if there is sufficient disk space available for root and ephemeral storage.</para>
      <para>You can configure this filter to enable a fixed amount of disk overcommitment by using the <literal>disk_allocation_ratio</literal> configuration option in the <filename>nova.conf</filename> configuration file. The default setting disables overcommitment and allows launching a VM only if there is a sufficient amount of disk space available on a host:</para>
      <programlisting language="ini">disk_allocation_ratio = 1.0</programlisting>
      <para>DiskFilter always considers the value of the <option>disk_available_least</option> property and not that of the <option>free_disk_gb</option> property of a hypervisor's statistics:</para>
      <screen><prompt>$</prompt> <userinput>nova hypervisor-stats</userinput>
<computeroutput>+----------------------+-------+
| Property             | Value |
+----------------------+-------+
| count                | 1     |
| current_workload     | 0     |
| disk_available_least | 29    |
| free_disk_gb         | 35    |
| free_ram_mb          | 3441  |
| local_gb             | 35    |
| local_gb_used        | 0     |
| memory_mb            | 3953  |
| memory_mb_used       | 512   |
| running_vms          | 0     |
| vcpus                | 2     |
| vcpus_used           | 0     |
+----------------------+-------+</computeroutput></screen>
      <para>As the command output above shows, the available disk space can be less than the free disk space. This happens because the <option>disk_available_least</option> property accounts for the virtual size rather than the actual size of images. If you use an image format that is sparse or copy-on-write, so that each virtual instance does not require a 1:1 allocation of virtual disk to physical storage, it may be useful to allow the overcommitment of disk space.</para>
      <para>To enable scheduling instances while overcommitting disk resources on the node, set the <literal>disk_allocation_ratio</literal> configuration option to a value greater than <literal>1.0</literal>, for example:</para>
      <programlisting language="ini">disk_allocation_ratio = 2.0</programlisting>
      <note>
        <para>If you set the ratio to a value greater than <literal>1</literal>, we recommend keeping track of the free disk space, because instances that use the disk may fail as the available space approaches <literal>0</literal>.</para>
      </note>
    </section>
    <section xml:id="groupaffinityfilter">
      <title>GroupAffinityFilter</title>
      <note>
        <para>This filter is deprecated in favor of <link linkend="servergroupaffinityfilter">ServerGroupAffinityFilter</link>.</para>
      </note>
      <para>The GroupAffinityFilter ensures that an instance is scheduled on to a host from a set of group hosts. To take advantage of this filter, the requester must pass a scheduler hint, using <literal>group</literal> as the key and an arbitrary name as the value. Using the <command>nova</command> command-line tool, use the <literal>--hint</literal> flag. For example:</para>
      <screen><prompt>$</prompt> <userinput>nova boot --image <replaceable>IMAGE_ID</replaceable> --flavor 1 --hint group=foo server-1</userinput></screen>
      <para>This filter should not be enabled at the same time as <link linkend="groupantiaffinityfilter">GroupAntiAffinityFilter</link>, or neither filter will work properly.</para>
    </section>
    <section xml:id="groupantiaffinityfilter">
      <title>GroupAntiAffinityFilter</title>
      <note>
        <para>This filter is deprecated in favor of <link linkend="servergroupantiaffinityfilter">ServerGroupAntiAffinityFilter</link>.</para>
      </note>
      <para>The GroupAntiAffinityFilter ensures that each instance in a group is on a different host. To take advantage of this filter, the requester must pass a scheduler hint, using <literal>group</literal> as the key and an arbitrary name as the value. Using the <command>nova</command> command-line tool, use the <literal>--hint</literal> flag. For example:</para>
      <screen><prompt>$</prompt> <userinput>nova boot --image <replaceable>IMAGE_ID</replaceable> --flavor 1 --hint group=foo server-1</userinput></screen>
      <para>This filter should not be enabled at the same time as <link linkend="groupaffinityfilter">GroupAffinityFilter</link>, or neither filter will work properly.</para>
    </section>
    <section xml:id="imagepropertiesfilter">
      <title>ImagePropertiesFilter</title>
      <para>Filters hosts based on properties defined on the instance's image. It passes hosts that can support the specified image properties contained in the instance. Properties include the architecture, hypervisor type, hypervisor version (for the Xen hypervisor type only), and virtual machine mode.</para>
      <para>For example, an instance might require a host that runs an ARM-based processor, and QEMU as the hypervisor. You can decorate an image with these properties by using:</para>
      <screen><prompt>$</prompt> <userinput>glance image-update img-uuid --property architecture=arm --property hypervisor_type=qemu</userinput></screen>
      <para>The image properties that the filter checks for are:</para>
      <itemizedlist>
        <listitem>
          <para><literal>architecture</literal>: describes the machine architecture required by the image. Examples are <literal>i686</literal>, <literal>x86_64</literal>, <literal>arm</literal>, and <literal>ppc64</literal>.</para>
        </listitem>
        <listitem>
          <para><literal>hypervisor_type</literal>: describes the hypervisor required by the image. Examples are <literal>xen</literal>, <literal>qemu</literal>, and <literal>xenapi</literal>.</para>
          <note><para><literal>qemu</literal> is used for both QEMU and KVM hypervisor types.</para></note>
        </listitem>
        <listitem>
          <para><literal>hypervisor_version_requires</literal>: describes the hypervisor version required by the image. The property is supported for the Xen hypervisor type only. It can be used to enable support for multiple hypervisor versions, and to prevent instances with newer Xen tools from being provisioned on an older version of a hypervisor. If available, the property value is compared to the hypervisor version of the compute host.</para>
          <para>To filter the hosts by the hypervisor version, add the <literal>hypervisor_version_requires</literal> property on the image as metadata and pass an operator and a required hypervisor version as its value:</para>
          <screen><prompt>$</prompt> <userinput>glance image-update img-uuid --property hypervisor_type=xen --property hypervisor_version_requires=">=4.3"</userinput></screen>
        </listitem>
        <listitem>
          <para><literal>vm_mode</literal>: describes the hypervisor application binary interface (ABI) required by the image. Examples are <literal>xen</literal> for Xen 3.0 paravirtual ABI, <literal>hvm</literal> for native ABI, <literal>uml</literal> for User Mode Linux paravirtual ABI, and <literal>exe</literal> for container virt executable ABI.</para>
        </listitem>
      </itemizedlist>
    </section>
    <section xml:id="isolatedhostsfilter">
      <title>IsolatedHostsFilter</title>
      <para>Allows the admin to define a special (isolated) set of images and a special (isolated) set of hosts, such that the isolated images can only run on the isolated hosts, and the isolated hosts can only run isolated images. The flag <literal>restrict_isolated_hosts_to_isolated_images</literal> can be used to force isolated hosts to only run isolated images.</para>
      <para>The admin must specify the isolated set of images and hosts in the <filename>nova.conf</filename> file using the <literal>isolated_hosts</literal> and <literal>isolated_images</literal> configuration options. For example:</para>
      <programlisting language="ini">isolated_hosts = server1, server2
isolated_images = 342b492c-128f-4a42-8d3a-c5088cf27d13, ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09</programlisting>
    </section>
    <section xml:id="ioopsfilter">
      <title>IoOpsFilter</title>
      <para>The IoOpsFilter filters hosts by the number of concurrent I/O operations on them. Hosts with too many concurrent I/O operations are filtered out. The <option>max_io_ops_per_host</option> option specifies the maximum number of I/O-intensive instances allowed to run on a host. A host is ignored by the scheduler if more than <option>max_io_ops_per_host</option> instances in the build, resize, snapshot, migrate, rescue, or unshelve task states are running on it.</para>
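      <para>For example (the value below is illustrative), to raise the limit, set the option in <filename>nova.conf</filename>:</para>
      <programlisting language="ini">max_io_ops_per_host = 10</programlisting>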
    </section>
    <section xml:id="jsonfilter">
      <title>JsonFilter</title>
      <para>The JsonFilter allows a user to construct a custom filter by passing a scheduler hint in JSON format. The following operators are supported:<itemizedlist>
          <listitem><para>=</para></listitem>
          <listitem><para>&lt;</para></listitem>
          <listitem><para>&gt;</para></listitem>
          <listitem><para>in</para></listitem>
          <listitem><para>&lt;=</para></listitem>
          <listitem><para>&gt;=</para></listitem>
          <listitem><para>not</para></listitem>
          <listitem><para>or</para></listitem>
          <listitem><para>and</para></listitem>
        </itemizedlist>The filter supports the following variables:<itemizedlist>
          <listitem><para><code>$free_ram_mb</code></para></listitem>
          <listitem><para><code>$free_disk_mb</code></para></listitem>
          <listitem><para><code>$total_usable_ram_mb</code></para></listitem>
          <listitem><para><code>$vcpus_total</code></para></listitem>
          <listitem><para><code>$vcpus_used</code></para></listitem>
        </itemizedlist>Using the <command>nova</command> command-line tool, use the <literal>--hint</literal> flag:</para>
      <screen><prompt>$</prompt> <userinput>nova boot --image 827d564a-e636-4fc4-a376-d36f7ebe1747 \
  --flavor 1 --hint query='[">=","$free_ram_mb",1024]' server1</userinput></screen>
      <para>With the API, use the <literal>os:scheduler_hints</literal> key:</para>
      <programlisting language="json"><xi:include href="../../common/samples/server-scheduler-hints2.json" parse="text"/></programlisting>
    </section>
    <section xml:id="metricsfilter">
      <title>MetricsFilter</title>
      <para>Filters hosts based on the meters configured in <literal>weight_setting</literal>. Only hosts that report the required meters are passed, so that the metrics weigher does not fail on hosts where a meter is unavailable.</para>
    </section>
    <section xml:id="numinstancesfilter">
      <title>NumInstancesFilter</title>
      <para>Hosts that have more instances running than specified by the <option>max_instances_per_host</option> option are filtered out when this filter is in place.</para>
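      <para>For example (the value is illustrative), to cap a host at 50 running instances, set the option in <filename>nova.conf</filename>:</para>
      <programlisting language="ini">max_instances_per_host = 50</programlisting>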
    </section>
    <section xml:id="pcipassthroughfilter">
      <title>PciPassthroughFilter</title>
      <para>The filter schedules instances on a host if the host has devices that meet the device requests in the <literal>extra_specs</literal> attribute for the flavor.</para>
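      <para>A sketch of such a device request, assuming a PCI alias named <literal>a1</literal> has already been configured on the compute hosts; the flavor asks for two devices matching that alias:</para>
      <screen><prompt>$</prompt> <userinput>nova flavor-key m1.large set "pci_passthrough:alias"="a1:2"</userinput></screen>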
    </section>
    <section xml:id="ramfilter">
      <title>RamFilter</title>
      <para>Only schedules instances on hosts that have sufficient RAM available. If this filter is not set, the scheduler may over-provision a host based on RAM (for example, the RAM allocated by virtual machine instances may exceed the physical RAM).</para>
      <para>You can configure this filter to enable a fixed amount of RAM overcommitment by using the <literal>ram_allocation_ratio</literal> configuration option in <filename>nova.conf</filename>. The default setting is:</para>
      <programlisting language="ini">ram_allocation_ratio = 1.5</programlisting>
      <para>This setting enables 1.5 GB instances to run on any compute node with 1 GB of free RAM.</para>
    </section>
    <section xml:id="retryfilter">
      <title>RetryFilter</title>
      <para>Filters out hosts that have already been attempted for scheduling purposes. If the scheduler selects a host to respond to a service request, and the host fails to respond to the request, this filter prevents the scheduler from retrying that host for the service request.</para>
      <para>This filter is only useful if the <literal>scheduler_max_attempts</literal> configuration option is set to a value greater than zero.</para>
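      <para>For example (the value is illustrative), to allow up to three scheduling attempts per request, set the option in <filename>nova.conf</filename>:</para>
      <programlisting language="ini">scheduler_max_attempts = 3</programlisting>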
      <para>If there are multiple force hosts or nodes, this filter helps the scheduler retry on those force hosts or nodes if a VM fails to boot.</para>
    </section>
    <section xml:id="samehostfilter">
      <title>SameHostFilter</title>
      <para>Schedules the instance on the same host as another instance in a set of instances. To take advantage of this filter, the requester must pass a scheduler hint, using <literal>same_host</literal> as the key and a list of instance UUIDs as the value. This filter is the opposite of the <literal>DifferentHostFilter</literal>. Using the <command>nova</command> command-line tool, use the <literal>--hint</literal> flag:</para>
      <screen><prompt>$</prompt> <userinput>nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 \
  --hint same_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 \
  --hint same_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1</userinput></screen>
      <para>With the API, use the <literal>os:scheduler_hints</literal> key:</para>
      <programlisting language="json"><xi:include href="../../common/samples/server-scheduler-hints3.json" parse="text"/></programlisting>
    </section>
    <section xml:id="servergroupaffinityfilter">
      <title>ServerGroupAffinityFilter</title>
      <para>The ServerGroupAffinityFilter ensures that an instance is scheduled on to a host from a set of group hosts. To take advantage of this filter, the requester must create a server group with an <literal>affinity</literal> policy, and pass a scheduler hint, using <literal>group</literal> as the key and the server group UUID as the value. Using the <command>nova</command> command-line tool, use the <literal>--hint</literal> flag. For example:</para>
      <screen><prompt>$</prompt> <userinput>nova server-group-create --policy affinity group-1</userinput>
<prompt>$</prompt> <userinput>nova boot --image <replaceable>IMAGE_ID</replaceable> --flavor 1 --hint group=<replaceable>SERVER_GROUP_UUID</replaceable> server-1</userinput></screen>
    </section>
    <section xml:id="servergroupantiaffinityfilter">
      <title>ServerGroupAntiAffinityFilter</title>
      <para>The ServerGroupAntiAffinityFilter ensures that each instance in a group is on a different host. To take advantage of this filter, the requester must create a server group with an <literal>anti-affinity</literal> policy, and pass a scheduler hint, using <literal>group</literal> as the key and the server group UUID as the value. Using the <command>nova</command> command-line tool, use the <literal>--hint</literal> flag. For example:</para>
      <screen><prompt>$</prompt> <userinput>nova server-group-create --policy anti-affinity group-1</userinput>
<prompt>$</prompt> <userinput>nova boot --image <replaceable>IMAGE_ID</replaceable> --flavor 1 --hint group=<replaceable>SERVER_GROUP_UUID</replaceable> server-1</userinput></screen>
    </section>
    <section xml:id="simplecidraffinityfilter">
      <title>SimpleCIDRAffinityFilter</title>
      <para>Schedules the instance based on host IP subnet range. To take advantage of this filter, the requester must specify a range of valid IP addresses in CIDR format, by passing two scheduler hints:</para>
      <variablelist>
        <varlistentry>
          <term><literal>build_near_host_ip</literal></term>
          <listitem><para>The first IP address in the subnet (for example, <literal>192.168.1.1</literal>)</para></listitem>
        </varlistentry>
        <varlistentry>
          <term><literal>cidr</literal></term>
          <listitem><para>The CIDR that corresponds to the subnet (for example, <literal>/24</literal>)</para></listitem>
        </varlistentry>
      </variablelist>
      <para>Using the <command>nova</command> command-line tool, use the <literal>--hint</literal> flag. For example, to specify the IP subnet <literal>192.168.1.1/24</literal>:</para>
      <screen><prompt>$</prompt> <userinput>nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 \
  --hint build_near_host_ip=192.168.1.1 --hint cidr=/24 server-1</userinput></screen>
      <para>With the API, use the <literal>os:scheduler_hints</literal> key:</para>
      <programlisting language="json"><xi:include href="../../common/samples/server-scheduler-hints4.json" parse="text"/></programlisting>
    </section>
    <section xml:id="trustedfilter">
      <title>TrustedFilter</title>
      <para>Filters hosts based on their trust. Only passes hosts that meet the trust requirements specified in the instance properties.</para>
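      <para>A minimal sketch, assuming the trust requirement is expressed as a flavor extra spec with the <literal>trust</literal> scope (the flavor name is an example value):</para>
      <screen><prompt>$</prompt> <userinput>nova flavor-key m1.large set trust:trusted_host=trusted</userinput></screen>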
    </section>
    <section xml:id="typeaffinityfilter">
      <title>TypeAffinityFilter</title>
      <para>Dynamically limits hosts to one instance type. An instance can only be launched on a host if no instances with a different instance type are running on it, or if the host has no running instances at all.</para>
    </section>
  </section>
  <section xml:id="weights">
    <title>Weights</title>
    <?dbhtml stop-chunking?>
    <para>When resourcing instances, the filter scheduler filters and weights each host in the list of acceptable hosts. Each time the scheduler selects a host, it virtually consumes resources on it, and subsequent selections are adjusted accordingly. This process is useful when a customer requests a large number of instances at once, because a weight is computed for each requested instance.</para>
    <para>All weights are normalized before being summed up; the host with the largest weight is given the highest priority.</para>
    <figure xml:id="figure_weighting-hosts">
      <title>Weighting hosts</title>
      <mediaobject>
        <imageobject>
          <imagedata fileref="../../common/figures/nova-weighting-hosts.png"/>
        </imageobject>
      </mediaobject>
    </figure>
    <para>If cells are used, cells are weighted by the scheduler in the same manner as hosts.</para>
    <para>Hosts and cells are weighted based on the following options in the <filename>/etc/nova/nova.conf</filename> file:</para>
    <table rules="all" xml:id="table_host-weighting-options">
      <caption>Host weighting options</caption>
      <col width="10%" title="Section"/>
      <col width="25%" title="Option"/>
      <col width="60%" title="Description"/>
      <thead>
        <tr>
          <th>Section</th>
          <th>Option</th>
          <th>Description</th>
        </tr>
      </thead>
      <tbody>
        <tr valign="top">
          <td>[DEFAULT]</td>
          <td><literal>ram_weight_multiplier</literal></td>
          <td>By default, the scheduler spreads instances across all hosts evenly. Set the <option>ram_weight_multiplier</option> option to a negative number if you prefer stacking instead of spreading. Use a floating-point value.</td>
        </tr>
        <tr valign="top">
          <td>[DEFAULT]</td>
          <td><literal>scheduler_host_subset_size</literal></td>
          <td>New instances are scheduled on a host that is chosen randomly from a subset of the N best hosts. This property defines the subset size from which a host is chosen. A value of 1 chooses the first host returned by the weighting functions. This value must be at least 1. A value less than 1 is ignored, and 1 is used instead. Use an integer value.</td>
        </tr>
        <tr valign="top">
          <td>[DEFAULT]</td>
          <td><literal>scheduler_weight_classes</literal></td>
          <td>Defaults to <literal>nova.scheduler.weights.all_weighers</literal>, which selects the RamWeigher and MetricsWeigher. Hosts are then weighted and sorted with the largest weight winning.</td>
        </tr>
        <tr valign="top">
          <td>[DEFAULT]</td>
          <td><literal>io_ops_weight_multiplier</literal></td>
          <td>Multiplier used for weighing host I/O operations. A negative value means a preference to choose compute hosts with a light workload.</td>
        </tr>
        <tr valign="top">
          <td>[metrics]</td>
          <td><literal>weight_multiplier</literal></td>
          <td>Multiplier for weighting meters. Use a floating-point value.</td>
        </tr>
        <tr valign="top">
          <td>[metrics]</td>
          <td><literal>weight_setting</literal></td>
          <td>Determines how meters are weighted. Use a comma-separated list of metricName=ratio. For example, "name1=1.0, name2=-1.0" results in: <literal>name1.value * 1.0 + name2.value * -1.0</literal></td>
        </tr>
        <tr valign="top">
          <td>[metrics]</td>
          <td><literal>required</literal></td>
          <td><para>Specifies how to treat unavailable meters:<itemizedlist>
              <listitem><para>True: raises an exception. To avoid the exception, use the scheduler filter <literal>MetricsFilter</literal> to filter out hosts with unavailable meters.</para></listitem>
              <listitem><para>False: treated as a negative factor in the weighting process (uses the <option>weight_of_unavailable</option> option).</para></listitem>
            </itemizedlist></para></td>
        </tr>
        <tr valign="top">
          <td>[metrics]</td>
          <td><literal>weight_of_unavailable</literal></td>
          <td>If <option>required</option> is set to False, and any one of the meters set by <option>weight_setting</option> is unavailable, the <option>weight_of_unavailable</option> value is returned to the scheduler.</td>
        </tr>
      </tbody>
    </table>
    <para>For example:</para>
    <programlisting language="ini">[DEFAULT]
scheduler_host_subset_size = 1
scheduler_weight_classes = nova.scheduler.weights.all_weighers
ram_weight_multiplier = 1.0
io_ops_weight_multiplier = 2.0
[metrics]
weight_multiplier = 1.0
weight_setting = name1=1.0, name2=-1.0
required = false
weight_of_unavailable = -10000.0</programlisting>
    <table rules="all" xml:id="table_cell-weighting-options">
      <caption>Cell weighting options</caption>
      <col width="10%" title="Section"/>
      <col width="25%" title="Option"/>
      <col width="60%" title="Description"/>
      <thead>
        <tr>
          <th>Section</th>
          <th>Option</th>
          <th>Description</th>
        </tr>
      </thead>
      <tbody>
        <tr valign="top">
          <td>[cells]</td>
          <td><literal>mute_weight_multiplier</literal></td>
          <td>Multiplier to weight mute children (hosts which have not sent capacity or capability updates for some time). Use a negative, floating-point value.</td>
        </tr>
        <tr valign="top">
          <td>[cells]</td>
          <td><literal>offset_weight_multiplier</literal></td>
          <td>Multiplier to weight cells, so you can specify a preferred cell. Use a floating-point value.</td>
        </tr>
        <tr valign="top">
          <td>[cells]</td>
          <td><literal>ram_weight_multiplier</literal></td>
          <td>By default, the scheduler spreads instances across all cells evenly. Set the <option>ram_weight_multiplier</option> option to a negative number if you prefer stacking instead of spreading. Use a floating-point value.</td>
        </tr>
        <tr valign="top">
          <td>[cells]</td>
          <td><literal>scheduler_weight_classes</literal></td>
          <td>Defaults to <literal>nova.cells.weights.all_weighers</literal>, which maps to all cell weighers included with Compute. Cells are then weighted and sorted with the largest weight winning.</td>
        </tr>
      </tbody>
    </table>
    <para>For example:</para>
    <programlisting language="ini">[cells]
scheduler_weight_classes = nova.cells.weights.all_weighers
mute_weight_multiplier = -10.0
ram_weight_multiplier = 1.0
offset_weight_multiplier = 1.0</programlisting>
  </section>
  <section xml:id="chance-scheduler">
    <title>Chance scheduler</title>
    <?dbhtml stop-chunking?>
    <para>As an administrator, you work with the filter scheduler. However, the Compute service also uses the Chance Scheduler, <literal>nova.scheduler.chance.ChanceScheduler</literal>, which randomly selects from lists of filtered hosts.</para>
  </section>
  <section xml:id="utilization-aware-scheduling">
    <title>Utilization aware scheduling</title>
    <?dbhtml stop-chunking?>
    <para>It is possible to schedule VMs using advanced scheduling decisions. These decisions are made based on enhanced usage statistics encompassing data like memory cache utilization, memory bandwidth utilization, or network bandwidth utilization. This is disabled by default. The administrator can configure how the metrics are weighted by using the <literal>weight_setting</literal> configuration option in the <filename>nova.conf</filename> configuration file. For example, to configure metric1 with ratio1 and metric2 with ratio2:</para>
    <programlisting language="ini">weight_setting = "metric1=ratio1, metric2=ratio2"</programlisting>
  </section>
  <xi:include href="../../common/section_cli_nova_host_aggregates.xml"/>
  <section xml:id="compute-scheduler-config-ref">
    <title>Configuration reference</title>
    <para>To customize the Compute scheduler, use the configuration option settings documented in <xref linkend="config_table_nova_scheduler"/>.</para>
  </section>
</section>