diff --git a/doc/src/docbkx/openstack-compute-admin/computescheduler.xml b/doc/src/docbkx/openstack-compute-admin/computescheduler.xml
index 22af9ae8b9..f95a9b54e0 100644
--- a/doc/src/docbkx/openstack-compute-admin/computescheduler.xml
+++ b/doc/src/docbkx/openstack-compute-admin/computescheduler.xml
@@ -79,28 +79,22 @@ ram_weight_multiplier=1.0
that will be used by the scheduler. The default setting
                    specifies all of the filters that are included with the
Compute service:
-
-scheduler_available_filters=nova.scheduler.filters.all_filters
- This
- configuration option can be specified multiple times. For
+ scheduler_available_filters=nova.scheduler.filters.all_filters
+ This configuration option can be specified multiple times. For
example, if you implemented your own custom filter in
Python called myfilter.MyFilter and you
wanted to use both the built-in filters and your custom
filter, your nova.conf file would
contain:
-
-scheduler_available_filters=nova.scheduler.filters.all_filters
-scheduler_available_filters=myfilter.MyFilter
-
+ scheduler_available_filters=nova.scheduler.filters.all_filters
+scheduler_available_filters=myfilter.MyFilter
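A custom filter like the hypothetical myfilter.MyFilter above would, in a real deployment, subclass nova.scheduler.filters.BaseHostFilter. The sketch below stubs that base class with a minimal stand-in so the host_passes() hook can be shown standalone; the 512 MB free-RAM threshold and the filter_all() helper are purely illustrative, not nova's actual implementation:

```python
# Stand-in for nova.scheduler.filters.BaseHostFilter (illustration only).
class BaseHostFilter:
    def filter_all(self, hosts, filter_properties):
        # Keep only the hosts that pass this filter's check.
        return [h for h in hosts if self.host_passes(h, filter_properties)]

class MyFilter(BaseHostFilter):
    """Hypothetical filter: accept hosts with at least 512 MB of free RAM."""
    def host_passes(self, host_state, filter_properties):
        return host_state.get('free_ram_mb', 0) >= 512

hosts = [{'name': 'node1', 'free_ram_mb': 2048},
         {'name': 'node2', 'free_ram_mb': 128}]
passing = MyFilter().filter_all(hosts, {})
print([h['name'] for h in passing])   # only node1 passes
```

Registering the class via scheduler_available_filters, as shown above, is what makes it selectable in scheduler_default_filters.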
The scheduler_default_filters
configuration option in nova.conf
defines the list of filters that will be applied by the
nova-scheduler service. As
mentioned above, the default filters are:
-
-scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
-
+ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
The available filters are described below.
@@ -201,16 +195,12 @@ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
Configuration option in
nova.conf. The default setting
is:
-
- cpu_allocation_ratio=16.0
-
+ cpu_allocation_ratio=16.0
                    With this setting, if a node has 8 physical vCPUs, the
                    scheduler will allow instances totaling up to 128 vCPUs
                    to run on that node.
To disallow vCPU overcommitment set:
-
- cpu_allocation_ratio=1.0
-
+ cpu_allocation_ratio=1.0
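The arithmetic behind the ratio is simple multiplication; a small sketch (the function name is ours, not nova's):

```python
# Virtual CPU capacity implied by cpu_allocation_ratio: with 8 physical
# vCPUs and the default ratio of 16.0, the scheduler admits up to 128 vCPUs.
def schedulable_vcpus(physical_vcpus, cpu_allocation_ratio):
    return int(physical_vcpus * cpu_allocation_ratio)

print(schedulable_vcpus(8, 16.0))  # 128
print(schedulable_vcpus(8, 1.0))   # 8 -- no overcommitment
```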
@@ -229,7 +219,7 @@ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
With the API, use the
os:scheduler_hints key. For
example:
-
+ {
{
'server': {
'name': 'server-1',
@@ -240,8 +230,7 @@ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
'different_host': ['a0cf03a5-d921-4877-bb5c-86d26cf818e1',
'8c19174f-4220-44f0-824a-cd1eeef10287'],
}
-}
-
+}
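Building that request body programmatically amounts to serializing a plain dict. A partial sketch using the example UUIDs from the text (other server fields such as image and flavor references are omitted here for brevity):

```python
import json

# Partial request body carrying the different_host scheduler hint; the
# UUIDs are the example values from the text, not real instances.
body = {
    'server': {
        'name': 'server-1',
    },
    'os:scheduler_hints': {
        'different_host': ['a0cf03a5-d921-4877-bb5c-86d26cf818e1',
                           '8c19174f-4220-44f0-824a-cd1eeef10287'],
    },
}

payload = json.dumps(body)
print(payload)
```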
@@ -289,7 +278,8 @@ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
require a host that runs an ARM-based processor and
QEMU as the hypervisor. An image can be decorated with
these properties using
- glance image-update img-uuid --property architecture=arm --property hypervisor_type=qemu
+
+$ glance image-update img-uuid --property architecture=arm --property hypervisor_type=qemu
The image properties that the filter checks for
are:
@@ -329,10 +319,8 @@ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
using the isolated_hosts and
isolated_images configuration
options. For example:
-
-isolated_hosts=server1,server2
-isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09
-
+ isolated_hosts=server1,server2
+isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09
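Assuming the filter pairs the two lists symmetrically -- isolated images may only run on isolated hosts, and isolated hosts may only run isolated images -- the check can be sketched as a simple set-membership comparison (the function name is ours):

```python
# Values from the configuration example above.
isolated_hosts = {'server1', 'server2'}
isolated_images = {'342b492c-128f-4a42-8d3a-c5088cf27d13',
                   'ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09'}

def host_passes(host, image_id):
    # Pass only when host and image are both isolated or both non-isolated.
    return (host in isolated_hosts) == (image_id in isolated_images)

print(host_passes('server1', '342b492c-128f-4a42-8d3a-c5088cf27d13'))  # True
print(host_passes('server3', '342b492c-128f-4a42-8d3a-c5088cf27d13'))  # False
```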
@@ -391,7 +379,7 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1
1 --hint query='[">=","$free_ram_mb",1024]' server1
With the API, use the
os:scheduler_hints key:
-
+ {
{
'server': {
'name': 'server-1',
@@ -401,8 +389,7 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1
'os:scheduler_hints': {
'query': '[">=","$free_ram_mb",1024]',
}
-}
-
+}
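A query such as `[">=","$free_ram_mb",1024]` is just a JSON triple of operator, host-state variable, and value. A minimal evaluator sketch (the real JsonFilter supports additional operators such as and, or, and not; this stand-in handles only simple comparisons):

```python
import json

def evaluate(query, host_state):
    # Resolve the "$variable" against the host's state, then compare.
    op, var, value = query
    current = host_state[var.lstrip('$')]
    ops = {'>=': current >= value, '<=': current <= value,
           '>': current > value, '<': current < value,
           '=': current == value}
    return ops[op]

query = json.loads('[">=","$free_ram_mb",1024]')
print(evaluate(query, {'free_ram_mb': 2048}))  # True
print(evaluate(query, {'free_ram_mb': 512}))   # False
```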
@@ -418,9 +405,7 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1
configuration option in
nova.conf. The default setting
is:
-
-ram_allocation_ratio=1.5
-
+ ram_allocation_ratio=1.5
                    With this setting, if a node has 1GB of free RAM, the
                    scheduler will allow instances totaling up to 1.5GB of
                    RAM to run on that node.
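As with the CPU ratio, this is a straight multiplication; a small sketch in megabytes (the function name is ours, not nova's):

```python
# RAM capacity implied by ram_allocation_ratio: with 1024 MB of free RAM
# and the default ratio of 1.5, the scheduler admits up to 1536 MB.
def schedulable_ram_mb(free_ram_mb, ram_allocation_ratio):
    return int(free_ram_mb * ram_allocation_ratio)

print(schedulable_ram_mb(1024, 1.5))  # 1536
print(schedulable_ram_mb(1024, 1.0))  # 1024 -- no RAM overcommitment
```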
@@ -455,7 +440,7 @@ ram_allocation_ratio=1.5
$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 --hint same_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 --hint same_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1
With the API, use the
os:scheduler_hints key:
-
+ {
{
'server': {
'name': 'server-1',
@@ -466,8 +451,7 @@ ram_allocation_ratio=1.5
'same_host': ['a0cf03a5-d921-4877-bb5c-86d26cf818e1',
'8c19174f-4220-44f0-824a-cd1eeef10287'],
}
-}
-
+}
@@ -502,7 +486,7 @@ ram_allocation_ratio=1.5
With the API, use the
os:scheduler_hints key:
-
+ {
{
'server': {
'name': 'server-1',
@@ -513,8 +497,7 @@ ram_allocation_ratio=1.5
'build_near_host_ip': '192.168.1.1',
'cidr': '24'
}
-}
-
+}
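The build_near_host_ip and cidr hints describe a subnet-membership test: a candidate host passes when its IP falls in the same network as the given address. The check can be sketched with Python's standard ipaddress module (the function name is ours):

```python
import ipaddress

# 192.168.1.1 with a /24 prefix, as in the hint above; strict=False lets
# us derive the network (192.168.1.0/24) from a host address.
network = ipaddress.ip_network('192.168.1.1/24', strict=False)

def host_passes(host_ip):
    return ipaddress.ip_address(host_ip) in network

print(host_passes('192.168.1.42'))  # True  -- same /24
print(host_passes('192.168.2.42'))  # False -- different subnet
```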
@@ -529,10 +512,8 @@ ram_allocation_ratio=1.5
which selects the only weigher available -- the
RamWeigher. Hosts are then weighed and sorted with the
largest weight winning.
-
-scheduler_weight_classes=nova.scheduler.weights.all_weighers
-ram_weight_multiplier=1.0
-
+ scheduler_weight_classes=nova.scheduler.weights.all_weighers
+ram_weight_multiplier=1.0
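With the settings above, each host's weight is effectively ram_weight_multiplier times its free RAM, and hosts are sorted with the largest weight first. A simplified sketch (the real RamWeigher also normalizes weights; the host names and RAM figures here are made up):

```python
# Free RAM per candidate host, in MB (illustrative values).
hosts = {'node1': 512, 'node2': 2048, 'node3': 1024}

def ranked(multiplier):
    # weight = multiplier * free_ram_mb; largest weight wins.
    return sorted(hosts, key=lambda h: multiplier * hosts[h], reverse=True)

print(ranked(1.0))    # most free RAM first: spreads instances out
print(ranked(-1.0))   # least free RAM first: stacks instances together
```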
The default is to spread instances across all hosts
evenly. Set the ram_weight_multiplier
option to a negative number if you prefer stacking instead