---
- import_playbook: gather-facts.yml

# NOTE(mgoddard): In large environments, even tasks that are skipped can take a
# significant amount of time. This is an optimisation to prevent any tasks
# running in the subsequent plays for services that are disabled.
- name: Group hosts based on configuration
  hosts: all
  gather_facts: false
  tasks:
    - name: Group hosts based on Kolla action
      group_by:
        key: "kolla_action_{{ kolla_action }}"

    - name: Group hosts based on enabled services
      group_by:
        key: "{{ item }}"
      with_items:
        - enable_aodh_{{ enable_aodh | bool }}
        - enable_barbican_{{ enable_barbican | bool }}
        - enable_blazar_{{ enable_blazar | bool }}
        - enable_ceilometer_{{ enable_ceilometer | bool }}
        - enable_chrony_{{ enable_chrony | bool }}
        - enable_cinder_{{ enable_cinder | bool }}
        - enable_cloudkitty_{{ enable_cloudkitty | bool }}
        - enable_collectd_{{ enable_collectd | bool }}
        - enable_cyborg_{{ enable_cyborg | bool }}
        - enable_designate_{{ enable_designate | bool }}
        - enable_elasticsearch_{{ enable_elasticsearch | bool }}
        - enable_etcd_{{ enable_etcd | bool }}
        - enable_freezer_{{ enable_freezer | bool }}
        - enable_glance_{{ enable_glance | bool }}
        - enable_gnocchi_{{ enable_gnocchi | bool }}
        - enable_grafana_{{ enable_grafana | bool }}
        - enable_haproxy_{{ enable_haproxy | bool }}
        - enable_heat_{{ enable_heat | bool }}
        - enable_horizon_{{ enable_horizon | bool }}
        - enable_hyperv_{{ enable_hyperv | bool }}
        - enable_influxdb_{{ enable_influxdb | bool }}
        - enable_ironic_{{ enable_ironic | bool }}
        - enable_iscsid_{{ enable_iscsid | bool }}
        - enable_kafka_{{ enable_kafka | bool }}
        - enable_karbor_{{ enable_karbor | bool }}
        - enable_keystone_{{ enable_keystone | bool }}
        - enable_kibana_{{ enable_kibana | bool }}
        - enable_kuryr_{{ enable_kuryr | bool }}
        - enable_magnum_{{ enable_magnum | bool }}
        - enable_manila_{{ enable_manila | bool }}
        - enable_mariadb_{{ enable_mariadb | bool }}
        - enable_masakari_{{ enable_masakari | bool }}
        - enable_memcached_{{ enable_memcached | bool }}
        - enable_mistral_{{ enable_mistral | bool }}
        - enable_monasca_{{ enable_monasca | bool }}
        - enable_multipathd_{{ enable_multipathd | bool }}
        - enable_murano_{{ enable_murano | bool }}
        - enable_neutron_{{ enable_neutron | bool }}
        - enable_nova_{{ enable_nova | bool }}
        - enable_octavia_{{ enable_octavia | bool }}
        - enable_openvswitch_{{ enable_openvswitch | bool }}_enable_ovs_dpdk_{{ enable_ovs_dpdk | bool }}
        - enable_outward_rabbitmq_{{ enable_outward_rabbitmq | bool }}
        - enable_ovn_{{ enable_ovn | bool }}
        - enable_panko_{{ enable_panko | bool }}
        - enable_placement_{{ enable_placement | bool }}
        - enable_prometheus_{{ enable_prometheus | bool }}
        - enable_qdrouterd_{{ enable_qdrouterd | bool }}
        - enable_qinling_{{ enable_qinling | bool }}
        - enable_rabbitmq_{{ enable_rabbitmq | bool }}
        - enable_rally_{{ enable_rally | bool }}
        - enable_redis_{{ enable_redis | bool }}
        - enable_sahara_{{ enable_sahara | bool }}
        - enable_searchlight_{{ enable_searchlight | bool }}
        - enable_senlin_{{ enable_senlin | bool }}
        - enable_skydive_{{ enable_skydive | bool }}
        - enable_solum_{{ enable_solum | bool }}
        - enable_storm_{{ enable_storm | bool }}
        - enable_swift_{{ enable_swift | bool }}
        - enable_tacker_{{ enable_tacker | bool }}
        - enable_telegraf_{{ enable_telegraf | bool }}
        - enable_tempest_{{ enable_tempest | bool }}
        - enable_trove_{{ enable_trove | bool }}
        - enable_vitrage_{{ enable_vitrage | bool }}
        - enable_vmtp_{{ enable_vmtp | bool }}
        - enable_watcher_{{ enable_watcher | bool }}
        - enable_zookeeper_{{ enable_zookeeper | bool }}
        - enable_zun_{{ enable_zun | bool }}
  tags: always
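
# The tasks above only build inventory groups; the speed-up comes from how the
# plays below consume them. kolla_action_{{ kolla_action }} yields a group such
# as kolla_action_deploy or kolla_action_precheck, and each service play
# intersects its static inventory group with the dynamic enable_<service>_True
# group, so a disabled service leaves the host pattern empty and its play is a
# noop. Illustrative sketch only (the real plays below follow this shape):
#
#   - name: Apply role foo
#     hosts:
#       - foo
#       - '&enable_foo_True'
#     roles:
#       - { role: foo,
#           when: enable_foo | bool }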

- name: Apply role prechecks
  gather_facts: false
  # Apply only when kolla action is 'precheck'.
  hosts: kolla_action_precheck
  roles:
    - role: prechecks
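
# The kolla_action_precheck group above only has members when the playbook is
# run with kolla_action=precheck (typically via `kolla-ansible prechecks`), so
# this play is a noop for every other action.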

- name: Apply role chrony
  gather_facts: false
  hosts:
    - chrony-server
    - chrony
    - '&enable_chrony_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: chrony,
        tags: chrony,
        when: enable_chrony | bool }
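
# A note on the serial keyword used by most plays below: unless kolla_serial
# is set, the default of "0" applies and Ansible runs all hosts in one batch.
# Illustrative invocation only (assuming kolla_serial is overridable like any
# other extra variable):
#
#   kolla-ansible deploy -e kolla_serial=30%   # roll through 30% of hosts per batch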

- name: Apply role haproxy
  gather_facts: false
  hosts:
    - haproxy
    - '&enable_haproxy_True'
  serial: '{{ kolla_serial|default("0") }}'
  tags:
    - haproxy
  roles:
    - { role: haproxy,
        when: enable_haproxy | bool }
  tasks:
    - block:
        - include_role:
            name: aodh
            tasks_from: loadbalancer
          tags: aodh
          when: enable_aodh | bool

        - include_role:
            name: barbican
            tasks_from: loadbalancer
          tags: barbican
          when: enable_barbican | bool

        - include_role:
            name: blazar
            tasks_from: loadbalancer
          tags: blazar
          when: enable_blazar | bool

        - include_role:
            name: cinder
            tasks_from: loadbalancer
          tags: cinder
          when: enable_cinder | bool

        - include_role:
            name: cloudkitty
            tasks_from: loadbalancer
          tags: cloudkitty
          when: enable_cloudkitty | bool

        - include_role:
            name: cyborg
            tasks_from: loadbalancer
          tags: cyborg
          when: enable_cyborg | bool

        - include_role:
            name: designate
            tasks_from: loadbalancer
          tags: designate
          when: enable_designate | bool

        - include_role:
            name: elasticsearch
            tasks_from: loadbalancer
          tags: elasticsearch
          when: enable_elasticsearch | bool

        - include_role:
            name: freezer
            tasks_from: loadbalancer
          tags: freezer
          when: enable_freezer | bool

        - include_role:
            name: glance
            tasks_from: loadbalancer
          tags: glance
          when: enable_glance | bool

        - include_role:
            name: gnocchi
            tasks_from: loadbalancer
          tags: gnocchi
          when: enable_gnocchi | bool

        - include_role:
            name: grafana
            tasks_from: loadbalancer
          tags: grafana
          when: enable_grafana | bool

        - include_role:
            name: heat
            tasks_from: loadbalancer
          tags: heat
          when: enable_heat | bool

        - include_role:
            name: horizon
            tasks_from: loadbalancer
          tags: horizon
          when: enable_horizon | bool

        - include_role:
            name: influxdb
            tasks_from: loadbalancer
          tags: influxdb
          when: enable_influxdb | bool

        - include_role:
            name: ironic
            tasks_from: loadbalancer
          tags: ironic
          when: enable_ironic | bool

        - include_role:
            name: karbor
            tasks_from: loadbalancer
          tags: karbor
          when: enable_karbor | bool

        - include_role:
            name: keystone
            tasks_from: loadbalancer
          tags: keystone
          when: enable_keystone | bool

        - include_role:
            name: kibana
            tasks_from: loadbalancer
          tags: kibana
          when: enable_kibana | bool

        - include_role:
            name: magnum
            tasks_from: loadbalancer
          tags: magnum
          when: enable_magnum | bool

        - include_role:
            name: manila
            tasks_from: loadbalancer
          tags: manila
          when: enable_manila | bool

        - include_role:
            name: mariadb
            tasks_from: loadbalancer
          tags: mariadb
          when: enable_mariadb | bool or enable_external_mariadb_load_balancer | bool

        - include_role:
            name: masakari
            tasks_from: loadbalancer
          tags: masakari
          when: enable_masakari | bool

        - include_role:
            name: memcached
            tasks_from: loadbalancer
          tags: memcached
          when: enable_memcached | bool

        - include_role:
            name: mistral
            tasks_from: loadbalancer
          tags: mistral
          when: enable_mistral | bool

        - include_role:
            name: monasca
            tasks_from: loadbalancer
          tags: monasca
          when: enable_monasca | bool

        - include_role:
            name: murano
            tasks_from: loadbalancer
          tags: murano
          when: enable_murano | bool

        - include_role:
            name: neutron
            tasks_from: loadbalancer
          tags: neutron
          when: enable_neutron | bool

        - include_role:
            name: placement
            tasks_from: loadbalancer
          tags: placement

        - include_role:
            name: nova
            tasks_from: loadbalancer
          tags:
            - nova
            - nova-api
          when: enable_nova | bool

        - include_role:
            name: nova-cell
            tasks_from: loadbalancer
          tags:
            - nova
            - nova-cell
          when: enable_nova | bool

        - include_role:
            name: octavia
            tasks_from: loadbalancer
          tags: octavia
          when: enable_octavia | bool

        - include_role:
            name: panko
            tasks_from: loadbalancer
          tags: panko
          when: enable_panko | bool

        - include_role:
            name: prometheus
            tasks_from: loadbalancer
          tags: prometheus
          when: enable_prometheus | bool

        - include_role:
            name: qinling
            tasks_from: loadbalancer
          tags: qinling
          when: enable_qinling | bool

        - include_role:
            name: rabbitmq
            tasks_from: loadbalancer
          tags: rabbitmq
          vars:
            role_rabbitmq_cluster_cookie:
            role_rabbitmq_groups:
          when: enable_rabbitmq | bool or enable_outward_rabbitmq | bool

        - include_role:
            name: sahara
            tasks_from: loadbalancer
          tags: sahara
          when: enable_sahara | bool

        - include_role:
            name: searchlight
            tasks_from: loadbalancer
          tags: searchlight
          when: enable_searchlight | bool

        - include_role:
            name: senlin
            tasks_from: loadbalancer
          tags: senlin
          when: enable_senlin | bool

        - include_role:
            name: skydive
            tasks_from: loadbalancer
          tags: skydive
          when: enable_skydive | bool

        - include_role:
            name: solum
            tasks_from: loadbalancer
          tags: solum
          when: enable_solum | bool

        - include_role:
            name: swift
            tasks_from: loadbalancer
          tags: swift
          when: enable_swift | bool

        - include_role:
            name: tacker
            tasks_from: loadbalancer
          tags: tacker
          when: enable_tacker | bool

        - include_role:
            name: trove
            tasks_from: loadbalancer
          tags: trove
          when: enable_trove | bool

        - include_role:
            name: vitrage
            tasks_from: loadbalancer
          tags: vitrage
          when: enable_vitrage | bool

        - include_role:
            name: watcher
            tasks_from: loadbalancer
          tags: watcher
          when: enable_watcher | bool

        - include_role:
            name: zun
            tasks_from: loadbalancer
          tags: zun
          when: enable_zun | bool
      when:
        - enable_haproxy | bool
        - kolla_action in ['deploy', 'reconfigure', 'upgrade', 'config']
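
# The per-service tags on the loadbalancer tasks above make it possible to
# refresh only part of the HAProxy configuration. Illustrative invocation only
# (kolla-ansible passes --tags through to ansible-playbook):
#
#   kolla-ansible deploy --tags haproxy,glance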

- name: Apply role collectd
  gather_facts: false
  hosts:
    - collectd
    - '&enable_collectd_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: collectd,
        tags: collectd,
        when: enable_collectd | bool }

- name: Apply role zookeeper
  gather_facts: false
  hosts:
    - zookeeper
    - '&enable_zookeeper_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: zookeeper,
        tags: zookeeper,
        when: enable_zookeeper | bool }

- name: Apply role elasticsearch
  gather_facts: false
  hosts:
    - elasticsearch
    - '&enable_elasticsearch_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: elasticsearch,
        tags: elasticsearch,
        when: enable_elasticsearch | bool }

- name: Apply role influxdb
  gather_facts: false
  hosts:
    - influxdb
    - '&enable_influxdb_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: influxdb,
        tags: influxdb,
        when: enable_influxdb | bool }

- name: Apply role telegraf
  gather_facts: false
  hosts:
    - telegraf
    - '&enable_telegraf_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: telegraf,
        tags: telegraf,
        when: enable_telegraf | bool }

- name: Apply role redis
  gather_facts: false
  hosts:
    - redis
    - '&enable_redis_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: redis,
        tags: redis,
        when: enable_redis | bool }

- name: Apply role kibana
  gather_facts: false
  hosts:
    - kibana
    - '&enable_kibana_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: kibana,
        tags: kibana,
        when: enable_kibana | bool }

- name: Apply role mariadb
  gather_facts: false
  hosts:
    - mariadb
    - '&enable_mariadb_True'
  roles:
    - { role: mariadb,
        tags: mariadb,
        when: enable_mariadb | bool }

- name: Apply role memcached
  gather_facts: false
  hosts:
    - memcached
    - '&enable_memcached_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: memcached,
        tags: [memcache, memcached],
        when: enable_memcached | bool }

- name: Apply role prometheus
  gather_facts: false
  hosts:
    - prometheus
    - prometheus-node-exporter
    - prometheus-mysqld-exporter
    - prometheus-haproxy-exporter
    - prometheus-cadvisor
    - prometheus-alertmanager
    - prometheus-openstack-exporter
    - prometheus-elasticsearch-exporter
    - prometheus-blackbox-exporter
    - '&enable_prometheus_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: prometheus,
        tags: prometheus,
        when: enable_prometheus | bool }

- name: Apply role iscsi
  gather_facts: false
  hosts:
    - iscsid
    - tgtd
    - '&enable_iscsid_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: iscsi,
        tags: iscsi,
        when: enable_iscsid | bool }

- name: Apply role multipathd
  gather_facts: false
  hosts:
    - multipathd
    - '&enable_multipathd_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: multipathd,
        tags: multipathd,
        when: enable_multipathd | bool }

- name: Apply role rabbitmq
  gather_facts: false
  hosts:
    - rabbitmq
    - '&enable_rabbitmq_True'
  roles:
    - { role: rabbitmq,
        tags: rabbitmq,
        role_rabbitmq_cluster_cookie: '{{ rabbitmq_cluster_cookie }}',
        role_rabbitmq_cluster_port: '{{ rabbitmq_cluster_port }}',
        role_rabbitmq_epmd_port: '{{ rabbitmq_epmd_port }}',
        role_rabbitmq_groups: rabbitmq,
        role_rabbitmq_management_port: '{{ rabbitmq_management_port }}',
        role_rabbitmq_monitoring_password: '{{ rabbitmq_monitoring_password }}',
        role_rabbitmq_monitoring_user: '{{ rabbitmq_monitoring_user }}',
        role_rabbitmq_password: '{{ rabbitmq_password }}',
        role_rabbitmq_port: '{{ rabbitmq_port }}',
        role_rabbitmq_user: '{{ rabbitmq_user }}',
        when: enable_rabbitmq | bool }

- name: Apply role rabbitmq (outward)
  gather_facts: false
  hosts:
    - outward-rabbitmq
    - '&enable_outward_rabbitmq_True'
  roles:
    - { role: rabbitmq,
        tags: rabbitmq,
        project_name: outward_rabbitmq,
        role_rabbitmq_cluster_cookie: '{{ outward_rabbitmq_cluster_cookie }}',
        role_rabbitmq_cluster_port: '{{ outward_rabbitmq_cluster_port }}',
        role_rabbitmq_epmd_port: '{{ outward_rabbitmq_epmd_port }}',
        role_rabbitmq_groups: outward-rabbitmq,
        role_rabbitmq_management_port: '{{ outward_rabbitmq_management_port }}',
        role_rabbitmq_password: '{{ outward_rabbitmq_password }}',
        role_rabbitmq_port: '{{ outward_rabbitmq_port }}',
        role_rabbitmq_user: '{{ outward_rabbitmq_user }}',
        when: enable_outward_rabbitmq | bool }
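
# Descriptive note: the play above reuses the same rabbitmq role for a second,
# independent "outward" cluster by overriding project_name and the
# role_rabbitmq_* parameters with outward_* values; the variable handling
# itself lives in the rabbitmq role.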

- name: Apply role qdrouterd
  gather_facts: false
  hosts:
    - qdrouterd
    - '&enable_qdrouterd_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: qdrouterd,
        tags: qdrouterd,
        when: enable_qdrouterd | bool }

- name: Apply role etcd
  gather_facts: false
  hosts:
    - etcd
    - '&enable_etcd_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: etcd,
        tags: etcd,
        when: enable_etcd | bool }
|
|
|
|
|
2016-11-30 16:23:36 +01:00
|
|
|
- name: Apply role keystone
|
2017-02-24 11:59:15 +00:00
|
|
|
gather_facts: false
|
2018-09-14 16:51:56 -06:00
|
|
|
hosts:
|
|
|
|
- keystone
|
|
|
|
- '&enable_keystone_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2015-05-02 22:45:36 +08:00
|
|
|
roles:
|
2015-11-03 05:47:47 +00:00
|
|
|
- { role: keystone,
|
|
|
|
tags: keystone,
|
|
|
|
when: enable_keystone | bool }
|
2015-07-04 12:47:45 +00:00
|
|
|
|
2018-02-26 12:02:19 +00:00
|
|
|
- name: Apply role kafka
|
|
|
|
gather_facts: false
|
2018-09-14 16:51:56 -06:00
|
|
|
hosts:
|
|
|
|
- kafka
|
|
|
|
- '&enable_kafka_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2018-02-26 12:02:19 +00:00
|
|
|
roles:
|
|
|
|
- { role: kafka,
|
|
|
|
tags: kafka,
|
|
|
|
when: enable_kafka | bool }
|
|
|
|
|
2018-06-11 16:50:01 +01:00
|
|
|
- name: Apply role storm
|
2018-11-17 14:42:38 -05:00
|
|
|
gather_facts: false
|
2018-11-06 14:54:20 +00:00
|
|
|
hosts:
|
|
|
|
- storm-worker
|
|
|
|
- storm-nimbus
|
2020-06-26 14:42:16 +01:00
|
|
|
- '&enable_storm_True'
|
2018-06-11 16:50:01 +01:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
|
|
|
roles:
|
|
|
|
- { role: storm,
|
|
|
|
tags: storm,
|
|
|
|
when: enable_storm | bool }
|
|
|
|
|
2016-11-25 06:21:35 +08:00
|
|
|
- name: Apply role karbor
|
2017-02-24 11:59:15 +00:00
|
|
|
gather_facts: false
|
2018-09-14 16:51:56 -06:00
|
|
|
hosts:
|
|
|
|
- karbor
|
|
|
|
- '&enable_karbor_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2016-11-25 06:21:35 +08:00
|
|
|
roles:
|
|
|
|
- { role: karbor,
|
|
|
|
tags: karbor,
|
|
|
|
when: enable_karbor | bool }
|
|
|
|
|
2016-11-30 16:23:36 +01:00
|
|
|
- name: Apply role swift
|
2017-02-24 11:59:15 +00:00
|
|
|
gather_facts: false
|
2016-11-30 16:23:36 +01:00
|
|
|
hosts:
|
2015-10-16 07:58:34 +00:00
|
|
|
- swift-account-server
|
|
|
|
- swift-container-server
|
|
|
|
- swift-object-server
|
|
|
|
- swift-proxy-server
|
2018-09-14 16:51:56 -06:00
|
|
|
- '&enable_swift_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2015-08-18 14:05:54 +00:00
|
|
|
roles:
|
2015-11-03 05:47:47 +00:00
|
|
|
- { role: swift,
|
|
|
|
tags: swift,
|
|
|
|
when: enable_swift | bool }
|
2015-08-18 14:05:54 +00:00
|
|
|
|
2016-11-30 16:23:36 +01:00
|
|
|
- name: Apply role glance
|
2017-02-24 11:59:15 +00:00
|
|
|
gather_facts: false
|
2016-11-30 16:23:36 +01:00
|
|
|
hosts:
|
2015-11-03 05:47:47 +00:00
|
|
|
- glance-api
|
2018-09-14 16:51:56 -06:00
|
|
|
- '&enable_glance_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2015-07-04 12:47:45 +00:00
|
|
|
roles:
|
2015-11-03 05:47:47 +00:00
|
|
|
- { role: glance,
|
|
|
|
tags: glance,
|
|
|
|
when: enable_glance | bool }
|
2015-07-12 03:02:33 +00:00
|
|
|
|
2016-11-30 16:23:36 +01:00
|
|
|
- name: Apply role ironic
|
2017-02-24 11:59:15 +00:00
|
|
|
gather_facts: false
|
2016-11-30 16:23:36 +01:00
|
|
|
hosts:
|
2016-10-18 10:58:48 +08:00
|
|
|
- ironic-api
|
|
|
|
- ironic-conductor
|
|
|
|
- ironic-inspector
|
|
|
|
- ironic-pxe
|
2018-09-14 16:51:56 -06:00
|
|
|
- '&enable_ironic_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2016-10-18 10:58:48 +08:00
|
|
|
roles:
|
|
|
|
- { role: ironic,
|
|
|
|
tags: ironic,
|
|
|
|
when: enable_ironic | bool }
|
|
|
|
|
2017-03-07 22:03:50 +08:00
|
|
|
- name: Apply role cinder
|
|
|
|
gather_facts: false
|
|
|
|
hosts:
|
|
|
|
- cinder-api
|
|
|
|
- cinder-backup
|
|
|
|
- cinder-scheduler
|
|
|
|
- cinder-volume
|
2018-09-14 16:51:56 -06:00
|
|
|
- '&enable_cinder_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2017-03-07 22:03:50 +08:00
|
|
|
roles:
|
|
|
|
- { role: cinder,
|
|
|
|
tags: cinder,
|
|
|
|
when: enable_cinder | bool }
|
|
|
|
|
2018-10-26 18:13:48 +02:00
|
|
|
- name: Apply role placement
|
|
|
|
gather_facts: false
|
|
|
|
hosts:
|
|
|
|
- placement-api
|
|
|
|
- '&enable_placement_True'
|
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
|
|
|
roles:
|
|
|
|
- { role: placement,
|
|
|
|
tags: placement,
|
|
|
|
when: enable_placement | bool }
|
|
|
|
|
2019-08-19 15:52:46 +01:00
|
|
|
# Nova deployment is more complicated than other services, so is covered in its
|
|
|
|
# own playbook.
|
|
|
|
- import_playbook: nova.yml
|
2015-07-13 07:32:29 +00:00
|
|
|
|
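# NOTE: the next two plays use combined groups such as
# enable_openvswitch_True_enable_ovs_dpdk_False. These are presumably created
# by the "Group hosts based on enabled services" task at the start of the
# playbook with an item along these lines (a sketch, not a verbatim excerpt):
#
#   - enable_openvswitch_{{ enable_openvswitch | bool }}_enable_ovs_dpdk_{{ enable_ovs_dpdk | bool }}
#
# so any given host lands in exactly one of the two groups and only one of
# the openvswitch / ovs-dpdk plays matches it.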
2017-03-31 10:42:02 -07:00
|
|
|
- name: Apply role openvswitch
|
2018-01-31 10:56:57 +08:00
|
|
|
gather_facts: false
|
2017-03-31 10:42:02 -07:00
|
|
|
hosts:
|
|
|
|
- openvswitch
|
2018-09-14 16:51:56 -06:00
|
|
|
- '&enable_openvswitch_True_enable_ovs_dpdk_False'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2017-03-31 10:42:02 -07:00
|
|
|
roles:
|
|
|
|
- { role: openvswitch,
|
|
|
|
tags: openvswitch,
|
2017-04-06 13:21:09 +00:00
|
|
|
when: "(enable_openvswitch | bool) and not (enable_ovs_dpdk | bool)"}
|
|
|
|
|
|
|
|
- name: Apply role ovs-dpdk
|
2018-01-31 10:56:57 +08:00
|
|
|
gather_facts: false
|
2017-04-06 13:21:09 +00:00
|
|
|
hosts:
|
|
|
|
- openvswitch
|
2018-09-14 16:51:56 -06:00
|
|
|
- '&enable_openvswitch_True_enable_ovs_dpdk_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2017-04-06 13:21:09 +00:00
|
|
|
roles:
|
|
|
|
- { role: ovs-dpdk,
|
|
|
|
tags: ovs-dpdk,
|
|
|
|
when: "(enable_openvswitch | bool) and (enable_ovs_dpdk | bool)"}
|
2017-03-31 10:42:02 -07:00
|
|
|
|
2019-12-20 11:35:35 +01:00
|
|
|
- name: Apply role ovn
|
|
|
|
gather_facts: false
|
|
|
|
hosts:
|
|
|
|
- ovn-controller
|
|
|
|
- ovn-nb-db
|
|
|
|
- ovn-northd
|
|
|
|
- ovn-sb-db
|
|
|
|
- '&enable_ovn_True'
|
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
|
|
|
roles:
|
|
|
|
- { role: ovn,
|
|
|
|
tags: ovn,
|
|
|
|
when: enable_ovn | bool }
|
|
|
|
|
2017-09-29 11:40:47 +02:00
|
|
|
- name: Apply role nova-hyperv
|
2017-05-30 14:16:34 +03:00
|
|
|
gather_facts: false
|
|
|
|
hosts:
|
|
|
|
- hyperv
|
2018-09-14 16:51:56 -06:00
|
|
|
- '&enable_hyperv_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2017-05-30 14:16:34 +03:00
|
|
|
roles:
|
|
|
|
- { role: nova-hyperv,
|
|
|
|
tags: nova-hyperv,
|
|
|
|
when: enable_hyperv | bool }
|
|
|
|
|
2017-09-29 11:39:01 +02:00
|
|
|
# NOTE(gmmaha): Please do not change the order listed here. The current order is a
|
2016-04-01 13:55:14 -07:00
|
|
|
# workaround to fix the bug https://bugs.launchpad.net/kolla/+bug/1546789
|
2016-11-30 16:23:36 +01:00
|
|
|
- name: Apply role neutron
|
2017-02-24 11:59:15 +00:00
|
|
|
gather_facts: false
|
2016-11-30 16:23:36 +01:00
|
|
|
hosts:
|
2016-03-31 04:04:27 -04:00
|
|
|
- neutron-server
|
2016-01-26 19:50:43 +00:00
|
|
|
- neutron-dhcp-agent
|
|
|
|
- neutron-l3-agent
|
2018-07-18 15:20:08 +00:00
|
|
|
- ironic-neutron-agent
|
2016-01-26 19:50:43 +00:00
|
|
|
- neutron-metadata-agent
|
2019-12-20 11:35:35 +01:00
|
|
|
- neutron-ovn-metadata-agent
|
2017-04-15 19:51:16 +08:00
|
|
|
- neutron-metering-agent
|
2016-03-31 04:04:27 -04:00
|
|
|
- compute
|
|
|
|
- manila-share
|
2018-09-14 16:51:56 -06:00
|
|
|
- '&enable_neutron_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2015-07-13 07:32:29 +00:00
|
|
|
roles:
|
2015-11-03 05:47:47 +00:00
|
|
|
- { role: neutron,
|
|
|
|
tags: neutron,
|
|
|
|
when: enable_neutron | bool }
|
2015-08-04 07:39:22 +00:00
|
|
|
|
2017-02-13 14:06:24 -08:00
|
|
|
- name: Apply role kuryr
|
2017-02-24 11:59:15 +00:00
|
|
|
gather_facts: false
|
2017-02-13 14:06:24 -08:00
|
|
|
hosts:
|
|
|
|
- compute
|
2018-09-14 16:51:56 -06:00
|
|
|
- '&enable_kuryr_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2017-02-13 14:06:24 -08:00
|
|
|
roles:
|
|
|
|
- { role: kuryr,
|
|
|
|
tags: kuryr,
|
|
|
|
when: enable_kuryr | bool }
|
|
|
|
|
2016-11-30 16:23:36 +01:00
|
|
|
- name: Apply role heat
|
2017-02-24 11:59:15 +00:00
|
|
|
gather_facts: false
|
2016-11-30 16:23:36 +01:00
|
|
|
hosts:
|
2015-11-03 05:47:47 +00:00
|
|
|
- heat-api
|
|
|
|
- heat-api-cfn
|
|
|
|
- heat-engine
|
2018-09-14 16:51:56 -06:00
|
|
|
- '&enable_heat_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2015-08-02 12:26:30 -07:00
|
|
|
roles:
|
2015-11-03 05:47:47 +00:00
|
|
|
- { role: heat,
|
|
|
|
tags: heat,
|
|
|
|
when: enable_heat | bool }
|
2015-08-02 12:26:30 -07:00
|
|
|
|
2016-11-30 16:23:36 +01:00
|
|
|
- name: Apply role horizon
|
2017-02-24 11:59:15 +00:00
|
|
|
gather_facts: false
|
2016-11-30 16:23:36 +01:00
|
|
|
hosts:
|
2016-03-19 16:42:26 +00:00
|
|
|
- horizon
|
2018-09-14 16:51:56 -06:00
|
|
|
- '&enable_horizon_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2015-08-10 14:08:59 -04:00
|
|
|
roles:
|
2015-11-03 05:47:47 +00:00
|
|
|
- { role: horizon,
|
|
|
|
tags: horizon,
|
|
|
|
when: enable_horizon | bool }
|
2015-08-28 10:49:29 +01:00
|
|
|
|
2016-11-30 16:23:36 +01:00
|
|
|
- name: Apply role murano
|
2017-02-24 11:59:15 +00:00
|
|
|
gather_facts: false
|
2016-11-30 16:23:36 +01:00
|
|
|
hosts:
|
2015-11-03 05:47:47 +00:00
|
|
|
- murano-api
|
|
|
|
- murano-engine
|
2018-09-14 16:51:56 -06:00
|
|
|
- '&enable_murano_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2015-08-28 10:49:29 +01:00
|
|
|
roles:
|
2015-11-03 05:47:47 +00:00
|
|
|
- { role: murano,
|
|
|
|
tags: murano,
|
|
|
|
when: enable_murano | bool }
|
2015-08-28 11:26:40 -04:00
|
|
|
|
2016-11-30 16:23:36 +01:00
|
|
|
- name: Apply role solum
|
2017-02-24 11:59:15 +00:00
|
|
|
gather_facts: false
|
2016-11-30 16:23:36 +01:00
|
|
|
hosts:
|
2016-11-25 06:14:51 +08:00
|
|
|
- solum-api
|
|
|
|
- solum-worker
|
|
|
|
- solum-deployer
|
|
|
|
- solum-conductor
|
2018-09-14 16:51:56 -06:00
|
|
|
- '&enable_solum_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2016-11-25 06:14:51 +08:00
|
|
|
roles:
|
|
|
|
- { role: solum,
|
|
|
|
tags: solum,
|
|
|
|
when: enable_solum | bool }
|
|
|
|
|
2016-11-30 16:23:36 +01:00
|
|
|
- name: Apply role magnum
|
2017-02-24 11:59:15 +00:00
|
|
|
gather_facts: false
|
2016-11-30 16:23:36 +01:00
|
|
|
hosts:
|
2015-10-17 18:13:51 +02:00
|
|
|
- magnum-api
|
|
|
|
- magnum-conductor
|
2018-09-14 16:51:56 -06:00
|
|
|
- '&enable_magnum_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2015-10-17 18:13:51 +02:00
|
|
|
roles:
|
|
|
|
- { role: magnum,
|
|
|
|
tags: magnum,
|
|
|
|
when: enable_magnum | bool }
|
2015-12-28 08:38:30 +09:00
|
|
|
|
2016-11-30 16:23:36 +01:00
|
|
|
- name: Apply role mistral
|
2017-02-24 11:59:15 +00:00
|
|
|
gather_facts: false
|
2016-11-30 16:23:36 +01:00
|
|
|
hosts:
|
2015-12-28 08:38:30 +09:00
|
|
|
- mistral-api
|
|
|
|
- mistral-engine
|
|
|
|
- mistral-executor
|
2018-09-14 16:51:56 -06:00
|
|
|
- '&enable_mistral_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2015-12-28 08:38:30 +09:00
|
|
|
roles:
|
|
|
|
- { role: mistral,
|
|
|
|
tags: mistral,
|
|
|
|
when: enable_mistral | bool }
|
2016-01-18 21:00:45 +00:00
|
|
|
|
2019-05-22 13:22:14 -04:00
|
|
|
- name: Apply role qinling
|
|
|
|
gather_facts: false
|
|
|
|
hosts:
|
|
|
|
- qinling-api
|
|
|
|
- qinling-engine
|
|
|
|
- '&enable_qinling_True'
|
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
|
|
|
roles:
|
|
|
|
- { role: qinling,
|
|
|
|
tags: qinling,
|
|
|
|
when: enable_qinling | bool }
|
|
|
|
|
2016-11-30 16:23:36 +01:00
|
|
|
- name: Apply role sahara
|
2017-02-24 11:59:15 +00:00
|
|
|
gather_facts: false
|
2016-11-30 16:23:36 +01:00
|
|
|
hosts:
|
2016-08-04 17:20:20 +00:00
|
|
|
- sahara-api
|
|
|
|
- sahara-engine
|
2018-09-14 16:51:56 -06:00
|
|
|
- '&enable_sahara_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2016-08-04 17:20:20 +00:00
|
|
|
roles:
|
|
|
|
- { role: sahara,
|
|
|
|
tags: sahara,
|
|
|
|
when: enable_sahara | bool }
|
|
|
|
|
2017-02-27 17:01:55 +08:00
|
|
|
- name: Apply role panko
|
2017-02-24 11:59:15 +00:00
|
|
|
gather_facts: false
|
2018-09-14 16:51:56 -06:00
|
|
|
hosts:
|
|
|
|
- panko-api
|
|
|
|
- '&enable_panko_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2017-02-27 17:01:55 +08:00
|
|
|
roles:
|
|
|
|
- { role: panko,
|
|
|
|
tags: panko,
|
|
|
|
when: enable_panko | bool }
|
|
|
|
|
2016-11-30 16:23:36 +01:00
|
|
|
- name: Apply role manila
|
2017-02-24 11:59:15 +00:00
|
|
|
gather_facts: false
|
2016-11-30 16:23:36 +01:00
|
|
|
hosts:
|
2016-03-01 10:46:48 -05:00
|
|
|
- manila-api
|
2016-10-20 17:19:47 -03:00
|
|
|
- manila-data
|
2016-03-01 10:46:48 -05:00
|
|
|
- manila-share
|
|
|
|
- manila-scheduler
|
2018-09-14 16:51:56 -06:00
|
|
|
- '&enable_manila_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2016-03-01 10:46:48 -05:00
|
|
|
roles:
|
|
|
|
- { role: manila,
|
|
|
|
tags: manila,
|
|
|
|
when: enable_manila | bool }
|
2016-03-29 13:25:43 -04:00
|
|
|
|
2016-11-30 16:23:36 +01:00
|
|
|
- name: Apply role gnocchi
|
2017-02-24 11:59:15 +00:00
|
|
|
gather_facts: false
|
2016-11-30 16:23:36 +01:00
|
|
|
hosts:
|
2016-08-01 05:17:43 +00:00
|
|
|
- gnocchi-api
|
|
|
|
- gnocchi-metricd
|
|
|
|
- gnocchi-statsd
|
2018-09-14 16:51:56 -06:00
|
|
|
- '&enable_gnocchi_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2016-08-01 05:17:43 +00:00
|
|
|
roles:
|
|
|
|
- { role: gnocchi,
|
|
|
|
tags: gnocchi,
|
|
|
|
when: enable_gnocchi | bool }
|
|
|
|
|
2016-11-30 16:23:36 +01:00
|
|
|
- name: Apply role ceilometer
|
2017-02-24 11:59:15 +00:00
|
|
|
gather_facts: false
|
2017-03-13 13:26:43 +08:00
|
|
|
vars_files:
|
|
|
|
- "roles/panko/defaults/main.yml"
|
2016-11-30 16:23:36 +01:00
|
|
|
hosts:
|
2018-05-30 10:58:02 -04:00
|
|
|
- ceilometer-central
|
|
|
|
- ceilometer-notification
|
|
|
|
- ceilometer-compute
|
2018-04-29 13:15:02 +08:00
|
|
|
- ceilometer-ipmi
|
2018-09-14 16:51:56 -06:00
|
|
|
- '&enable_ceilometer_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2016-03-29 13:25:43 -04:00
|
|
|
roles:
|
|
|
|
- { role: ceilometer,
|
|
|
|
tags: ceilometer,
|
|
|
|
when: enable_ceilometer | bool }
|
2016-05-26 20:58:03 +08:00
|
|
|
|
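# NOTE: the ceilometer play above loads roles/panko/defaults/main.yml via
# vars_files, presumably because the ceilometer role references panko
# variables (for example when publishing events to panko) that are otherwise
# only defined in that role's defaults.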
2018-03-28 17:54:19 +01:00
|
|
|
- name: Apply role monasca
|
|
|
|
gather_facts: false
|
2018-07-26 16:51:14 +01:00
|
|
|
hosts:
|
|
|
|
- monasca
|
|
|
|
- monasca-agent
|
2018-07-18 22:59:29 +08:00
|
|
|
- monasca-api
|
|
|
|
- monasca-grafana
|
|
|
|
- monasca-log-transformer
|
|
|
|
- monasca-log-persister
|
|
|
|
- monasca-log-metrics
|
|
|
|
- monasca-thresh
|
|
|
|
- monasca-notification
|
|
|
|
- monasca-persister
|
2018-09-14 16:51:56 -06:00
|
|
|
- '&enable_monasca_True'
|
2018-03-28 17:54:19 +01:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
|
|
|
roles:
|
|
|
|
- { role: monasca,
|
|
|
|
tags: monasca,
|
|
|
|
when: enable_monasca | bool }
|
|
|
|
|
2016-11-30 16:23:36 +01:00
|
|
|
- name: Apply role aodh
|
2017-02-24 11:59:15 +00:00
|
|
|
gather_facts: false
|
2016-11-30 16:23:36 +01:00
|
|
|
hosts:
|
2018-10-04 10:53:49 -04:00
|
|
|
- aodh-api
|
|
|
|
- aodh-evaluator
|
|
|
|
- aodh-listener
|
|
|
|
- aodh-notifier
|
2018-09-14 16:51:56 -06:00
|
|
|
- '&enable_aodh_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2016-08-04 06:51:11 +00:00
|
|
|
roles:
|
|
|
|
- { role: aodh,
|
|
|
|
tags: aodh,
|
|
|
|
when: enable_aodh | bool }
|
|
|
|
|
2016-11-30 16:23:36 +01:00
|
|
|
- name: Apply role barbican
|
2017-02-24 11:59:15 +00:00
|
|
|
gather_facts: false
|
2016-11-30 16:23:36 +01:00
|
|
|
hosts:
|
2016-08-08 16:48:11 +00:00
|
|
|
- barbican-api
|
|
|
|
- barbican-keystone-listener
|
|
|
|
- barbican-worker
|
2018-09-14 16:51:56 -06:00
|
|
|
- '&enable_barbican_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2016-08-08 16:48:11 +00:00
|
|
|
roles:
|
|
|
|
- { role: barbican,
|
|
|
|
tags: barbican,
|
|
|
|
when: enable_barbican | bool }
|
|
|
|
|
2018-12-02 21:17:07 +08:00
|
|
|
- name: Apply role cyborg
|
|
|
|
gather_facts: false
|
|
|
|
hosts:
|
|
|
|
- cyborg
|
|
|
|
- '&enable_cyborg_True'
|
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
|
|
|
roles:
|
|
|
|
- { role: cyborg,
|
|
|
|
tags: cyborg,
|
|
|
|
when: enable_cyborg | bool }
|
|
|
|
|
2016-11-30 16:23:36 +01:00
|
|
|
- name: Apply role tempest
|
2017-02-24 11:59:15 +00:00
|
|
|
gather_facts: false
|
2016-11-30 16:23:36 +01:00
|
|
|
hosts:
|
2016-05-26 20:58:03 +08:00
|
|
|
- tempest
|
2018-09-14 16:51:56 -06:00
|
|
|
- '&enable_tempest_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2016-05-26 20:58:03 +08:00
|
|
|
roles:
|
|
|
|
- { role: tempest,
|
|
|
|
tags: tempest,
|
|
|
|
when: enable_tempest | bool }
|
2016-07-05 09:47:21 +01:00
|
|
|
|
2016-08-10 15:48:32 +10:00
|
|
|
- name: Apply role designate
|
2017-02-24 11:59:15 +00:00
|
|
|
gather_facts: false
|
2016-08-10 15:48:32 +10:00
|
|
|
hosts:
|
|
|
|
- designate-api
|
|
|
|
- designate-central
|
2017-12-06 15:14:38 +08:00
|
|
|
- designate-producer
|
2016-08-10 15:48:32 +10:00
|
|
|
- designate-mdns
|
|
|
|
- designate-worker
|
|
|
|
- designate-sink
|
2017-12-06 21:52:38 +08:00
|
|
|
- designate-backend-bind9
|
2018-09-14 16:51:56 -06:00
|
|
|
- '&enable_designate_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2016-08-10 15:48:32 +10:00
|
|
|
roles:
|
|
|
|
- { role: designate,
|
|
|
|
tags: designate,
|
|
|
|
when: enable_designate | bool }
|
|
|
|
|
2016-11-30 16:23:36 +01:00
|
|
|
- name: Apply role rally
|
2017-02-24 11:59:15 +00:00
|
|
|
gather_facts: false
|
2018-09-14 16:51:56 -06:00
|
|
|
hosts:
|
|
|
|
- rally
|
|
|
|
- '&enable_rally_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2016-08-31 07:46:41 +00:00
|
|
|
roles:
|
|
|
|
- { role: rally,
|
|
|
|
tags: rally,
|
|
|
|
when: enable_rally | bool }
|
|
|
|
|
2016-11-30 16:23:36 +01:00
|
|
|
- name: Apply role vmtp
|
2017-02-24 11:59:15 +00:00
|
|
|
gather_facts: false
|
2016-11-30 16:23:36 +01:00
|
|
|
hosts:
|
2016-08-21 01:01:04 -05:00
|
|
|
- vmtp
|
2018-09-14 16:51:56 -06:00
|
|
|
- '&enable_vmtp_True'
|
2018-01-25 11:30:32 +08:00
|
|
|
serial: '{{ kolla_serial|default("0") }}'
|
2016-08-21 01:01:04 -05:00
|
|
|
roles:
|
|
|
|
- { role: vmtp,
|
|
|
|
tags: vmtp,
|
|
|
|
when: enable_vmtp | bool }
|
|
|
|
|
2016-11-30 16:23:36 +01:00
|
|
|

- name: Apply role trove
  gather_facts: false
  hosts:
    - trove-api
    - trove-conductor
    - trove-taskmanager
    - '&enable_trove_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: trove,
        tags: trove,
        when: enable_trove | bool }

- name: Apply role watcher
  gather_facts: false
  hosts:
    - watcher-api
    - watcher-engine
    - watcher-applier
    - '&enable_watcher_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: watcher,
        tags: watcher,
        when: enable_watcher | bool }

- name: Apply role grafana
  gather_facts: false
  hosts:
    - grafana
    - '&enable_grafana_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: grafana,
        tags: grafana,
        when: enable_grafana | bool }

- name: Apply role cloudkitty
  gather_facts: false
  hosts:
    - cloudkitty-api
    - cloudkitty-processor
    - '&enable_cloudkitty_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: cloudkitty,
        tags: cloudkitty,
        when: enable_cloudkitty | bool }

- name: Apply role freezer
  gather_facts: false
  hosts:
    - freezer-api
    - freezer-scheduler
    - '&enable_freezer_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: freezer,
        tags: freezer,
        when: enable_freezer | bool }

- name: Apply role senlin
  gather_facts: false
  hosts:
    - senlin-api
    - senlin-conductor
    - senlin-engine
    - senlin-health-manager
    - '&enable_senlin_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: senlin,
        tags: senlin,
        when: enable_senlin | bool }

- name: Apply role searchlight
  gather_facts: false
  hosts:
    - searchlight-api
    - searchlight-listener
    - '&enable_searchlight_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: searchlight,
        tags: searchlight,
        when: enable_searchlight | bool }

- name: Apply role tacker
  gather_facts: false
  hosts:
    - tacker-server
    - tacker-conductor
    - '&enable_tacker_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: tacker,
        tags: tacker,
        when: enable_tacker | bool }

- name: Apply role octavia
  gather_facts: false
  hosts:
    - octavia-api
    - octavia-health-manager
    - octavia-housekeeping
    - octavia-worker
    - '&enable_octavia_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: octavia,
        tags: octavia,
        when: enable_octavia | bool }

- name: Apply role zun
  gather_facts: false
  hosts:
    - zun-api
    - zun-wsproxy
    - zun-compute
    - zun-cni-daemon
    - '&enable_zun_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: zun,
        tags: zun,
        when: enable_zun | bool }

- name: Apply role skydive
  gather_facts: false
  hosts:
    - skydive-agent
    - skydive-analyzer
    - '&enable_skydive_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: skydive,
        tags: skydive,
        when: enable_skydive | bool }

- name: Apply role vitrage
  gather_facts: false
  hosts:
    - vitrage-api
    - vitrage-graph
    - vitrage-notifier
    - vitrage-ml
    - '&enable_vitrage_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: vitrage,
        tags: vitrage,
        when: enable_vitrage | bool }

- name: Apply role blazar
  gather_facts: false
  hosts:
    - blazar-api
    - blazar-manager
    - '&enable_blazar_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: blazar,
        tags: blazar,
        when: enable_blazar | bool }

- name: Apply role masakari
  gather_facts: false
  hosts:
    - masakari-api
    - masakari-engine
    - masakari-monitors
    - '&enable_masakari_True'
  serial: '{{ kolla_serial|default("0") }}'
  roles:
    - { role: masakari,
        tags: masakari,
        when: enable_masakari | bool }