Scott Solkhon 6496cfc0ba Support for Ceph and Swift storage networks, and improvements to Swift
In a deployment that has both Ceph and Swift deployed, it can be useful to separate the network traffic.
This change adds support for dedicated storage networks for both Ceph and Swift. By default, the storage hosts are
attached to the following networks:

* Overcloud admin network
* Internal network
* Storage network
* Storage management network

This change adds four additional networks, which can be used to separate the storage network traffic as follows (a configuration sketch follows the list):

* Ceph storage network (ceph_storage_net_name) is used to carry Ceph storage
  data traffic. Defaults to the storage network (storage_net_name).
* Ceph storage management network (ceph_storage_mgmt_net_name) is used to carry
  Ceph storage management traffic. Defaults to the storage management network
  (storage_mgmt_net_name).
* Swift storage network (swift_storage_net_name) is used to carry Swift storage data
  traffic. Defaults to the storage network (storage_net_name).
* Swift storage replication network (swift_storage_replication_net_name) is used to
  carry Swift storage replication traffic. Defaults to the storage management network
  (storage_mgmt_net_name).
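
As a sketch of how these variables fit together, they can be pointed at dedicated
networks in the Kayobe configuration. The network names used below, and the idea of
setting them in etc/kayobe/networks.yml, are illustrative assumptions rather than part
of this change:

    ---
    # Example only: attach Ceph and Swift traffic to dedicated networks.
    # The network names on the right-hand side are assumed examples.
    ceph_storage_net_name: ceph_storage_net
    ceph_storage_mgmt_net_name: ceph_storage_mgmt_net
    swift_storage_net_name: swift_storage_net
    swift_storage_replication_net_name: swift_storage_replication_net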

This change also includes several improvements to Swift device management and ring generation.

Device management and ring generation are now separate steps: device management occurs during
'kayobe overcloud host configure', and ring generation during a new command, 'kayobe overcloud swift rings generate'.

For device management, we now use standard Ansible modules rather than commands for device preparation.
File system labels can be configured for each device individually (see the sketch below).
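
As an illustrative sketch only, per-device labels might be expressed as follows; the
variable name swift_block_devices and its fields are assumptions here, not text taken
from this change:

    ---
    # Example only: one entry per Swift block device, with an optional
    # file system label configured per device.
    swift_block_devices:
      - device: /dev/sdb
        fs_label: d1
      - device: /dev/sdc
        fs_label: d2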

For ring generation, all commands are run on a single host, by default a host in the Swift storage group.
A Python script runs in one of the Kolla Swift containers and consumes an autogenerated YAML configuration file that defines
the layout of the rings.
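
A minimal sketch of what such a ring layout file might contain is shown below; the key
names and values are illustrative assumptions, not the exact schema of the generated
configuration:

    ---
    # Example only: illustrative layout for a single ring (e.g. the object ring).
    part_power: 10
    replication_count: 3
    min_part_hours: 1
    hosts:
      - host: swift-storage-01
        region: 1
        zone: 1
        ip: 10.0.3.11
        port: 6000
        devices:
          - d1
          - d2
        weight: 100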

Change-Id: Iedc7535532d706f02d710de69b422abf2f6fe54c
2019-04-24 12:40:20 +00:00


---
# We generate a configuration file and execute a python script in a container
# that builds a ring based on the config file contents. Doing it this way
# avoids a large task loop with docker container for each step, which would be
# quite slow.

# Execute the following commands on the ring build host.
- block:
    # Facts required for ansible_user_uid and ansible_user_gid.
    - name: Gather facts for swift ring build host
      setup:

    - name: Ensure Swift ring build directory exists
      file:
        path: "{{ swift_ring_build_path }}"
        state: directory

    - name: Ensure Swift ring builder script exists
      copy:
        src: swift-ring-builder.py
        dest: "{{ swift_ring_build_path }}"

    - name: Ensure Swift ring builder configuration exists
      template:
        src: swift-ring.yml.j2
        dest: "{{ swift_ring_build_path }}/{{ service_name }}-ring.yml"
      with_items: "{{ swift_service_names }}"
      loop_control:
        loop_var: service_name

    - name: Ensure Swift rings exist
      docker_container:
        cleanup: true
        command: >-
          python {{ swift_container_build_path }}/swift-ring-builder.py
          {{ swift_container_build_path }}/{{ item }}-ring.yml
          {{ swift_container_build_path }}
          {{ item }}
        detach: false
        image: "{{ swift_ring_build_image }}"
        name: "swift_{{ item }}_ring_builder"
        user: "{{ ansible_user_uid }}:{{ ansible_user_gid }}"
        volumes:
          - "{{ swift_ring_build_path }}/:{{ swift_container_build_path }}/"
      with_items: "{{ swift_service_names }}"

    - name: Ensure Swift ring files are copied
      fetch:
        src: "{{ swift_ring_build_path }}/{{ item[0] }}.{{ item[1] }}"
        dest: "{{ swift_config_path }}/{{ item[0] }}.{{ item[1] }}"
        flat: true
        mode: 0644
      with_nested:
        - "{{ swift_service_names }}"
        - - ring.gz
          - builder
      become: true

  always:
    - name: Remove Swift ring build directory from build host
      file:
        path: "{{ swift_ring_build_path }}"
        state: absent

  delegate_to: "{{ swift_ring_build_host }}"
  vars:
    # NOTE: Without this, the build host's ansible_host variable will not be
    # respected when using delegate_to.
    ansible_host: "{{ hostvars[swift_ring_build_host].ansible_host | default(swift_ring_build_host) }}"