Retire repository

Change-Id: I8dce2ae248323fc13707bcf94ccdff957d6e5f69
This commit is contained in:
Mohammed Naser 2022-09-07 17:17:07 -04:00
parent e658866811
commit ff0199a230
350 changed files with 5 additions and 15274 deletions

View File

@ -1,6 +0,0 @@
---
exclude_paths:
- roles/kube_prometheus_stack/files/
warn_list:
- yaml[line-length]

13
.gitignore vendored
View File

@ -1,13 +0,0 @@
.tox
.vscode
doc/build/*
doc/source/roles/*/defaults
molecule/default/group_vars/*
!molecule/default/group_vars/.gitkeep
!molecule/default/group_vars/all
molecule/default/group_vars/all/*
!molecule/default/group_vars/all/molecule.yml
molecule/default/host_vars/*
!molecule/default/host_vars/.gitkeep
galaxy.yml
*.tar.gz

5
README.md Normal file
View File

@ -0,0 +1,5 @@
# Atmosphere
This project has moved to [GitHub](https://github.com/vexxhost/atmosphere).
For any further questions, please file an [issue on GitHub](https://github.com/vexxhost/atmosphere/issues).

View File

@ -1,6 +0,0 @@
ansible-core
sphinx
sphinx_rtd_theme
reno[sphinx]
https://github.com/ypid/yaml4rst/archive/master.tar.gz
https://github.com/debops/yaml2rst/archive/master.tar.gz

View File

@ -1,11 +0,0 @@
---
# .. vim: foldmarker=[[[,]]]:foldmethod=marker
# .. Copyright (C) 2022 VEXXHOST, Inc.
# .. SPDX-License-Identifier: Apache-2.0
# Default variables
# =================
# .. contents:: Sections
# :local:

View File

@ -1,110 +0,0 @@
# Copyright (c) 2022 VEXXHOST, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
# -- yaml2rst ----------------------------------------------------------------
import os
import glob
import yaml2rst
from yaml4rst.reformatter import YamlRstReformatter
import pathlib
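# For each role, reformat its defaults/main.yml in place with yaml4rst and
# then convert the annotated YAML into RST under roles/<role>/defaults/.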
for defaults_file in glob.glob("../../roles/*/defaults/main.yml"):
    role_name = defaults_file.split("/")[-3]

    YamlRstReformatter._HEADER_END_LINES = {
        'yaml4rst': [
            '# Default variables',
            '# :local:',
            '# .. contents:: Sections',
            '# .. include:: includes/all.rst',
            '# .. include:: includes/role.rst',
            '# .. include:: ../../../includes/global.rst',
            '# -----------------',
        ],
    }

    reformatter = YamlRstReformatter(
        preset='yaml4rst',
        template_path=os.path.join(
            os.path.abspath(os.path.dirname(__file__)),
            '_templates',
        ),
        config={
            'ansible_full_role_name': f"vexxhost.atmosphere.{role_name}",
            'ansible_role_name': role_name,
        }
    )
    reformatter.read_file(defaults_file)
    reformatter.reformat()
    reformatter.write_file(
        output_file=defaults_file,
        only_if_changed=True,
    )

    pathlib.Path(f"roles/{role_name}/defaults").mkdir(parents=True, exist_ok=True)
    rst_content = yaml2rst.convert_file(
        defaults_file,
        f"roles/{role_name}/defaults/main.rst",
        strip_regex=r'\s*(:?\[{3}|\]{3})\d?$',
        yaml_strip_regex=r'^\s{66,67}#\s\]{3}\d?$',
    )
# -- Project information -----------------------------------------------------
project = 'Atmosphere'
copyright = '2022, VEXXHOST, Inc.'
author = 'VEXXHOST, Inc.'
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'reno.sphinxext',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = []
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_rtd_theme'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

View File

@ -1,23 +0,0 @@
.. Atmosphere documentation master file, created by
sphinx-quickstart on Sun Mar 13 17:40:34 2022.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to Atmosphere's documentation!
======================================
.. toctree::
:maxdepth: 1
:caption: Contents:
user/index
roles/index
releasenotes
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

View File

@ -1,4 +0,0 @@
Release Notes
=============
.. release-notes::

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``build_openstack_requirements``
================================
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``csi``
=======
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``ceph_mon``
============
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``ceph_osd``
============
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``ceph_repository``
===================
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``cert_manager``
================
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``containerd``
==============
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``ceph_csi_rbd``
================
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``helm``
========
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,11 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
Role reference
==============
.. toctree::
:maxdepth: 1
:glob:
*/index

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``ipmi_exporter``
=================
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``keepalived``
================
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``kube_prometheus_stack``
=========================
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``kubernetes``
==============
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``openstack_cli``
=================
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``openstack_exporter``
======================
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``openstack_helm_barbican``
===========================
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``openstack_helm_cinder``
=========================
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``openstack_helm_endpoints``
============================
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``openstack_helm_glance``
=========================
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``openstack_helm_heat``
=======================
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``openstack_helm_horizon``
==========================
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``openstack_helm_infra_ceph_provisioners``
==========================================
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``openstack_helm_infra_libvirt``
================================
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``openstack_helm_infra_memcached``
==================================
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``openstack_helm_infra_openvswitch``
====================================
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``openstack_helm_ingress``
==========================
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``openstack_helm_keystone``
===========================
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``openstack_helm_neutron``
==========================
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``openstack_helm_nova``
=======================
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``openstack_helm_placement``
============================
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``openstack_helm_senlin``
=========================
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,10 +0,0 @@
.. Copyright (C) 2022 VEXXHOST, Inc.
.. SPDX-License-Identifier: Apache-2.0
``openstack_helm_tempest``
============================
.. toctree::
:maxdepth: 2
defaults/main

View File

@ -1,7 +0,0 @@
User Guide
==========
.. toctree::
:maxdepth: 1
quickstart

View File

@ -1,99 +0,0 @@
Quickstart
==========
The quick start aims to provide an experience that is as close to production
as possible, since it is architected purely for production environments. In
order to get a quick production-ready experience of Atmosphere, you will need
access to an OpenStack cloud.
The quick start is powered by Molecule and is used in continuous integration
running against the VEXXHOST public cloud, which makes that an easy target
for trying it out.
You will need the following quotas set up in your cloud account:
* 8 instances
* 32 cores
* 128GB RAM
* 360GB storage
These resources will be used to create a total of 8 instances broken up as
follows:
* 3 Controller nodes
* 3 Ceph OSD nodes
* 2 Compute nodes
First of all, you'll have to clone the repository locally to your system with
``git`` by running the following command::
$ git clone https://opendev.org/vexxhost/ansible-collection-atmosphere
You will need ``tox`` installed on your system, and you must have the
appropriate OpenStack environment variables set (such as ``OS_CLOUD`` or
``OS_AUTH_URL``). You can also use the following environment variables to
tweak the behaviour of the Heat stack that is created:
``ATMOSPHERE_STACK_NAME``
The name of the Heat stack to be created (defaults to ``atmosphere``).
``ATMOSPHERE_PUBLIC_NETWORK``
The name of the public network to attach floating IPs from (defaults to
``public``).
``ATMOSPHERE_IMAGE``
The name or UUID of the image to be used for deploying the instances
(defaults to ``Ubuntu 20.04.3 LTS (x86_64) [2021-10-04]``).
``ATMOSPHERE_INSTANCE_TYPE``
The instance type used to deploy all of the different instances (defaults
to ``v3-standard-4``).
``ATMOSPHERE_NAMESERVERS``
A comma-separated list of nameservers to be used for the instances (defaults
to ``1.1.1.1``).
``ATMOSPHERE_USERNAME``
The username that is used to log in to the instances (defaults to ``ubuntu``).
``ATMOSPHERE_DNS_SUFFIX_NAME``
The DNS domain name that is used for the API and Horizon (defaults to
``nip.io``).
``ATMOSPHERE_ACME_SERVER``
The ACME server to use; by default, this points to LetsEncrypt. With StepCA
from smallstep it is possible to run an internal ACME server, in which case
the CA of that ACME server should be present in the instance image.
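For example, to deploy a stack with a custom name and multiple nameservers,
you could export the variables before invoking Molecule (the values below are
only illustrative)::
$ export ATMOSPHERE_STACK_NAME=atmosphere-dev
$ export ATMOSPHERE_NAMESERVERS=1.1.1.1,8.8.8.8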
Once you're ready to get started, you can run the following command to build
the Heat stack and start the deployment::
$ tox -e molecule -- converge
This will create a Heat stack with the name ``atmosphere`` and start deploying
the cloud. Once it's complete, you can log in to any of the systems by using
the ``login`` sub-command. For example, to log in to the first controller
node, you can run the following::
$ tox -e molecule -- login -h ctl1
On all of the controllers, you will find an ``openrc`` file inside the
``root`` account's home directory, as well as the OpenStack client installed
there. You can use them by running the following after logging in::
$ source /root/openrc
$ openstack server list
The Kubernetes administrator configuration will also be available on all of
the control plane nodes; you can simply use it by running ``kubectl`` commands
on any of the controllers as ``root``::
$ kubectl get nodes -owide
Once you're done with your environment and you need to tear it down, you can
use the ``destroy`` sub-command::
$ tox -e molecule -- destroy
For more information about the different commands used by Molecule, you can
refer to the Molecule documentation.

View File

@ -1,60 +0,0 @@
# Certificates
## Using LetsEncrypt DNS challenges
### RFC2136
If you have a DNS server that supports RFC2136, you can use it to solve the
DNS challenges. You'll need to have the following information:
- Email address
- Nameserver IP address
- TSIG Algorithm
- TSIG Key Name
- TSIG Key Secret
You'll need to update your Ansible inventory with the following:
```yaml
cert_manager_issuer:
acme:
email: <EMAIL>
privateKeySecretRef:
name: letsencrypt-prod
server: https://acme-v02.api.letsencrypt.org/directory
solvers:
- dns01:
rfc2136:
nameserver: <NS>:<PORT>
tsigAlgorithm: <ALGORITHM>
tsigKeyName: <NAME>
tsigSecretSecretRef:
key: tsig-secret-key
name: tsig-secret
```
After you're done, you'll need to add a new secret to the Kubernetes cluster,
which you can do using the following YAML file:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: tsig-secret
namespace: openstack
type: Opaque
stringData:
tsig-secret-key: <KEY>
```
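If you save the manifest above to a file (for example, `tsig-secret.yaml`;
the file name is arbitrary), you can apply it with
`kubectl apply -f tsig-secret.yaml`.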
## Using self-signed certificates
If you are in an environment that does not have a trusted certificate
authority and has no internet access to use LetsEncrypt, you can use
self-signed certificates by adding the following to your inventory:
```yaml
cert_manager_issuer:
ca:
secretName: root-secret
```
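For this to work, cert-manager expects `root-secret` to contain a CA keypair.
A minimal sketch of such a secret, assuming you already have a PEM-encoded CA
certificate and private key (the placeholders are not real values, and the
namespace simply mirrors the `tsig-secret` example above), could look like
this:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: root-secret
  namespace: openstack
type: kubernetes.io/tls
stringData:
  tls.crt: <PEM-ENCODED CA CERTIFICATE>
  tls.key: <PEM-ENCODED CA PRIVATE KEY>
```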

View File

@ -1,113 +0,0 @@
# Storage
## External storage
When using an external storage platform, it's important to disable Ceph
globally by adding the following to your Ansible inventory:
```yaml
atmosphere_ceph_enabled: false
```
### Dell PowerStore
In order to be able to use Dell PowerStore, you'll need to make sure that you
set up the hosts inside of your storage array. You'll also need to make sure
that they are not inside a host group, otherwise individual attachments will
not work.
### CSI
You'll need to enable the Kubernetes cluster to use the PowerStore driver by
adding the following YAML to your Ansible inventory:
```yaml
csi_driver: powerstore
powerstore_csi_config:
arrays:
- endpoint: https://<FILL IN>/api/rest
globalID: <FILL IN>
username: <FILL IN>
password: <FILL IN>
skipCertificateValidation: true
isDefault: true
blockProtocol: <FILL IN> # FC or iSCSI
```
### Glance
Since Glance does not have a native PowerStore driver, you'll need to enable
the use of the Cinder driver by adding the following to your Ansible inventory:
```yaml
openstack_helm_glance_values:
storage: cinder
conf:
glance:
glance_store:
stores: cinder
default_store: cinder
```
Please note that Glance images will not function until the Cinder service is
deployed.
### Cinder
You can enable the native PowerStore driver for Cinder with the following
configuration inside your Ansible inventory:
```yaml
openstack_helm_cinder_values:
storage: powerstore
dependencies:
static:
api:
jobs:
- cinder-db-sync
- cinder-ks-user
- cinder-ks-endpoints
- cinder-rabbit-init
scheduler:
jobs:
- cinder-db-sync
- cinder-ks-user
- cinder-ks-endpoints
- cinder-rabbit-init
volume:
jobs:
- cinder-db-sync
- cinder-ks-user
- cinder-ks-endpoints
- cinder-rabbit-init
volume_usage_audit:
jobs:
- cinder-db-sync
- cinder-ks-user
- cinder-ks-endpoints
- cinder-rabbit-init
conf:
cinder:
DEFAULT:
enabled_backends: powerstore
default_volume_type: powerstore
backends:
rbd1: null
powerstore:
volume_backend_name: powerstore
volume_driver: cinder.volume.drivers.dell_emc.powerstore.driver.PowerStoreDriver
san_ip: <FILL IN>
san_login: <FILL IN>
san_password: <FILL IN>
storage_protocol: <FILL IN> # FC or iSCSI
manifests:
deployment_backup: true
job_backup_storage_init: true
job_storage_init: false
```
It's important to note that the configuration above will disable the Cinder
backup service. In the future, we'll update this sample configuration to use
the Cinder backup service.

View File

@ -1,28 +0,0 @@
# Copyright (c) 2022 VEXXHOST, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
registry: us-docker.pkg.dev/vexxhost-infra/openstack
projects:
tempest:
branch: master
revision: 44dac69eb77d78a0de8e68e63617099249345578
tag: 30.1.0-5
dist_packages:
- iputils-ping
pip_packages:
- keystone-tempest-plugin
- cinder-tempest-plugin
- neutron-tempest-plugin
- heat-tempest-plugin

View File

@ -1,15 +0,0 @@
# Copyright (c) 2022 VEXXHOST, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
- import_playbook: vexxhost.atmosphere.site

View File

@ -1,110 +0,0 @@
# Copyright (c) 2022 VEXXHOST, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
- import_playbook: vexxhost.atmosphere.generate_workspace
vars:
workspace_path: "{{ lookup('env', 'MOLECULE_SCENARIO_DIRECTORY') }}"
domain_name: "{{ '{{' }} hostvars['ctl1']['ansible_host'].replace('.', '-') {{ '}}' }}.{{ lookup('env', 'ATMOSPHERE_DNS_SUFFIX_NAME') | default('nip.io', True) }}"
- hosts: localhost
connection: local
gather_facts: false
no_log: "{{ molecule_no_log }}"
vars:
ssh_port: 22
identity_file: "{{ lookup('env', 'MOLECULE_EPHEMERAL_DIRECTORY') }}/id_rsa"
stack_name: "{{ lookup('env', 'ATMOSPHERE_STACK_NAME') | default('atmosphere', True) }}"
public_network: "{{ lookup('env', 'ATMOSPHERE_PUBLIC_NETWORK') | default('public', True) }}"
image: "{{ lookup('env', 'ATMOSPHERE_IMAGE') | default('Ubuntu 20.04.3 LTS (x86_64) [2021-10-04]', True) }}"
instance_type: "{{ lookup('env', 'ATMOSPHERE_INSTANCE_TYPE') | default('v3-standard-4', True) }}"
nameservers: "{{ lookup('env', 'ATMOSPHERE_NAMESERVERS') | default('1.1.1.1', True) }}"
boot_from_volume: "{{ lookup('env', 'ATMOSPHERE_BOOT_FROM_VOLUME') | bool }}"
tasks:
- name: create stack
openstack.cloud.stack:
name: "{{ stack_name }}"
template: heat/stack.yaml
parameters:
public_network: "{{ public_network }}"
image: "{{ image }}"
instance_type: "{{ instance_type }}"
nameservers: "{{ nameservers }}"
boot_from_volume: "{{ boot_from_volume }}"
register: _os_stack
- debug:
msg: "{{ _os_stack.stack }}"
- name: grab list of all ip addresses
ansible.builtin.set_fact:
key_pair: "{{ _os_stack.stack.outputs | json_query(key_query) | first }}"
controller_ips: "{{ _os_stack.stack.outputs | community.general.json_query(controller_query) | first }}"
storage_ips: "{{ _os_stack.stack.outputs | community.general.json_query(storage_query) | first }}"
compute_ips: "{{ _os_stack.stack.outputs | community.general.json_query(compute_query) | first }}"
vars:
key_query: "[?output_key=='key_pair'].output_value"
controller_query: "[?output_key=='controller_floating_ip_addresses'].output_value"
storage_query: "[?output_key=='storage_floating_ip_addresses'].output_value"
compute_query: "[?output_key=='compute_floating_ip_addresses'].output_value"
- name: wait for systems to go up
ansible.builtin.wait_for:
port: "22"
host: "{{ item }}"
search_regex: SSH
timeout: 600
retries: 15
delay: 10
loop: "{{ controller_ips + storage_ips + compute_ips }}"
- name: generate private key file
ansible.builtin.copy:
dest: "{{ identity_file }}"
content: "{{ key_pair }}"
mode: 0600
- name: generate instance config file
copy:
content: "{{ instance_config | to_yaml }}"
dest: "{{ molecule_instance_config }}"
vars:
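# NOTE: the anchor below is merged into every entry of instance_config via
# YAML merge keys (<<), so all hosts share the same user, port and identity
# file while only instance and address vary.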
base_instance_config: &instance_config
user: "{{ lookup('env', 'ATMOSPHERE_USERNAME') | default('ubuntu', True) }}"
port: "{{ ssh_port }}"
identity_file: "{{ identity_file }}"
instance_config:
- <<: *instance_config
instance: "ctl1"
address: "{{ controller_ips[0] }}"
- <<: *instance_config
instance: "ctl2"
address: "{{ controller_ips[1] }}"
- <<: *instance_config
instance: "ctl3"
address: "{{ controller_ips[2] }}"
- <<: *instance_config
instance: "nvme1"
address: "{{ storage_ips[0] }}"
- <<: *instance_config
instance: "nvme2"
address: "{{ storage_ips[1] }}"
- <<: *instance_config
instance: "nvme3"
address: "{{ storage_ips[2] }}"
- <<: *instance_config
instance: "kvm1"
address: "{{ compute_ips[0] }}"
- <<: *instance_config
instance: "kvm2"
address: "{{ compute_ips[1] }}"

View File

@ -1,47 +0,0 @@
# Copyright (c) 2022 VEXXHOST, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
- hosts: localhost
connection: local
gather_facts: false
no_log: "{{ molecule_no_log }}"
vars:
workspace_path: "{{ lookup('env', 'MOLECULE_SCENARIO_DIRECTORY') }}"
stack_name: "{{ lookup('env', 'ATMOSPHERE_STACK_NAME') | default('atmosphere', True) }}"
tasks:
- os_stack:
name: "{{ stack_name }}"
state: absent
- file:
path: "{{ molecule_instance_config }}"
state: absent
- name: Capture var files to delete
find:
paths:
- "{{ workspace_path }}/group_vars"
- "{{ workspace_path }}/host_vars"
file_type: file
recurse: true
excludes:
- "molecule.yml"
register: _var_files
- name: Delete var files
file:
path: "{{ item.path }}"
state: absent
with_items: "{{ _var_files['files'] }}"

View File

@ -1,12 +0,0 @@
cert_manager_issuer:
ca:
secretName: root-secret
openstack_helm_glance_images:
- name: cirros
source_url: http://download.cirros-cloud.net/0.5.1/
image_file: cirros-0.5.1-x86_64-disk.img
min_disk: 1
disk_format: qcow2
container_format: bare
is_public: true

View File

@ -1,168 +0,0 @@
# Copyright (c) 2022 VEXXHOST, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
heat_template_version: 2016-10-14
parameters:
name:
type: string
index:
type: number
image:
type: string
default: Ubuntu 20.04.3 LTS (x86_64) [2021-10-04]
constraints:
- custom_constraint: glance.image
instance_type:
type: string
default: v3-standard-4
constraints:
- custom_constraint: nova.flavor
internal_network:
type: string
constraints:
- custom_constraint: neutron.network
key_name:
type: string
constraints:
- custom_constraint: nova.keypair
public_network:
type: string
default: public
constraints:
- custom_constraint: neutron.network
external_network:
type: string
constraints:
- custom_constraint: neutron.network
extra_volumes_count:
type: number
default: 0
extra_volumes_size:
type: number
default: 0
boot_volumes_size:
type: number
default: 40
boot_from_volume:
type: boolean
default: false
conditions:
has_extra_volumes:
not:
equals:
- get_param: extra_volumes_count
- 0
is_boot_from_image:
equals:
- get_param: boot_from_volume
- false
is_boot_from_volume:
equals:
- get_param: boot_from_volume
- true
resources:
internal_port:
type: OS::Neutron::Port
properties:
network: { get_param: internal_network }
port_security_enabled: false
floating_ip:
type: OS::Neutron::FloatingIP
properties:
floating_network: { get_param: public_network }
port_id: { get_resource: internal_port }
external_port:
type: OS::Neutron::Port
properties:
network: { get_param: external_network }
port_security_enabled: false
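# Exactly one of the two server resources below is created, depending on the
# boot_from_volume parameter evaluated by the conditions above.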
server_boot_from_image:
type: OS::Nova::Server
condition: is_boot_from_image
properties:
name:
yaql:
expression: concat($.data.name, str($.data.index + 1))
data:
name: { get_param: name }
index: { get_param: index }
image: { get_param: image }
flavor: { get_param: instance_type }
key_name: { get_param: key_name }
config_drive: true
networks:
- port: { get_resource: internal_port }
- port: { get_resource: external_port }
server_boot_from_volume:
type: OS::Nova::Server
condition: is_boot_from_volume
properties:
name:
yaql:
expression: concat($.data.name, str($.data.index + 1))
data:
name: { get_param: name }
index: { get_param: index }
flavor: { get_param: instance_type }
key_name: { get_param: key_name }
config_drive: true
networks:
- port: { get_resource: internal_port }
- port: { get_resource: external_port }
block_device_mapping_v2:
- boot_index: 0
volume_id: {get_resource: volume}
delete_on_termination: true
volume:
type: OS::Cinder::Volume
condition: is_boot_from_volume
properties:
size: { get_param: boot_volumes_size }
image: { get_param: image }
volumes:
type: OS::Heat::ResourceGroup
condition: has_extra_volumes
properties:
count: { get_param: extra_volumes_count }
resource_def:
type: volume.yaml
properties:
instance_uuid: {if: ["is_boot_from_volume", { get_resource: server_boot_from_volume }, { get_resource: server_boot_from_image } ]}
volume_size: { get_param: extra_volumes_size }
outputs:
floating_ip_address:
value: { get_attr: [floating_ip, floating_ip_address] }

View File

@ -1,183 +0,0 @@
# Copyright (c) 2022 VEXXHOST, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
heat_template_version: 2016-10-14
parameters:
internal_cidr:
type: string
default: 10.96.240.0/24
constraints:
- custom_constraint: net_cidr
nameservers:
type: comma_delimited_list
external_cidr:
type: string
default: 10.96.250.0/24
constraints:
- custom_constraint: net_cidr
public_network:
type: string
constraints:
- custom_constraint: neutron.network
image:
type: string
constraints:
- custom_constraint: glance.image
boot_from_volume:
type: boolean
default: false
instance_type:
type: string
constraints:
- custom_constraint: nova.flavor
resources:
router:
type: OS::Neutron::Router
properties:
external_gateway_info:
network: { get_param: public_network }
internal_network:
type: OS::Neutron::Net
internal_subnet:
type: OS::Neutron::Subnet
properties:
network: { get_resource: internal_network }
cidr: { get_param: internal_cidr }
dns_nameservers: { get_param: nameservers }
internal_network_router_interface:
type: OS::Neutron::RouterInterface
properties:
router: { get_resource: router }
subnet: { get_resource: internal_subnet }
internal_network_vip:
type: OS::Neutron::Port
properties:
network: { get_resource: internal_network }
internal_network_vip_floating_ip:
type: OS::Neutron::FloatingIP
depends_on:
- internal_network_router_interface
properties:
floating_network: { get_param: public_network }
port_id: { get_resource: internal_network_vip }
external_network:
type: OS::Neutron::Net
external_subnet:
type: OS::Neutron::Subnet
properties:
network: { get_resource: external_network }
cidr: { get_param: external_cidr }
dns_nameservers: { get_param: nameservers }
gateway_ip: null
allocation_pools:
- start: 10.96.250.100
end: 10.96.250.150
external_network_vip:
type: OS::Neutron::Port
properties:
network: { get_resource: external_network }
key_pair:
type: OS::Nova::KeyPair
properties:
name: { get_param: OS::stack_id }
save_private_key: true
controller:
type: OS::Heat::ResourceGroup
depends_on:
- internal_network_router_interface
properties:
count: 3
resource_def:
type: server.yaml
properties:
name: ctl
index: "%index%"
image: { get_param: image }
instance_type: { get_param: instance_type }
key_name: { get_resource: key_pair }
internal_network: { get_resource: internal_network }
public_network: { get_param: public_network }
external_network: { get_resource: external_network }
boot_volumes_size: 40
boot_from_volume: { get_param: boot_from_volume }
storage:
type: OS::Heat::ResourceGroup
depends_on:
- internal_network_router_interface
properties:
count: 3
resource_def:
type: server.yaml
properties:
name: nvme
index: "%index%"
image: { get_param: image }
instance_type: { get_param: instance_type }
key_name: { get_resource: key_pair }
internal_network: { get_resource: internal_network }
public_network: { get_param: public_network }
external_network: { get_resource: external_network }
extra_volumes_count: 3
extra_volumes_size: 40
boot_volumes_size: 40
boot_from_volume: { get_param: boot_from_volume }
compute:
type: OS::Heat::ResourceGroup
depends_on:
- internal_network_router_interface
properties:
count: 2
resource_def:
type: server.yaml
properties:
name: kvm
index: "%index%"
image: { get_param: image }
instance_type: { get_param: instance_type }
key_name: { get_resource: key_pair }
internal_network: { get_resource: internal_network }
public_network: { get_param: public_network }
external_network: { get_resource: external_network }
boot_volumes_size: 40
boot_from_volume: { get_param: boot_from_volume }
outputs:
controller_floating_ip_addresses:
value: { get_attr: [controller, floating_ip_address] }
storage_floating_ip_addresses:
value: { get_attr: [storage, floating_ip_address] }
compute_floating_ip_addresses:
value: { get_attr: [compute, floating_ip_address] }
key_pair:
value: { get_attr: [key_pair, private_key] }

View File

@ -1,34 +0,0 @@
# Copyright (c) 2022 VEXXHOST, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
heat_template_version: 2016-10-14
parameters:
instance_uuid:
type: string
volume_size:
type: number
resources:
volume:
type: OS::Cinder::Volume
properties:
size: { get_param: volume_size }
volume_attachment:
type: OS::Cinder::VolumeAttachment
properties:
instance_uuid: { get_param: instance_uuid }
volume_id: { get_resource: volume }

View File

@ -1,51 +0,0 @@
# Copyright (c) 2022 VEXXHOST, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
dependency:
name: galaxy
driver:
name: delegated
platforms:
- name: ctl1
groups: &controller_groups
- controllers
- name: ctl2
groups: *controller_groups
- name: ctl3
groups: *controller_groups
- name: nvme1
groups: &nvme_groups
- cephs
- name: nvme2
groups: *nvme_groups
- name: nvme3
groups: *nvme_groups
- name: kvm1
groups: &kvm_groups
- computes
- name: kvm2
groups: *kvm_groups
provisioner:
name: ansible
options:
inventory: "${MOLECULE_EPHEMERAL_DIRECTORY}/workspace"
config_options:
ssh_connection:
pipelining: true
inventory:
links:
host_vars: "${MOLECULE_SCENARIO_DIRECTORY}/host_vars"
group_vars: "${MOLECULE_SCENARIO_DIRECTORY}/group_vars"
verifier:
name: ansible

View File

@ -1,21 +0,0 @@
# Copyright (c) 2022 VEXXHOST, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
- hosts: all
tasks:
# The apt module can not be used for this since it installs python-apt
# which can not work until this command fixes the cache.
- name: Update apt cache
become: yes
command: apt-get update

View File

@ -1,3 +0,0 @@
molecule==3.5.2 # https://github.com/ansible-community/molecule/issues/3435
openstacksdk==0.61.0
netaddr

View File

@ -1,15 +0,0 @@
# Copyright (c) 2022 VEXXHOST, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
- import_playbook: vexxhost.atmosphere.tempest

View File

@ -1,36 +0,0 @@
# Copyright (c) 2022 VEXXHOST, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
- name: Setup Ceph repository
hosts: controllers:cephs
become: true
roles:
- role: ceph_repository
when: atmosphere_ceph_enabled | default(true)
- name: Deploy Ceph monitors & managers
hosts: controllers
become: true
roles:
- role: ceph_mon
when: atmosphere_ceph_enabled | default(true)
- role: ceph_mgr
when: atmosphere_ceph_enabled | default(true)
- name: Deploy Ceph OSDs
hosts: cephs
become: true
roles:
- role: ceph_osd
when: atmosphere_ceph_enabled | default(true)

View File

@ -1,35 +0,0 @@
# Copyright (c) 2022 VEXXHOST, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
- name: Clean-up legacy RabbitMQ cluster
hosts: controllers[0]
become: true
gather_facts: false
tasks:
- name: Delete the Helm release
kubernetes.core.helm:
name: rabbitmq
namespace: openstack
kubeconfig: /etc/kubernetes/admin.conf
state: absent
wait: true
- name: Delete the PVCs
kubernetes.core.k8s:
state: absent
api_version: v1
kind: PersistentVolumeClaim
namespace: openstack
name: "rabbitmq-data-rabbitmq-rabbitmq-{{ item }}"
loop: "{{ range(0, 3) | list }}"

View File

@ -1,431 +0,0 @@
# Copyright (c) 2022 VEXXHOST, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
- name: Generate workspace for Atmosphere
hosts: localhost
gather_facts: false
tasks:
- name: Create folders for workspace
ansible.builtin.file:
path: "{{ workspace_path }}/{{ item }}"
state: directory
loop:
- group_vars
- group_vars/all
- group_vars/controllers
- group_vars/cephs
- group_vars/computes
- host_vars
- name: Generate Ceph control plane configuration for workspace
hosts: localhost
gather_facts: false
vars:
_ceph_path: "{{ workspace_path }}/group_vars/all/ceph.yml"
# Input variables
ceph_fsid: "{{ lookup('password', '/dev/null chars=ascii_letters,digits') | to_uuid }}"
ceph_public_network: 10.96.240.0/24
tasks:
- name: Ensure the Ceph control plane configuration file exists
ansible.builtin.file:
path: "{{ _ceph_path }}"
state: touch
- name: Load the current Ceph control plane configuration into a variable
ansible.builtin.include_vars:
file: "{{ _ceph_path }}"
name: ceph
- name: Generate Ceph control plane values for missing variables
ansible.builtin.set_fact:
ceph: "{{ ceph | default({}) | combine({item.key: item.value}) }}"
# NOTE(mnaser): We don't want to override existing Ceph configurations,
# so we generate a stub one if and only if it doesn't exist
when: item.key not in ceph
# NOTE(mnaser): This is absolutely hideous but there's no clean way of
# doing this using `with_fileglob` or `with_filetree`
with_dict:
ceph_mon_fsid: "{{ ceph_fsid }}"
ceph_mon_public_network: "{{ ceph_public_network }}"
- name: Write new Ceph control plane configuration file to disk
ansible.builtin.copy:
content: "{{ ceph | to_nice_yaml(indent=2, width=180) }}"
dest: "{{ _ceph_path }}"
- name: Generate Ceph OSD configuration for workspace
hosts: localhost
gather_facts: false
vars:
_ceph_osd_path: "{{ workspace_path }}/group_vars/cephs/osds.yml"
tasks:
- name: Ensure the Ceph OSDs configuration file exists
ansible.builtin.file:
path: "{{ _ceph_osd_path }}"
state: touch
- name: Load the current Ceph OSDs configuration into a variable
ansible.builtin.include_vars:
file: "{{ _ceph_osd_path }}"
name: ceph_osd
- name: Generate Ceph OSDs values for missing variables
ansible.builtin.set_fact:
ceph_osd: "{{ ceph_osd | default({}) | combine({item.key: item.value}) }}"
# NOTE(mnaser): We don't want to override existing Ceph configurations,
# so we generate a stub one if and only if it doesn't exist
when: item.key not in ceph_osd
# NOTE(mnaser): This is absolutely hideous but there's no clean way of
# doing this using `with_fileglob` or `with_filetree`
with_dict:
ceph_osd_devices:
- /dev/vdb
- /dev/vdc
- /dev/vdd
- name: Write new Ceph OSDs configuration file to disk
ansible.builtin.copy:
content: "{{ ceph_osd | to_nice_yaml(indent=2, width=180) }}"
dest: "{{ _ceph_osd_path }}"
- name: Generate Kubernetes configuration for workspace
hosts: localhost
gather_facts: false
vars:
_kubernetes_path: "{{ workspace_path }}/group_vars/all/kubernetes.yml"
tasks:
- name: Ensure the Kubernetes configuration file exists
ansible.builtin.file:
path: "{{ _kubernetes_path }}"
state: touch
- name: Load the current Kubernetes configuration into a variable
ansible.builtin.include_vars:
file: "{{ _kubernetes_path }}"
name: kubernetes
- name: Generate Kubernetes values for missing variables
ansible.builtin.set_fact:
kubernetes: "{{ kubernetes | default({}) | combine({item.key: item.value}) }}"
# NOTE(mnaser): We don't want to override existing Kubernetes configurations,
# so we generate a stub one if and only if it doesn't exist
when: item.key not in kubernetes
# NOTE(mnaser): This is absolutely hideous but there's no clean way of
# doing this using `with_fileglob` or `with_filetree`
with_dict:
kubernetes_hostname: 10.96.240.10
kubernetes_keepalived_vrid: 42
kubernetes_keepalived_interface: ens3
kubernetes_keepalived_vip: 10.96.240.10
- name: Write new Kubernetes configuration file to disk
ansible.builtin.copy:
content: "{{ kubernetes | to_nice_yaml(indent=2, width=180) }}"
dest: "{{ _kubernetes_path }}"
- name: Generate Keepalived configuration for workspace
hosts: localhost
gather_facts: false
vars:
_keepalived_path: "{{ workspace_path }}/group_vars/all/keepalived.yml"
tasks:
- name: Ensure the Keepalived configuration file exists
ansible.builtin.file:
path: "{{ _keepalived_path }}"
state: touch
- name: Load the current Keepalived configuration into a variable
ansible.builtin.include_vars:
file: "{{ _keepalived_path }}"
name: keepalived
- name: Generate Keepalived values for missing variables
ansible.builtin.set_fact:
keepalived: "{{ keepalived | default({}) | combine({item.key: item.value}) }}"
# NOTE(mnaser): We don't want to override existing Keepalived configurations,
# so we generate a stub one if and only if it doesn't exist
when: item.key not in keepalived
# NOTE(mnaser): This is absolutely hideous but there's no clean way of
# doing this using `with_fileglob` or `with_filetree`
with_dict:
keepalived_interface: br-ex
keepalived_vip: 10.96.250.10
- name: Write new Keepalived configuration file to disk
ansible.builtin.copy:
content: "{{ keepalived | to_nice_yaml(indent=2, width=180) }}"
dest: "{{ _keepalived_path }}"
- name: Generate endpoints for workspace
hosts: localhost
gather_facts: false
vars:
_endpoints_path: "{{ workspace_path }}/group_vars/all/endpoints.yml"
# Input variables
region_name: RegionOne
domain_name: vexxhost.cloud
tasks:
- name: Ensure the endpoints file exists
ansible.builtin.file:
path: "{{ _endpoints_path }}"
state: touch
- name: Load the current endpoints into a variable
ansible.builtin.include_vars:
file: "{{ _endpoints_path }}"
name: endpoints
- name: Generate endpoint skeleton for missing variables
ansible.builtin.set_fact:
endpoints: |
{{
endpoints |
default({}) |
combine({item: default_map[item]})
}}
# NOTE(mnaser): We don't want to override existing endpoints, so we generate
# a stub one if and only if it doesn't exist
when: item not in endpoints
# NOTE(mnaser): This is absolutely hideous but there's no clean way of
# doing this using `with_fileglob` or `with_filetree`
with_lines: >
ls {{ playbook_dir }}/../roles/*/defaults/main.yml |
xargs grep undef |
egrep '(_host|region_name)' |
cut -d':' -f2
# NOTE(mnaser): We use these variables to generate a map of service name to
# service type in order to generate the URLs
vars:
default_map:
openstack_helm_endpoints_region_name: "{{ region_name }}"
openstack_helm_endpoints_barbican_api_host: "key-manager.{{ domain_name }}"
openstack_helm_endpoints_cinder_api_host: "volume.{{ domain_name }}"
openstack_helm_endpoints_designate_api_host: "dns.{{ domain_name }}"
openstack_helm_endpoints_glance_api_host: "image.{{ domain_name }}"
openstack_helm_endpoints_heat_api_host: "orchestration.{{ domain_name }}"
openstack_helm_endpoints_heat_cfn_api_host: "cloudformation.{{ domain_name }}"
openstack_helm_endpoints_horizon_api_host: "dashboard.{{ domain_name }}"
openstack_helm_endpoints_ironic_api_host: "baremetal.{{ domain_name }}"
openstack_helm_endpoints_keystone_api_host: "identity.{{ domain_name }}"
openstack_helm_endpoints_neutron_api_host: "network.{{ domain_name }}"
openstack_helm_endpoints_nova_api_host: "compute.{{ domain_name }}"
openstack_helm_endpoints_nova_novnc_host: "vnc.{{ domain_name }}"
openstack_helm_endpoints_octavia_api_host: "load-balancer.{{ domain_name }}"
openstack_helm_endpoints_placement_api_host: "placement.{{ domain_name }}"
openstack_helm_endpoints_senlin_api_host: "clustering.{{ domain_name }}"
- name: Write new endpoints file to disk
ansible.builtin.copy:
content: "{{ endpoints | to_nice_yaml(indent=2, width=180) }}"
dest: "{{ _endpoints_path }}"
- name: Generate Neutron configuration for workspace
hosts: localhost
gather_facts: false
vars:
_neutron_path: "{{ workspace_path }}/group_vars/all/neutron.yml"
# Input variables
tasks:
- name: Ensure the Neutron configuration file exists
ansible.builtin.file:
path: "{{ _neutron_path }}"
state: touch
- name: Load the current Neutron configuration into a variable
ansible.builtin.include_vars:
file: "{{ _neutron_path }}"
name: neutron
- name: Generate Neutron values for missing variables
ansible.builtin.set_fact:
neutron: "{{ neutron | default({}) | combine({item.key: item.value}) }}"
# NOTE(mnaser): We don't want to override existing Neutron configurations,
# so we generate a stub one if and only if it doesn't exist
when: item.key not in neutron
with_dict:
openstack_helm_neutron_values:
conf:
auto_bridge_add:
br-ex: ens4
openstack_helm_neutron_networks:
- name: public
external: true
shared: true
mtu_size: 1500
port_security_enabled: true
provider_network_type: flat
provider_physical_network: external
subnets:
- name: public-subnet
cidr: 10.96.250.0/24
gateway_ip: 10.96.250.10
allocation_pool_start: 10.96.250.200
allocation_pool_end: 10.96.250.220
enable_dhcp: true
- name: Write new Neutron configuration file to disk
ansible.builtin.copy:
content: "{{ neutron | to_nice_yaml(indent=2, width=180) }}"
dest: "{{ _neutron_path }}"
- name: Generate Nova configuration for workspace
hosts: localhost
gather_facts: false
vars:
_nova_path: "{{ workspace_path }}/group_vars/all/nova.yml"
# Input variables
tasks:
- name: Ensure the Nova configuration file exists
ansible.builtin.file:
path: "{{ _nova_path }}"
state: touch
- name: Load the current Nova configuration into a variable
ansible.builtin.include_vars:
file: "{{ _nova_path }}"
name: nova
- name: Generate Nova values for missing variables
ansible.builtin.set_fact:
nova: "{{ nova | default({}) | combine({item.key: item.value}) }}"
# NOTE(mnaser): We don't want to override existing Nova configurations,
# so we generate a stub one if and only if it doesn't exist
when: item.key not in nova
with_dict:
openstack_helm_nova_flavors:
- name: m1.tiny
ram: 512
disk: 1
vcpus: 1
- name: m1.small
ram: 2048
disk: 20
vcpus: 1
- name: "m1.medium"
ram: 4096
disk: 40
vcpus: 2
- name: "m1.large"
ram: 8192
disk: 80
vcpus: 4
- name: "m1.xlarge"
ram: 16384
disk: 160
vcpus: 8
- name: Write new Nova configuration file to disk
ansible.builtin.copy:
content: "{{ nova | to_nice_yaml(indent=2, width=180) }}"
dest: "{{ _nova_path }}"
- name: Generate secrets for workspace
hosts: localhost
gather_facts: false
vars:
secrets_path: "{{ workspace_path }}/group_vars/all/secrets.yml"
tasks:
- name: Ensure the secrets file exists
ansible.builtin.file:
path: "{{ secrets_path }}"
state: touch
- name: Load the current secrets into a variable
ansible.builtin.include_vars:
file: "{{ secrets_path }}"
name: secrets
- name: Generate secrets for missing variables
ansible.builtin.set_fact:
secrets: "{{ secrets | default({}) | combine({item: lookup('password', '/dev/null chars=ascii_lowercase,ascii_uppercase,digits length=32')}) }}"
# NOTE(mnaser): We don't want to override existing secrets, so we generate
# a new one if and only if it doesn't exist
when: item not in secrets
# NOTE(mnaser): This is absolutely hideous but there's no clean way of
# doing this using `with_fileglob` or `with_filetree`
with_lines: >
ls {{ playbook_dir }}/../roles/*/defaults/main.yml |
xargs grep undef |
egrep -v '(_host|region_name|_ssh_key|_vip|_interface|_kek)' |
cut -d':' -f2
- name: Generate base64 encoded secrets
ansible.builtin.set_fact:
secrets: "{{ secrets | default({}) | combine({item: lookup('password', '/dev/null chars=ascii_lowercase,ascii_uppercase,digits length=32') | b64encode}) }}"
# NOTE(mnaser): We don't want to override existing secrets, so we generate
# a new one if and only if it doesn't exist
when: item not in secrets
# NOTE(mnaser): This is absolutely hideous but there's no clean way of
# doing this using `with_fileglob` or `with_filetree`
with_lines: >
ls {{ playbook_dir }}/../roles/*/defaults/main.yml |
xargs grep undef |
egrep '(_kek)' |
cut -d':' -f2
- name: Generate temporary files for generating keys for missing variables
ansible.builtin.tempfile:
state: file
prefix: "{{ item }}"
register: _ssh_key_file
# NOTE(mnaser): We don't want to override existing secrets, so we generate
# a new one if and only if it doesn't exist
when: item not in secrets
# NOTE(mnaser): This is absolutely hideous but there's no clean way of
# doing this using `with_fileglob` or `with_filetree`
with_lines: >
ls {{ playbook_dir }}/../roles/*/defaults/main.yml |
xargs grep undef |
egrep '(_ssh_key)' |
cut -d':' -f2
- name: Generate SSH keys for missing variables
community.crypto.openssh_keypair:
path: "{{ item.path }}"
regenerate: full_idempotence
register: _openssh_keypair
loop: "{{ _ssh_key_file.results }}"
loop_control:
label: "{{ item.item }}"
- name: Set values for SSH keys
ansible.builtin.set_fact:
secrets: "{{ secrets | default({}) | combine({item.item: lookup('file', item.path)}) }}"
loop: "{{ _ssh_key_file.results }}"
loop_control:
label: "{{ item.item }}"
- name: Delete the temporary files generated for SSH keys
ansible.builtin.file:
path: "{{ item.path }}"
state: absent
loop: "{{ _ssh_key_file.results }}"
loop_control:
label: "{{ item.item }}"
- name: Write new secrets file to disk
ansible.builtin.copy:
content: "{{ secrets | to_nice_yaml }}"
dest: "{{ secrets_path }}"
- name: Encrypt secrets file with Vault password
ansible.builtin.shell:
ansible-vault encrypt --vault-password-file {{ secrets_vault_password_file }} {{ secrets_path }}
when:
- secrets_vault_password_file is defined

View File

@ -1,24 +0,0 @@
# Copyright (c) 2022 VEXXHOST, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
- hosts: all
become: true
roles:
- role: containerd
- role: kubernetes
- hosts: controllers
become: true
roles:
- helm

View File

@ -1,149 +0,0 @@
# Copyright (c) 2022 VEXXHOST, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
- hosts: controllers[0]
gather_facts: false
become: true
roles:
- role: cilium
tags:
- cilium
- hosts: controllers
gather_facts: false
become: true
roles:
- role: flux
tags:
- flux
- hosts: controllers[0]
gather_facts: false
become: true
roles:
- role: csi
tags:
- csi
- role: kube_prometheus_stack
tags:
- kube-prometheus-stack
- role: node_feature_discovery
tags:
- node-feature-discovery
- role: ipmi_exporter
tags:
- ipmi-exporter
- role: prometheus_pushgateway
tags:
- prometheus-pushgateway
- role: openstack_namespace
tags:
- openstack-namespace
- role: ingress_nginx
tags:
- ingress-nginx
- role: cert_manager
tags:
- cert-manager
- role: keepalived
tags:
- keepalived
- role: percona_xtradb_cluster
tags:
- percona-xtradb-cluster
- role: openstack_helm_infra_memcached
tags:
- openstack-helm-infra-memcached
- role: rabbitmq_operator
tags:
- rabbitmq-operator
- role: openstack_helm_keystone
tags:
- openstack-helm-keystone
- role: openstack_helm_barbican
tags:
- openstack-helm-barbican
- role: openstack_helm_infra_ceph_provisioners
when: atmosphere_ceph_enabled | default(true)
tags:
- openstack-helm-infra-ceph-provisioners
- role: openstack_helm_glance
tags:
- openstack-helm-glance
- role: openstack_helm_cinder
tags:
- openstack-helm-cinder
- role: openstack_helm_placement
tags:
- openstack-helm-placement
- role: openstack_helm_infra_openvswitch
tags:
- openstack-helm-infra-openvswitch
- role: openstack_helm_infra_libvirt
tags:
- openstack-helm-infra-libvirt
- role: coredns
tags:
- coredns
- role: openstack_helm_neutron
tags:
- openstack-helm-neutron
- role: openstack_helm_nova
tags:
- openstack-helm-nova
- role: openstack_helm_senlin
tags:
- openstack-helm-senlin
- role: openstack_helm_heat
tags:
- openstack-helm-heat
- role: openstack_helm_horizon
tags:
- openstack-helm-horizon
- role: openstack_exporter
tags:
- openstack-exporter
- hosts: controllers
gather_facts: false
roles:
- role: openstack_cli
tags:
- openstack-cli
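
Since every role in this play carries a matching tag, a single component could be converged on its own, e.g. ``ansible-playbook vexxhost.atmosphere.openstack --tags openstack-helm-neutron`` (an illustrative invocation with inventory flags omitted).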

View File

@@ -1,18 +0,0 @@
# Copyright (c) 2022 VEXXHOST, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
- import_playbook: vexxhost.atmosphere.ceph
- import_playbook: vexxhost.atmosphere.kubernetes
- import_playbook: vexxhost.atmosphere.openstack
- import_playbook: vexxhost.atmosphere.cleanup
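
This playbook (presumably the collection's ``site`` entry point) chained the whole deployment together; a hypothetical end-to-end run would look like ``ansible-playbook -i <inventory> vexxhost.atmosphere.site`` (invocation illustrative, inventory path assumed).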

View File

@@ -1,21 +0,0 @@
# Copyright (c) 2022 VEXXHOST, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
- hosts: controllers[0]
gather_facts: false
become: true
roles:
- role: openstack_helm_tempest
tags:
- openstack-helm-tempest

View File

@@ -1,114 +0,0 @@
import os
import datetime
def generate_ceph_cmd(sub_cmd, args, user_key=None, cluster='ceph', user='client.admin', container_image=None, interactive=False):
'''
Generate 'ceph' command line to execute
'''
if not user_key:
user_key = '/etc/ceph/{}.{}.keyring'.format(cluster, user)
cmd = pre_generate_ceph_cmd(
container_image=container_image, interactive=interactive)
base_cmd = [
'-n',
user,
'-k',
user_key,
'--cluster',
cluster
]
base_cmd.extend(sub_cmd)
cmd.extend(base_cmd + args)
return cmd
def container_exec(binary, container_image, interactive=False):
'''
Build the docker CLI to run a command inside a container
'''
container_binary = os.getenv('CEPH_CONTAINER_BINARY')
command_exec = [container_binary, 'run']
if interactive:
command_exec.extend(['--interactive'])
command_exec.extend(['--rm',
'--net=host',
'-v', '/etc/ceph:/etc/ceph:z',
'-v', '/var/lib/ceph/:/var/lib/ceph/:z',
'-v', '/var/log/ceph/:/var/log/ceph/:z',
'--entrypoint=' + binary, container_image])
return command_exec
def is_containerized():
'''
Check if we are running on a containerized cluster
'''
if 'CEPH_CONTAINER_IMAGE' in os.environ:
container_image = os.getenv('CEPH_CONTAINER_IMAGE')
else:
container_image = None
return container_image
def pre_generate_ceph_cmd(container_image=None, interactive=False):
'''
Generate ceph command prefix
'''
if container_image:
cmd = container_exec('ceph', container_image, interactive=interactive)
else:
cmd = ['ceph']
return cmd
def exec_command(module, cmd, stdin=None):
'''
Execute command(s)
'''
binary_data = False
if stdin:
binary_data = True
rc, out, err = module.run_command(cmd, data=stdin, binary_data=binary_data)
return rc, cmd, out, err
def exit_module(module, out, rc, cmd, err, startd, changed=False, diff=dict(before="", after="")):
endd = datetime.datetime.now()
delta = endd - startd
result = dict(
cmd=cmd,
start=str(startd),
end=str(endd),
delta=str(delta),
rc=rc,
stdout=out.rstrip("\r\n"),
stderr=err.rstrip("\r\n"),
changed=changed,
diff=diff
)
module.exit_json(**result)
def fatal(message, module):
'''
Report a fatal error and exit
'''
if module:
module.fail_json(msg=message, rc=1)
else:
raise Exception(message)
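
As a concrete illustration derived from the defaults above, ``generate_ceph_cmd(sub_cmd=['auth'], args=['ls'])`` returns ``['ceph', '-n', 'client.admin', '-k', '/etc/ceph/ceph.client.admin.keyring', '--cluster', 'ceph', 'auth', 'ls']``; when ``CEPH_CONTAINER_IMAGE`` is set in the environment, the same command instead gains a container ``run ... --entrypoint=ceph`` prefix built by ``container_exec``.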

View File

@@ -1,43 +0,0 @@
#!/usr/bin/python3
from ansible.module_utils.basic import AnsibleModule
def run_module():
module_args = dict(
who=dict(type='str', required=True),
name=dict(type='str', required=True),
value=dict(type='str', required=True),
)
module = AnsibleModule(
argument_spec=module_args,
supports_check_mode=True
)
who = module.params['who']
name = module.params['name']
value = module.params['value']
changed = False
_, out, _ = module.run_command(
['ceph', 'config', 'get', who, name], check_rc=True
)
if out.strip() != value:
changed = True
if not module.check_mode:
_, _, _ = module.run_command(
['ceph', 'config', 'set', who, name, value], check_rc=True
)
module.exit_json(changed=changed)
def main():
run_module()
if __name__ == '__main__':
main()
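
A usage sketch for this module (the target and value here are illustrative, not taken from the roles):

- name: Ensure the OSD memory target is set
  ceph_config:
    who: osd
    name: osd_memory_target
    value: "4294967296"  # illustrative value; all three options are required strings

Because the module first runs ``ceph config get`` and only calls ``ceph config set`` on a mismatch, the task stays idempotent and honors check mode.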

View File

@@ -1,692 +0,0 @@
#!/usr/bin/python3
# Copyright 2018, Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.vexxhost.atmosphere.plugins.module_utils.ca_common import generate_ceph_cmd, \
is_containerized, \
container_exec, \
fatal
import datetime
import json
import os
import struct
import time
import base64
import socket
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = '''
---
module: ceph_key
author: Sebastien Han <seb@redhat.com>
short_description: Manage Cephx key(s)
version_added: "2.6"
description:
- Manage CephX creation, deletion and updates.
It can also list and get information about keyring(s).
options:
cluster:
description:
- The ceph cluster name.
required: false
default: ceph
name:
description:
- name of the CephX key
required: true
user:
description:
- entity used to perform operation.
It corresponds to the -n option (--name)
required: false
user_key:
description:
- the path to the keyring corresponding to the
user being used.
It corresponds to the -k option (--keyring)
state:
description:
- If 'present' is used, the module creates a keyring
with the associated capabilities.
If 'present' is used and a secret is provided, the module
will always add the key, which means it will update
the keyring if the secret changes; the same goes for
the capabilities.
If 'absent' is used, the module will simply delete the keyring.
If 'list' is used, the module will list all the keys and will
return a json output.
If 'info' is used, the module will return in a json format the
description of a given keyring.
If 'generate_secret' is used, the module will simply output a cephx keyring.
required: false
choices: ['present', 'update', 'absent', 'list', 'info', 'fetch_initial_keys', 'generate_secret']
default: present
caps:
description:
- CephX key capabilities
default: None
required: false
secret:
description:
- keyring's secret value
required: false
default: None
import_key:
description:
- Whether or not to import the created keyring into Ceph.
This can be useful for someone who only wants to generate keyrings
but not add them into Ceph.
required: false
default: True
dest:
description:
- Destination to write the keyring, can be a file or a directory
required: false
default: /etc/ceph/
fetch_initial_keys:
description:
- Fetch client.admin and bootstrap key.
This is only needed for Nautilus and above.
Writes the initial keys generated by the monitor down to the filesystem. # noqa: E501
This command can ONLY run from a monitor node.
required: false
default: false
output_format:
description:
- The key output format when retrieving the information of an
entity.
required: false
default: json
'''
EXAMPLES = '''
keys_to_create:
- { name: client.key, key: "AQAin8tUUK84ExAA/QgBtI7gEMWdmnvKBzlXdQ==", caps: { mon: "allow rwx", mds: "allow *" } , mode: "0600" } # noqa: E501
- { name: client.cle, caps: { mon: "allow r", osd: "allow *" } , mode: "0600" } # noqa: E501
caps:
mon: "allow rwx"
mds: "allow *"
- name: create ceph admin key
ceph_key:
name: client.admin
state: present
secret: AQAin8tU2DsKFBAAFIAzVTzkL3+gtAjjpQiomw==
caps:
mon: allow *
osd: allow *
mgr: allow *
mds: allow
mode: 0400
import_key: False
- name: create monitor initial keyring
ceph_key:
name: mon.
state: present
secret: AQAin8tUMICVFBAALRHNrV0Z4MXupRw4v9JQ6Q==
caps:
mon: allow *
dest: "/var/lib/ceph/tmp/"
import_key: False
- name: create cephx key
ceph_key:
name: "{{ keys_to_create }}"
user: client.bootstrap-rgw
user_key: /var/lib/ceph/bootstrap-rgw/ceph.keyring
state: present
caps: "{{ caps }}"
- name: create cephx key but don't import it in Ceph
ceph_key:
name: "{{ keys_to_create }}"
state: present
caps: "{{ caps }}"
import_key: False
- name: delete cephx key
ceph_key:
name: "my_key"
state: absent
- name: info cephx key
ceph_key:
name: "my_key""
state: info
- name: info cephx admin key (plain)
ceph_key:
name: client.admin
output_format: plain
state: info
register: client_admin_key
- name: list cephx keys
ceph_key:
state: list
- name: fetch cephx keys
ceph_key:
state: fetch_initial_keys
'''
RETURN = '''# '''
CEPH_INITIAL_KEYS = ['client.admin', 'client.bootstrap-mds', 'client.bootstrap-mgr', # noqa: E501
'client.bootstrap-osd', 'client.bootstrap-rbd', 'client.bootstrap-rbd-mirror', 'client.bootstrap-rgw'] # noqa: E501
def str_to_bool(val):
try:
val = val.lower()
except AttributeError:
val = str(val).lower()
if val == 'true':
return True
elif val == 'false':
return False
else:
raise ValueError("Invalid input value: %s" % val)
def generate_secret():
'''
Generate a CephX secret
'''
key = os.urandom(16)
# CephX binary key format (little-endian): a 16-bit type (1 = AES),
# the creation time as 32-bit seconds and nanoseconds, then the
# 16-bit length of the random key that follows.
header = struct.pack('<hiih', 1, int(time.time()), 0, len(key))
secret = base64.b64encode(header + key)
return secret
def generate_caps(_type, caps):
'''
Generate CephX capabilities list
'''
caps_cli = []
for k, v in caps.items():
# makes sure someone didn't pass an empty var,
# we don't want to add an empty cap
if len(k) == 0:
continue
if _type == "ceph-authtool":
caps_cli.extend(["--cap"])
caps_cli.extend([k, v])
return caps_cli
def generate_ceph_authtool_cmd(cluster, name, secret, caps, dest, container_image=None): # noqa: E501
'''
Generate 'ceph-authtool' command line to execute
'''
if container_image:
binary = 'ceph-authtool'
cmd = container_exec(
binary, container_image)
else:
binary = ['ceph-authtool']
cmd = binary
base_cmd = [
'--create-keyring',
dest,
'--name',
name,
'--add-key',
secret,
]
cmd.extend(base_cmd)
cmd.extend(generate_caps("ceph-authtool", caps))
return cmd
def create_key(module, result, cluster, user, user_key, name, secret, caps, import_key, dest, container_image=None): # noqa: E501
'''
Create a CephX key
'''
cmd_list = []
if not secret:
secret = generate_secret()
if user == 'client.admin':
args = ['import', '-i', dest]
else:
args = ['get-or-create', name]
args.extend(generate_caps(None, caps))
args.extend(['-o', dest])
cmd_list.append(generate_ceph_authtool_cmd(
cluster, name, secret, caps, dest, container_image))
if import_key or user != 'client.admin':
cmd_list.append(generate_ceph_cmd(sub_cmd=['auth'],
args=args,
cluster=cluster,
user=user,
user_key=user_key,
container_image=container_image))
return cmd_list
def delete_key(cluster, user, user_key, name, container_image=None):
'''
Delete a CephX key
'''
cmd_list = []
args = [
'del',
name,
]
cmd_list.append(generate_ceph_cmd(sub_cmd=['auth'],
args=args,
cluster=cluster,
user=user,
user_key=user_key,
container_image=container_image))
return cmd_list
def get_key(cluster, user, user_key, name, dest, container_image=None):
'''
Get a CephX key (write on the filesystem)
'''
cmd_list = []
args = [
'get',
name,
'-o',
dest,
]
cmd_list.append(generate_ceph_cmd(sub_cmd=['auth'],
args=args,
cluster=cluster,
user=user,
user_key=user_key,
container_image=container_image))
return cmd_list
def info_key(cluster, name, user, user_key, output_format, container_image=None): # noqa: E501
'''
Get information about a CephX key
'''
cmd_list = []
args = [
'get',
name,
'-f',
output_format,
]
cmd_list.append(generate_ceph_cmd(sub_cmd=['auth'],
args=args,
cluster=cluster,
user=user,
user_key=user_key,
container_image=container_image))
return cmd_list
def list_keys(cluster, user, user_key, container_image=None):
'''
List all CephX keys
'''
cmd_list = []
args = [
'ls',
'-f',
'json',
]
cmd_list.append(generate_ceph_cmd(sub_cmd=['auth'],
args=args,
cluster=cluster,
user=user,
user_key=user_key,
container_image=container_image))
return cmd_list
def exec_commands(module, cmd_list):
'''
Execute command(s)
'''
for cmd in cmd_list:
rc, out, err = module.run_command(cmd)
if rc != 0:
return rc, cmd, out, err
return rc, cmd, out, err
def lookup_ceph_initial_entities(module, out):
'''
Lookup Ceph initial keys entries in the auth map
'''
# convert out to json, ansible returns a string...
try:
out_dict = json.loads(out)
except ValueError as e:
fatal("Could not decode 'ceph auth list' json output: {}".format(e), module) # noqa: E501
entities = []
if "auth_dump" in out_dict:
for key in out_dict["auth_dump"]:
for k, v in key.items():
if k == "entity":
if v in CEPH_INITIAL_KEYS:
entities.append(v)
else:
fatal("'auth_dump' key not present in json output:", module) # noqa: E501
if len(entities) != len(CEPH_INITIAL_KEYS) and not str_to_bool(os.environ.get('CEPH_ROLLING_UPDATE', False)): # noqa: E501
# must be missing in auth_dump, as if it were in CEPH_INITIAL_KEYS
# it'd be in entities from the above test. Report what's missing.
missing = []
for e in CEPH_INITIAL_KEYS:
if e not in entities:
missing.append(e)
fatal("initial keyring does not contain keys: " + ' '.join(missing), module) # noqa: E501
return entities
def build_key_path(cluster, entity):
'''
Build key path depending on the key type
'''
if "admin" in entity:
path = "/etc/ceph"
keyring_filename = cluster + "." + entity + ".keyring"
key_path = os.path.join(path, keyring_filename)
elif "bootstrap" in entity:
path = "/var/lib/ceph"
# bootstrap keys show up as 'client.bootstrap-osd'
# however the directory is called '/var/lib/ceph/bootstrap-osd'
# so we need to strip the 'client.' prefix
entity_split = entity.split('.')[1]
keyring_filename = cluster + ".keyring"
key_path = os.path.join(path, entity_split, keyring_filename)
else:
return None
return key_path
def run_module():
module_args = dict(
cluster=dict(type='str', required=False, default='ceph'),
name=dict(type='str', required=False),
state=dict(type='str', required=False, default='present', choices=['present', 'update', 'absent', # noqa: E501
'list', 'info', 'fetch_initial_keys', 'generate_secret']), # noqa: E501
caps=dict(type='dict', required=False, default=None),
secret=dict(type='str', required=False, default=None, no_log=True),
import_key=dict(type='bool', required=False, default=True),
dest=dict(type='str', required=False, default='/etc/ceph/'),
user=dict(type='str', required=False, default='client.admin'),
user_key=dict(type='str', required=False, default=None),
output_format=dict(type='str', required=False, default='json', choices=['json', 'plain', 'xml', 'yaml']) # noqa: E501
)
module = AnsibleModule(
argument_spec=module_args,
supports_check_mode=True,
add_file_common_args=True,
)
file_args = module.load_file_common_arguments(module.params)
# Gather module parameters in variables
state = module.params['state']
name = module.params.get('name')
cluster = module.params.get('cluster')
caps = module.params.get('caps')
secret = module.params.get('secret')
import_key = module.params.get('import_key')
dest = module.params.get('dest')
user = module.params.get('user')
user_key = module.params.get('user_key')
output_format = module.params.get('output_format')
changed = False
result = dict(
changed=changed,
stdout='',
stderr='',
rc=0,
start='',
end='',
delta='',
)
if module.check_mode and state != "info":
module.exit_json(**result)
startd = datetime.datetime.now()
# will return either the image name or None
container_image = is_containerized()
# Test if the key exists, if it does we skip its creation
# We only want to run this check when a key needs to be added
# There is no guarantee that any cluster is running and we don't need one
_secret = secret
_caps = caps
key_exist = 1
if not user_key:
user_key_filename = '{}.{}.keyring'.format(cluster, user)
user_key_dir = '/etc/ceph'
user_key_path = os.path.join(user_key_dir, user_key_filename)
else:
user_key_path = user_key
if (state in ["present", "update"]):
# if dest is not a directory, the user wants to change the file's name
# (e,g: /etc/ceph/ceph.mgr.ceph-mon2.keyring)
if not os.path.isdir(dest):
file_path = dest
else:
if 'bootstrap' in dest:
# Build a different path for bootstrap keys as there are stored
# as /var/lib/ceph/bootstrap-rbd/ceph.keyring
keyring_filename = cluster + '.keyring'
else:
keyring_filename = cluster + "." + name + ".keyring"
file_path = os.path.join(dest, keyring_filename)
file_args['path'] = file_path
if import_key:
_info_key = []
rc, cmd, out, err = exec_commands(
module, info_key(cluster, name, user, user_key_path, output_format, container_image)) # noqa: E501
key_exist = rc
if not caps and key_exist != 0:
fatal("Capabilities must be provided when state is 'present'", module) # noqa: E501
if key_exist != 0 and secret is None and caps is None:
fatal("Keyring doesn't exist, you must provide 'secret' and 'caps'", module) # noqa: E501
if key_exist == 0:
_info_key = json.loads(out)
if not secret:
secret = _info_key[0]['key']
_secret = _info_key[0]['key']
if not caps:
caps = _info_key[0]['caps']
_caps = _info_key[0]['caps']
if secret == _secret and caps == _caps:
if not os.path.isfile(file_path):
rc, cmd, out, err = exec_commands(module, get_key(cluster, user, user_key_path, name, file_path, container_image)) # noqa: E501
result["rc"] = rc
if rc != 0:
result["stdout"] = "Couldn't fetch the key {0} at {1}.".format(name, file_path) # noqa: E501
module.exit_json(**result)
result["stdout"] = "fetched the key {0} at {1}.".format(name, file_path) # noqa: E501
result["stdout"] = "{0} already exists and doesn't need to be updated.".format(name) # noqa: E501
result["rc"] = 0
module.set_fs_attributes_if_different(file_args, False)
module.exit_json(**result)
else:
if os.path.isfile(file_path) and (not secret or not caps):
result["stdout"] = "{0} already exists in {1}, you must provide secret *and* caps when import_key is {2}".format(name, dest, import_key)  # noqa: E501
result["rc"] = 0
module.exit_json(**result)
if (key_exist == 0 and (secret != _secret or caps != _caps)) or key_exist != 0: # noqa: E501
rc, cmd, out, err = exec_commands(module, create_key(
module, result, cluster, user, user_key_path, name, secret, caps, import_key, file_path, container_image)) # noqa: E501
if rc != 0:
result["stdout"] = "Couldn't create or update {0}".format(name)
result["stderr"] = err
module.exit_json(**result)
module.set_fs_attributes_if_different(file_args, False)
changed = True
elif state == "absent":
if key_exist == 0:
rc, cmd, out, err = exec_commands(
module, delete_key(cluster, user, user_key_path, name, container_image)) # noqa: E501
if rc == 0:
changed = True
else:
rc = 0
elif state == "info":
rc, cmd, out, err = exec_commands(
module, info_key(cluster, name, user, user_key_path, output_format, container_image)) # noqa: E501
elif state == "list":
rc, cmd, out, err = exec_commands(
module, list_keys(cluster, user, user_key_path, container_image))
elif state == "fetch_initial_keys":
hostname = socket.gethostname().split('.', 1)[0]
user = "mon."
keyring_filename = cluster + "-" + hostname + "/keyring"
user_key_path = os.path.join("/var/lib/ceph/mon/", keyring_filename)
rc, cmd, out, err = exec_commands(
module, list_keys(cluster, user, user_key_path, container_image))
if rc != 0:
result["stdout"] = "failed to retrieve ceph keys"
result["sdterr"] = err
result['rc'] = 0
module.exit_json(**result)
entities = lookup_ceph_initial_entities(module, out)
output_format = "plain"
for entity in entities:
key_path = build_key_path(cluster, entity)
if key_path is None:
fatal("Failed to build key path, no entity yet?", module)
elif os.path.isfile(key_path):
# if the key is already on the filesystem
# there is no need to fetch it again
continue
extra_args = [
'-o',
key_path,
]
info_cmd = info_key(cluster, entity, user,
user_key_path, output_format, container_image)
# we use info_cmd[0] because info_cmd is a list containing a single command list
info_cmd[0].extend(extra_args)
rc, cmd, out, err = exec_commands(
module, info_cmd) # noqa: E501
file_args = module.load_file_common_arguments(module.params)
file_args['path'] = key_path
module.set_fs_attributes_if_different(file_args, False)
elif state == "generate_secret":
out = generate_secret().decode()
cmd = ''
rc = 0
err = ''
changed = True
endd = datetime.datetime.now()
delta = endd - startd
result = dict(
cmd=cmd,
start=str(startd),
end=str(endd),
delta=str(delta),
rc=rc,
stdout=out.rstrip("\r\n"),
stderr=err.rstrip("\r\n"),
changed=changed,
)
if rc != 0:
module.fail_json(msg='non-zero return code', **result)
module.exit_json(**result)
def main():
run_module()
if __name__ == '__main__':
main()

View File

@@ -1,684 +0,0 @@
#!/usr/bin/python3
# Copyright 2020, Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.vexxhost.atmosphere.plugins.module_utils.ca_common import generate_ceph_cmd, \
pre_generate_ceph_cmd, \
is_containerized, \
exec_command, \
exit_module
import datetime
import json
import os
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = '''
---
module: ceph_pool
author: Guillaume Abrioux <gabrioux@redhat.com>
short_description: Manage Ceph Pools
version_added: "2.8"
description:
- Manage Ceph pool(s) creation, deletion and updates.
options:
cluster:
description:
- The ceph cluster name.
required: false
default: ceph
name:
description:
- name of the Ceph pool
required: true
state:
description:
If 'present' is used, the module creates a pool if it doesn't exist
or updates it if it already exists.
If 'absent' is used, the module will simply delete the pool.
If 'list' is used, the module will return all details about the
existing pools (JSON formatted).
required: false
choices: ['present', 'absent', 'list']
default: present
size:
description:
- set the replica size of the pool.
required: false
default: 3
min_size:
description:
- set the min_size parameter of the pool.
required: false
default: default to `osd_pool_default_min_size` (ceph)
pg_num:
description:
- set the pg_num of the pool.
required: false
default: default to `osd_pool_default_pg_num` (ceph)
pgp_num:
description:
- set the pgp_num of the pool.
required: false
default: default to `osd_pool_default_pgp_num` (ceph)
pg_autoscale_mode:
description:
- set the pg autoscaler on the pool.
required: false
default: 'on'
target_size_ratio:
description:
- set the target_size_ratio on the pool
required: false
default: None
pool_type:
description:
- set the pool type, either 'replicated' or 'erasure'
required: false
default: 'replicated'
erasure_profile:
description:
- When pool_type = 'erasure', set the erasure profile of the pool
required: false
default: 'default'
rule_name:
description:
- Set the crush rule name assigned to the pool
required: false
default: 'replicated_rule' when pool_type is 'replicated', else None
expected_num_objects:
description:
- Set the expected_num_objects parameter of the pool.
required: false
default: '0'
application:
description:
- Set the pool application on the pool.
required: false
default: None
'''
EXAMPLES = '''
pools:
- { name: foo, size: 3, application: rbd, pool_type: 'replicated',
pg_autoscale_mode: 'on' }
- hosts: all
become: true
tasks:
- name: create a pool
ceph_pool:
name: "{{ item.name }}"
state: present
size: "{{ item.size }}"
application: "{{ item.application }}"
pool_type: "{{ item.pool_type }}"
pg_autoscale_mode: "{{ item.pg_autoscale_mode }}"
with_items: "{{ pools }}"
'''
RETURN = '''# '''
def check_pool_exist(cluster,
name,
user,
user_key,
output_format='json',
container_image=None):
'''
Check if a given pool exists
'''
args = ['stats', name, '-f', output_format]
cmd = generate_ceph_cmd(sub_cmd=['osd', 'pool'],
args=args,
cluster=cluster,
user=user,
user_key=user_key,
container_image=container_image)
return cmd
def generate_get_config_cmd(param,
cluster,
user,
user_key,
container_image=None):
_cmd = pre_generate_ceph_cmd(container_image=container_image)
args = [
'-n',
user,
'-k',
user_key,
'--cluster',
cluster,
'config',
'get',
'mon.*',
param
]
cmd = _cmd + args
return cmd
def get_application_pool(cluster,
name,
user,
user_key,
output_format='json',
container_image=None):
'''
Get application type enabled on a given pool
'''
args = ['application', 'get', name, '-f', output_format]
cmd = generate_ceph_cmd(sub_cmd=['osd', 'pool'],
args=args,
cluster=cluster,
user=user,
user_key=user_key,
container_image=container_image)
return cmd
def enable_application_pool(cluster,
name,
application,
user,
user_key,
container_image=None):
'''
Enable application on a given pool
'''
args = ['application', 'enable', name, application]
cmd = generate_ceph_cmd(sub_cmd=['osd', 'pool'],
args=args,
cluster=cluster,
user=user,
user_key=user_key,
container_image=container_image)
return cmd
def disable_application_pool(cluster,
name,
application,
user,
user_key,
container_image=None):
'''
Disable application on a given pool
'''
args = ['application', 'disable', name,
application, '--yes-i-really-mean-it']
cmd = generate_ceph_cmd(sub_cmd=['osd', 'pool'],
args=args,
cluster=cluster,
user=user,
user_key=user_key,
container_image=container_image)
return cmd
def get_pool_details(module,
cluster,
name,
user,
user_key,
output_format='json',
container_image=None):
'''
Get details about a given pool
'''
args = ['ls', 'detail', '-f', output_format]
cmd = generate_ceph_cmd(sub_cmd=['osd', 'pool'],
args=args,
cluster=cluster,
user=user,
user_key=user_key,
container_image=container_image)
rc, cmd, out, err = exec_command(module, cmd)
if rc == 0:
out = [p for p in json.loads(out.strip()) if p['pool_name'] == name][0]
_rc, _cmd, application_pool, _err = exec_command(module,
get_application_pool(cluster, # noqa: E501
name, # noqa: E501
user, # noqa: E501
user_key, # noqa: E501
container_image=container_image)) # noqa: E501
# This is a trick because "target_size_ratio" isn't present at the same
# level in the dict
# ie:
# {
# 'pg_num': 8,
# 'pgp_num': 8,
# 'pg_autoscale_mode': 'on',
# 'options': {
# 'target_size_ratio': 0.1
# }
# }
# If 'target_size_ratio' is present in 'options', we set it, this way we
# end up with a dict containing all needed keys at the same level.
if 'target_size_ratio' in out['options'].keys():
out['target_size_ratio'] = out['options']['target_size_ratio']
else:
out['target_size_ratio'] = None
application = list(json.loads(application_pool.strip()).keys())
if len(application) == 0:
out['application'] = ''
else:
out['application'] = application[0]
return rc, cmd, out, err
def compare_pool_config(user_pool_config, running_pool_details):
'''
Compare user input config pool details with current running pool details
'''
delta = {}
filter_keys = ['pg_num', 'pg_placement_num', 'size',
'pg_autoscale_mode', 'target_size_ratio']
for key in filter_keys:
if (str(running_pool_details[key]) != user_pool_config[key]['value'] and # noqa: E501
user_pool_config[key]['value']):
delta[key] = user_pool_config[key]
if (running_pool_details['application'] !=
user_pool_config['application']['value'] and
user_pool_config['application']['value']):
delta['application'] = {}
delta['application']['new_application'] = user_pool_config['application']['value'] # noqa: E501
# to be improved (for update_pools()...)
delta['application']['value'] = delta['application']['new_application']
delta['application']['old_application'] = running_pool_details['application'] # noqa: E501
return delta
def list_pools(cluster,
user,
user_key,
details,
output_format='json',
container_image=None):
'''
List existing pools
'''
args = ['ls']
if details:
args.append('detail')
args.extend(['-f', output_format])
cmd = generate_ceph_cmd(sub_cmd=['osd', 'pool'],
args=args,
cluster=cluster,
user=user,
user_key=user_key,
container_image=container_image)
return cmd
def create_pool(cluster,
name,
user,
user_key,
user_pool_config,
container_image=None):
'''
Create a new pool
'''
args = ['create', user_pool_config['pool_name']['value'],
user_pool_config['type']['value']]
if user_pool_config['pg_autoscale_mode']['value'] != 'on':
args.extend(['--pg_num',
user_pool_config['pg_num']['value'],
'--pgp_num',
user_pool_config['pgp_num']['value'] or
user_pool_config['pg_num']['value']])
elif user_pool_config['target_size_ratio']['value']:
args.extend(['--target_size_ratio',
user_pool_config['target_size_ratio']['value']])
if user_pool_config['type']['value'] == 'replicated':
args.extend([user_pool_config['crush_rule']['value'],
'--expected_num_objects',
user_pool_config['expected_num_objects']['value'],
'--autoscale-mode',
user_pool_config['pg_autoscale_mode']['value']])
if (user_pool_config['size']['value'] and
user_pool_config['type']['value'] == "replicated"):
args.extend(['--size', user_pool_config['size']['value']])
elif user_pool_config['type']['value'] == 'erasure':
args.extend([user_pool_config['erasure_profile']['value']])
if user_pool_config['crush_rule']['value']:
args.extend([user_pool_config['crush_rule']['value']])
args.extend(['--expected_num_objects',
user_pool_config['expected_num_objects']['value'],
'--autoscale-mode',
user_pool_config['pg_autoscale_mode']['value']])
cmd = generate_ceph_cmd(sub_cmd=['osd', 'pool'],
args=args,
cluster=cluster,
user=user,
user_key=user_key,
container_image=container_image)
return cmd
def remove_pool(cluster, name, user, user_key, container_image=None):
'''
Remove a pool
'''
args = ['rm', name, name, '--yes-i-really-really-mean-it']
cmd = generate_ceph_cmd(sub_cmd=['osd', 'pool'],
args=args,
cluster=cluster,
user=user,
user_key=user_key,
container_image=container_image)
return cmd
def update_pool(module, cluster, name,
user, user_key, delta, container_image=None):
'''
Update an existing pool
'''
report = ""
for key in delta.keys():
if key != 'application':
args = ['set',
name,
delta[key]['cli_set_opt'],
delta[key]['value']]
cmd = generate_ceph_cmd(sub_cmd=['osd', 'pool'],
args=args,
cluster=cluster,
user=user,
user_key=user_key,
container_image=container_image)
rc, cmd, out, err = exec_command(module, cmd)
if rc != 0:
return rc, cmd, out, err
else:
rc, cmd, out, err = exec_command(module, disable_application_pool(cluster, name, delta['application']['old_application'], user, user_key, container_image=container_image)) # noqa: E501
if rc != 0:
return rc, cmd, out, err
rc, cmd, out, err = exec_command(module, enable_application_pool(cluster, name, delta['application']['new_application'], user, user_key, container_image=container_image)) # noqa: E501
if rc != 0:
return rc, cmd, out, err
report = report + "\n" + "{} has been updated: {} is now {}".format(name, key, delta[key]['value']) # noqa: E501
out = report
return rc, cmd, out, err
def run_module():
module_args = dict(
cluster=dict(type='str', required=False, default='ceph'),
name=dict(type='str', required=True),
state=dict(type='str', required=False, default='present',
choices=['present', 'absent', 'list']),
details=dict(type='bool', required=False, default=False),
size=dict(type='str', required=False),
min_size=dict(type='str', required=False),
pg_num=dict(type='str', required=False),
pgp_num=dict(type='str', required=False),
pg_autoscale_mode=dict(type='str', required=False, default='on'),
target_size_ratio=dict(type='str', required=False, default=None),
pool_type=dict(type='str', required=False, default='replicated',
choices=['replicated', 'erasure', '1', '3']),
erasure_profile=dict(type='str', required=False, default='default'),
rule_name=dict(type='str', required=False, default=None),
expected_num_objects=dict(type='str', required=False, default="0"),
application=dict(type='str', required=False, default=None),
)
module = AnsibleModule(
argument_spec=module_args,
supports_check_mode=True
)
# Gather module parameters in variables
cluster = module.params.get('cluster')
name = module.params.get('name')
state = module.params.get('state')
details = module.params.get('details')
size = module.params.get('size')
min_size = module.params.get('min_size')
pg_num = module.params.get('pg_num')
pgp_num = module.params.get('pgp_num')
pg_autoscale_mode = module.params.get('pg_autoscale_mode')
target_size_ratio = module.params.get('target_size_ratio')
application = module.params.get('application')
if (module.params.get('pg_autoscale_mode').lower() in
['true', 'on', 'yes']):
pg_autoscale_mode = 'on'
elif (module.params.get('pg_autoscale_mode').lower() in
['false', 'off', 'no']):
pg_autoscale_mode = 'off'
else:
pg_autoscale_mode = 'warn'
if module.params.get('pool_type') == '1':
pool_type = 'replicated'
elif module.params.get('pool_type') == '3':
pool_type = 'erasure'
else:
pool_type = module.params.get('pool_type')
if not module.params.get('rule_name'):
rule_name = 'replicated_rule' if pool_type == 'replicated' else None
else:
rule_name = module.params.get('rule_name')
erasure_profile = module.params.get('erasure_profile')
expected_num_objects = module.params.get('expected_num_objects')
user_pool_config = {
'pool_name': {'value': name},
'pg_num': {'value': pg_num, 'cli_set_opt': 'pg_num'},
'pgp_num': {'value': pgp_num, 'cli_set_opt': 'pgp_num'},
'pg_autoscale_mode': {'value': pg_autoscale_mode,
'cli_set_opt': 'pg_autoscale_mode'},
'target_size_ratio': {'value': target_size_ratio,
'cli_set_opt': 'target_size_ratio'},
'application': {'value': application},
'type': {'value': pool_type},
'erasure_profile': {'value': erasure_profile},
'crush_rule': {'value': rule_name, 'cli_set_opt': 'crush_rule'},
'expected_num_objects': {'value': expected_num_objects},
'size': {'value': size, 'cli_set_opt': 'size'},
'min_size': {'value': min_size}
}
if module.check_mode:
module.exit_json(
changed=False,
stdout='',
stderr='',
rc=0,
start='',
end='',
delta='',
)
startd = datetime.datetime.now()
changed = False
# will return either the image name or None
container_image = is_containerized()
user = "client.admin"
keyring_filename = cluster + '.' + user + '.keyring'
user_key = os.path.join("/etc/ceph/", keyring_filename)
if state == "present":
rc, cmd, out, err = exec_command(module,
check_pool_exist(cluster,
name,
user,
user_key,
container_image=container_image)) # noqa: E501
if rc == 0:
running_pool_details = get_pool_details(module,
cluster,
name,
user,
user_key,
container_image=container_image) # noqa: E501
user_pool_config['pg_placement_num'] = {'value': str(running_pool_details[2]['pg_placement_num']), 'cli_set_opt': 'pgp_num'} # noqa: E501
delta = compare_pool_config(user_pool_config,
running_pool_details[2])
if len(delta) > 0:
keys = list(delta.keys())
details = running_pool_details[2]
if details['erasure_code_profile'] and 'size' in keys:
del delta['size']
if details['pg_autoscale_mode'] == 'on':
delta.pop('pg_num', None)
delta.pop('pgp_num', None)
if len(delta) == 0:
out = "Skipping pool {}.\nUpdating either 'size' on an erasure-coded pool or 'pg_num'/'pgp_num' on a pg autoscaled pool is incompatible".format(name) # noqa: E501
else:
rc, cmd, out, err = update_pool(module,
cluster,
name,
user,
user_key,
delta,
container_image=container_image) # noqa: E501
if rc == 0:
changed = True
else:
out = "Pool {} already exists and there is nothing to update.".format(name) # noqa: E501
else:
rc, cmd, out, err = exec_command(module,
create_pool(cluster,
name,
user,
user_key,
user_pool_config=user_pool_config, # noqa: E501
container_image=container_image)) # noqa: E501
if user_pool_config['application']['value']:
rc, _, _, _ = exec_command(module,
enable_application_pool(cluster,
name,
user_pool_config['application']['value'], # noqa: E501
user,
user_key,
container_image=container_image)) # noqa: E501
if user_pool_config['min_size']['value']:
# not implemented yet
pass
changed = True
elif state == "list":
rc, cmd, out, err = exec_command(module,
list_pools(cluster,
user,
user_key,
details,
container_image=container_image))  # noqa: E501
if rc != 0:
out = "Couldn't list pool(s) present on the cluster"
elif state == "absent":
rc, cmd, out, err = exec_command(module,
check_pool_exist(cluster,
name, user,
user_key,
container_image=container_image)) # noqa: E501
if rc == 0:
rc, cmd, out, err = exec_command(module,
remove_pool(cluster,
name,
user,
user_key,
container_image=container_image)) # noqa: E501
changed = True
else:
rc = 0
out = "Skipped, since pool {} doesn't exist".format(name)
exit_module(module=module, out=out, rc=rc, cmd=cmd, err=err, startd=startd,
changed=changed)
def main():
run_module()
if __name__ == '__main__':
main()

View File

@@ -1,5 +0,0 @@
---
fixes:
- AlertManager did not have any persistence, which meant that any silences
would not last through a restart of the pod. This patch adds persistence
so that silences survive a restart of the pod.
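
For reference, Alertmanager persistence in ``kube-prometheus-stack`` is typically expressed through chart values along these lines (a sketch, not the actual patch; the size and access mode are placeholders):

alertmanager:
  alertmanagerSpec:
    storage:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi  # placeholder size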

View File

@@ -1,3 +0,0 @@
---
features:
- Added ``ansible-lint`` to all of the playbooks and roles.

View File

@@ -1,3 +0,0 @@
---
features:
- Added ``AvailabilityZoneFilter`` for the OpenStack Nova service.

View File

@@ -1,3 +0,0 @@
---
fixes:
- Added the Barbican role to the deployment

View File

@@ -1,3 +0,0 @@
---
features:
- Added ``ceph_config`` module to allow tweaking Ceph configuration via IaC.

View File

@@ -1,4 +0,0 @@
---
features:
- Added commit message checks. Starting now, commits must include ``Sem-Ver``
tags in the commit message as well as a release note in the ``releasenotes``

View File

@@ -1,5 +0,0 @@
---
features:
- Added native deployment of CoreDNS dedicated to forwarding and caching DNS
requests for the cloud. By default, it is enabled and uses DNS over TLS with
both CloudFlare and Google DNS.

View File

@@ -1,6 +0,0 @@
---
features:
- Added CoreDNS metrics for the Neutron service.
fixes:
- Fixed issues with waiting for deploys when upgrading existing releases
in larger environments.

View File

@@ -1,3 +0,0 @@
---
features:
- Added documentation on using DNS01 challenges for certificates.

View File

@@ -1,3 +0,0 @@
---
features:
- Added mirroring for GitHub

View File

@@ -1,3 +0,0 @@
---
features:
- Added ``ipmi-exporter`` with alerting.

View File

@@ -1,3 +0,0 @@
---
fixes:
- Added wheels for master branches to allow for building Tempest images.

View File

@@ -1,3 +0,0 @@
---
features:
- Added support for migrating the IP from an interface when adding it to a bridge

View File

@@ -1,5 +0,0 @@
---
features:
- Added the ability to customize the Heat stack properties
fixes:
- Added notes on working around a Molecule bug.

View File

@@ -1,6 +0,0 @@
---
fixes:
- Live migrations were taking longer than expected because the default value
of the ``live_migration_events`` option had regressed to ``false``, as the
addition of this value was forgotten. They should now complete on time with
no network outages.

View File

@@ -1,4 +0,0 @@
---
fixes:
- Started ignoring ``tbr`` interfaces inside ``node-exporter``, which are used
by trunk interfaces with Neutron.
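
Such filtering is typically done with a device-exclude regex on ``node-exporter``; a sketch assuming the chart's ``extraArgs`` passthrough (the exact flag and regex in the patch may differ):

prometheus-node-exporter:
  extraArgs:
    - --collector.netdev.device-exclude=^tbr.*$  # hypothetical regex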

View File

@@ -1,3 +0,0 @@
---
features:
- Added ``openstack-exporter`` with alerting.

View File

@@ -1,3 +0,0 @@
---
features:
- Added ability to create overrides for Prometheus monitoring.

View File

@@ -1,3 +0,0 @@
---
features:
- Add support for multiple CSIs including PowerStore

View File

@@ -1,4 +0,0 @@
---
features:
- Add jobs to promote the generated artifact to the tarballs server in order
to make it easy to pull in the latest version.

View File

@@ -1,4 +0,0 @@
---
fixes:
- Added "provides" to wheels jobs in order to allow passing the artifact to
image build jobs.

View File

@@ -1,9 +0,0 @@
---
features:
- Added a playbook to automatically generate secrets for all roles, for any
that are not already defined.
upgrade:
- When upgrading to this version, you'll need to make sure that you destroy
your existing Molecule testing environment before converging again, since
it now uses automatically generated secrets instead of hard-coded ones.
The secrets are stored inside the ``MOLECULE_EPHEMERAL_DIRECTORY``.

View File

@@ -1,4 +0,0 @@
---
features:
- Added automatic SSH key generation for workspace, as well as cold & live
migration support by enabling SSH keys.

View File

@@ -1,3 +0,0 @@
---
features:
- Added tempest images built from the master branch.

View File

@@ -1,3 +0,0 @@
---
fixes:
- Added "upper-constraints.txt" to wheels archive.

View File

@@ -1,3 +0,0 @@
---
features:
- Added Zuul jobs for building wheels and publishing them

View File

@@ -1,4 +0,0 @@
---
features:
- Added a playbook to generate a workspace for deployment and integrated it
into Molecule in order to make sure we always test it.

View File

@@ -1,3 +0,0 @@
---
features:
- Added Zuul artifacts with built collections for all commits.

View File

@@ -1,4 +0,0 @@
---
features:
- |
Load the kubectl & helm auto-completion in the ``.bashrc`` file
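
One plausible shape for such a task (a sketch; the path and module choice are assumptions, not the role's exact implementation):

- name: Load kubectl & helm completion in .bashrc
  ansible.builtin.blockinfile:
    path: /root/.bashrc  # assumed target user
    block: |
      source <(kubectl completion bash)
      source <(helm completion bash)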

Some files were not shown because too many files have changed in this diff.