This proposal adds support to Kolla-Ansible for deploying CloudKitty with InfluxDB as its storage backend. Support for InfluxDB as a CloudKitty storage backend was introduced with the following commit:
https://github.com/openstack/cloudkitty/commit/c4758e78b49386145309a44623502f8095a2c7ee
Problem Description
===================
With the addition of support for InfluxDB in CloudKitty, which reaches
general availability in the Stein release, we need a method to easily
configure/support this storage backend via Kolla-Ansible.
Kolla-Ansible is already able to deploy and configure an InfluxDB
system. Therefore, this proposal will connect CloudKitty to the
InfluxDB deployment configured via Kolla-Ansible and use it as the
storage backend.
If we do not provide a method for users (operators) to manage the
CloudKitty storage backend via Kolla-Ansible, they have to execute
these changes/configurations manually (or via some other set of
automated scripts), which creates a distributed set of configuration
files and "configuration" scripts with different versioning schemas
and life cycles.
Proposed Change
===============
Architecture
------------
We propose a flag that users can set to make Kolla-Ansible configure
CloudKitty to use InfluxDB as the storage backend. When this flag is
enabled, Kolla-Ansible will also automatically enable the deployment
of InfluxDB.
CloudKitty will be configured according to [1] and [2]. We will also
externalize the "retention_policy", "use_ssl", and "insecure" options,
to allow fine-grained configuration by operators. All of these options
are only applied when explicitly configured; when they are not set,
the default value/behavior defined in CloudKitty is used. Moreover,
when "use_ssl" is set to "true", the user will be able to point
"cafile" to a custom trusted CA file. Again, if these variables are
not set, CloudKitty's defaults are used.
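For illustration, the resulting cloudkitty.conf would follow the shape
documented in [1] (a sketch; values are examples, not defaults)::

  [storage]
  backend = influxdb
  version = 2

  [storage_influxdb]
  database = cloudkitty
  host = <influxdb host>
  port = 8086
  retention_policy = autogen
  use_ssl = true
  cafile = /path/to/ca.pem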
Implementation
--------------
We need to introduce a new variable called
`cloudkitty_storage_backend`. Valid options are `sqlalchemy` or
`influxdb`. The default value in Kolla-Ansible is `sqlalchemy`, for
backward compatibility. Then, the first step is to change the
definition of the following variable:
`/ansible/group_vars/all.yml:enable_influxdb: "{{ enable_monasca | bool }}"`
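A sketch of the updated definition, folding in the CloudKitty
condition described next (the exact expression is an assumption)::

  enable_influxdb: "{{ enable_monasca | bool or (enable_cloudkitty | bool and cloudkitty_storage_backend == 'influxdb') }}"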
We also need to enable InfluxDB when CloudKitty is configured to use
it as the storage backend. Afterwards, we need to create tasks in the
CloudKitty role that create the InfluxDB schema and render the
configuration files accordingly.
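A hedged sketch of such a schema-creation task, using Ansible's
influxdb_database module through Kolla-Ansible's kolla_toolbox plugin
(the task shape and variable names are assumptions)::

  - name: Creating CloudKitty InfluxDB database
    become: true
    kolla_toolbox:
      module_name: influxdb_database
      module_args:
        hostname: "{{ influxdb_address }}"
        port: "{{ influxdb_http_port }}"
        database_name: "{{ cloudkitty_influxdb_name }}"
    run_once: true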
Alternatives
------------
The alternative would be to execute the configurations manually or
handle them via a different set of scripts and configuration files,
which can become cumbersome over time.
Security Impact
---------------
None identified by the author of this spec.
Notifications Impact
--------------------
Operators that are already deploying CloudKitty with InfluxDB as the
storage backend will need to convert their configurations to
Kolla-Ansible (if they wish to adopt Kolla-Ansible to execute these
tasks).
Also, deployments (OpenStack environments) that were created with
CloudKitty using storage v1 will need to migrate all of their data to
v2 before enabling InfluxDB as the storage system.
Other End User Impact
---------------------
None.
Performance Impact
------------------
None.
Other Deployer Impact
---------------------
New configuration options will be available for CloudKitty.
* cloudkitty_storage_backend
* cloudkitty_influxdb_retention_policy
* cloudkitty_influxdb_use_ssl
* cloudkitty_influxdb_cafile
* cloudkitty_influxdb_insecure_connections
* cloudkitty_influxdb_name
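For example, in globals.yml (option names from the list above; values
illustrative)::

  cloudkitty_storage_backend: "influxdb"
  cloudkitty_influxdb_name: "cloudkitty"
  cloudkitty_influxdb_retention_policy: "autogen"
  cloudkitty_influxdb_use_ssl: "true"
  cloudkitty_influxdb_cafile: "/etc/ssl/certs/ca-certificates.crt"
  cloudkitty_influxdb_insecure_connections: "false"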
Developer Impact
----------------
None.
Implementation
==============
Assignee
--------
* `Rafael Weingärtner <rafaelweingartne>`
Work Items
----------
* Extend the InfluxDB "enable/disable" variable
* Add new tasks to configure CloudKitty according to the new
variables presented above
* Write documentation and release notes
Dependencies
============
None.
Documentation Impact
====================
New documentation for the feature.
References
==========
[1] `https://docs.openstack.org/cloudkitty/latest/admin/configuration/storage.html#influxdb-v2`
[2] `https://docs.openstack.org/cloudkitty/latest/admin/configuration/collector.html#metric-collection`
Change-Id: I65670cb827f8ca5f8529e1786ece635fe44475b0
Signed-off-by: Rafael Weingärtner <rafael@apache.org>
The Hitachi NAS Platform iSCSI driver was marked as not supported by
Cinder in the Ocata release [1].
[1] https://review.opendev.org/#/c/444287/
Change-Id: I1a25789374fddaefc57bc59badec06f91ee6a52a
Closes-Bug: #1832821
This change makes MariaDB the default database backend for Freezer,
and adds Elasticsearch as an optional backend. This is needed because
Freezer requires Elasticsearch version 2.3.0, while the default
Elasticsearch in kolla-ansible is 5.6.x, which doesn't work with
Freezer.
Added the needed options for the Elasticsearch backend:
- protocol
- address
- port
- number of replicas
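A sketch of what these options might look like in globals.yml
(variable names are hypothetical, not taken from this change):

    freezer_database_backend: "mariadb"   # or "elasticsearch"
    freezer_es_protocol: "{{ internal_protocol }}"
    freezer_es_address: "{{ kolla_internal_fqdn }}"
    freezer_es_port: "{{ elasticsearch_port }}"
    freezer_es_replicas: 1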
Change-Id: I88616c285bdb297fd1f738846ddffe1b08a7a827
Signed-off-by: Marek Svensson <marek@marex.st>
* When using Redis as the backend of osprofiler, it cannot connect to
Redis because the redis_connection_string is incorrect.
* Let other places that use Redis also use this variable.
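For reference, a valid osprofiler Redis connection string follows the
standard redis://host:port scheme; a minimal sketch of the shared
variable (names illustrative, not from this change):

    # redis_address is a placeholder for however the Redis host is resolved
    redis_connection_string: "redis://{{ redis_address }}:{{ redis_port }}"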
Change-Id: I14de6597932d05cd7f804a35c6764ba4ae9087cd
Closes-Bug: #1833200
Signed-off-by: ZijianGuo <guozijn@gmail.com>
The project has been retired and there will be no Train release [1].
This patch removes Neutron LBaaS support from Kolla.
[1] https://review.opendev.org/#/c/658494/
Change-Id: Ic0d3da02b9556a34d8c27ca21a1ebb3af1f5d34c
Qinling is an OpenStack project to provide "Function as a Service".
This project aims to provide a platform to support serverless functions.
Change-Id: I239a0130f8c8b061b531dab530d65172b0914d7c
Implements: blueprint ansible-qinling-support
Story: 2005760
Task: 33468
Right now every controller rotates fernet keys. This is nice because
should any controller die, we know the remaining ones will rotate the
keys. However, we are currently over-rotating the keys.
When we over-rotate keys, we get logs like this:
This is not a recognized Fernet token <token> TokenNotFound
Most clients can recover and get a new token, but some clients (like
Nova passing tokens to other services) can't do that because they
don't have the password to regenerate a new token.
With three controllers, in the keystone-fernet crontab we see the
once-a-day rotation correctly staggered across the three controllers:
ssh ctrl1 sudo cat /etc/kolla/keystone-fernet/crontab
0 0 * * * /usr/bin/fernet-rotate.sh
ssh ctrl2 sudo cat /etc/kolla/keystone-fernet/crontab
0 8 * * * /usr/bin/fernet-rotate.sh
ssh ctrl3 sudo cat /etc/kolla/keystone-fernet/crontab
0 16 * * * /usr/bin/fernet-rotate.sh
Currently with three controllers we have this keystone config:
[token]
expiration = 86400 (although, keystone default is one hour)
allow_expired_window = 172800 (this is the keystone default)
[fernet_tokens]
max_active_keys = 4
Currently, kolla-ansible configures key rotation according to the following:
rotation_interval = token_expiration / num_hosts
This means we rotate keys more quickly the more hosts we have, which doesn't
make much sense.
Keystone docs state:
max_active_keys =
((token_expiration + allow_expired_window) / rotation_interval) + 2
For details see:
https://docs.openstack.org/keystone/stein/admin/fernet-token-faq.html
Rotation is based on pushing out a staging key, so should any server
start using that key, the other servers will consider it valid. Then
each server in turn starts using the staging key, each in turn
demoting the existing primary key to a secondary key. Eventually you
prune the secondary keys when there is no token in the wild that would
need to be decrypted using that key. So this all makes sense.
This change adds new variables for fernet_token_allow_expired_window
and fernet_key_rotation_interval, so that we can calculate the correct
number of active keys. We now set the default rotation interval so as
to minimise the number of active keys to 3 - one primary, one
secondary, one buffer.
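As a sketch (variable names from this change; fernet_token_expiry is
the pre-existing expiration variable, and the exact expression is an
assumption):

    # max_active_keys = ((86400 + 172800) / 259200) + 2 = 3
    fernet_token_allow_expired_window: 172800
    fernet_key_rotation_interval: "{{ fernet_token_expiry + fernet_token_allow_expired_window }}"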
This change also fixes the fernet cron job generator, which was broken
in the following cases:
* requesting an interval of more than 1 day resulted in no jobs
* requesting an interval of more than 60 minutes, unless an exact
multiple of 60 minutes, resulted in no jobs
It should now be possible to request any interval up to a week divided
by the number of hosts.
Change-Id: I10c82dc5f83653beb60ddb86d558c5602153341a
Closes-Bug: #1809469
When integrating a 3rd party component into OpenStack with
kolla-ansible, it may be necessary to mount extra volumes into a
container.
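A sketch of how this could look for a service (variable names
hypothetical):

    default_extra_volumes: []
    nova_compute_extra_volumes: "{{ default_extra_volumes + ['/opt/vendor/plugin:/opt/vendor/plugin:ro'] }}"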
Change-Id: I69108209320edad4c4ffa37dabadff62d7340939
Implements: blueprint support-extra-volumes
Make an early start on the TODOs for the Train cycle.
1. Remove the task that removes the vitrage_collector container, which
was added in the Stein cycle to clean up this container which is no
longer deployed.
2. Remove globals.yml configuration in CI to disable Heat for upgrade
jobs. Heat is now enabled in the previous release (Stein).
3. Remove the deprecated variable cinder_iscsi_helper, which was renamed
to cinder_target_helper in Stein.
Change-Id: I774bf395e0bdd4db9c20c6289a22cf059fa42e1a
Now that the stable/stein branch has been cut, we can set the previous
release to Stein. This is done in kolla-ansible for rolling upgrades,
and in CI configuration for upgrade tests.
Change-Id: I87269738db9521fc22a6ce3aee67d9ab00d47e2a
Adds support to separate Swift access and replication traffic from other storage traffic.
In a deployment where both Ceph and Swift have been deployed,
this change adds functionality to support optional separation
of storage network traffic. This adds two new network interfaces,
'swift_storage_interface' and 'swift_replication_interface', which maintain
backwards compatibility.
The Swift access network interface is configured via 'swift_storage_interface',
which defaults to 'storage_interface'. The Swift replication network
interface is configured via 'swift_replication_interface', which
defaults to 'swift_storage_interface'.
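Expressed as defaults, taken directly from the description above:

    swift_storage_interface: "{{ storage_interface }}"
    swift_replication_interface: "{{ swift_storage_interface }}"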
If a separate replication network is used, Kolla Ansible now deploys separate
replication servers for the accounts, containers and objects, that listen on
this network. In this case, these services handle only replication traffic, and
the original account-, container- and object- servers only handle storage
user requests.
Change-Id: Ib39e081574e030126f2d08f51de89641ddb0d42e
This patch implements support for the elasticsearch-exporter in
kolla-ansible.
The configuration and prechecks are reused from the other exporters.
Depends-On: Id138f12e10102a6dd2cd8d84f2cc47aa29af3972
Change-Id: Iae0eac0179089f159804490bf71f1cf2c38dde54
Kolla-ansible does not have Cyborg support, so this change adds it.
Implements: blueprint add-cyborg-to-kolla-ansible
Depends-On: I497e67e3a754fccfd2ef5a82f13ccfaf890a6fcd
Change-Id: I6f7ae86f855c5c64697607356d0ff3161f91b239
This adds a horizon_listen_port option, which defaults to horizon_port
for backward compatibility.
This option allows the user to differentiate between the port the
service listens on, and the port the service is reachable on. This is
useful for external load balancers which live on the same host as the
service itself.
Change-Id: I1e47e9524fd9c41bbb2cd2fc80560e53d9296599
Implements: blueprint service-hostnames
This allows swift service endpoints to use custom hostnames, and adds the
following variables:
* swift_internal_fqdn
* swift_external_fqdn
These default to the old values of kolla_internal_fqdn or
kolla_external_fqdn.
This also adds a swift_proxy_server_listen_port option, which defaults to
swift_proxy_server_port for backward compatibility.
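Expressed as defaults (a sketch based on the description above; the
same pattern applies to the other service-hostname changes that
follow):

    swift_internal_fqdn: "{{ kolla_internal_fqdn }}"
    swift_external_fqdn: "{{ kolla_external_fqdn }}"
    swift_proxy_server_listen_port: "{{ swift_proxy_server_port }}"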
This option allows the user to differentiate between the port the
service listens on, and the port the service is reachable on. This is
useful for external load balancers which live on the same host as the
service itself.
While we're in here, use the ``internal_protocol`` variable for the swift
endpoint in cinder's swift backup driver configuration, instead of hardcoding
to ``http``.
Change-Id: Ibc01618383c26e16c0067f7f6b9cf5160d968d1e
Implements: blueprint service-hostnames
This allows gnocchi service endpoints to use custom hostnames, and adds the
following variables:
* gnocchi_internal_fqdn
* gnocchi_external_fqdn
These default to the old values of kolla_internal_fqdn or
kolla_external_fqdn.
This also adds a gnocchi_api_listen_port option, which defaults to
gnocchi_api_port for backward compatibility.
This option allows the user to differentiate between the port the
service listens on, and the port the service is reachable on. This is
useful for external load balancers which live on the same host as the
service itself.
Change-Id: Ic9a0f8130b19ed77987f45fd0e824b82ea7a7328
Implements: blueprint service-hostnames
This allows senlin service endpoints to use custom hostnames, and adds the
following variables:
* senlin_internal_fqdn
* senlin_external_fqdn
These default to the old values of kolla_internal_fqdn or
kolla_external_fqdn.
This also adds a senlin_api_listen_port option, which defaults to
senlin_api_port for backward compatibility.
This option allows the user to differentiate between the port the
service listens on, and the port the service is reachable on. This is
useful for external load balancers which live on the same host as the
service itself.
Change-Id: I26e8cfdde54aaf0648473f54136cf5350f356917
Implements: blueprint service-hostnames
This allows aodh service endpoints to use custom hostnames, and adds the
following variables:
* aodh_internal_fqdn
* aodh_external_fqdn
These default to the old values of kolla_internal_fqdn or
kolla_external_fqdn.
This also adds an aodh_api_listen_port option, which defaults to
aodh_api_port for backward compatibility.
This option allows the user to differentiate between the port the
service listens on, and the port the service is reachable on. This is
useful for external load balancers which live on the same host as the
service itself.
Change-Id: Iee08b725d066bfbe543d9319c47941d59c22212a
Implements: blueprint service-hostnames
This allows octavia service endpoints to use custom hostnames, and adds the
following variables:
* octavia_internal_fqdn
* octavia_external_fqdn
These default to the old values of kolla_internal_fqdn or
kolla_external_fqdn.
This also adds an octavia_api_listen_port option, which defaults to
octavia_api_port for backward compatibility.
This option allows the user to differentiate between the port the
service listens on, and the port the service is reachable on. This is
useful for external load balancers which live on the same host as the
service itself.
Change-Id: I1310eb5573a469b1a0e9549e853734455307a8b3
Implements: blueprint service-hostnames
This allows heat service endpoints to use custom hostnames, and adds the
following variables:
* heat_internal_fqdn
* heat_external_fqdn
* heat_cfn_internal_fqdn
* heat_cfn_external_fqdn
These default to the old values of kolla_internal_fqdn or
kolla_external_fqdn.
This also adds heat_api_listen_port and heat_api_cfn_listen_port
options, which default to heat_api_port and heat_api_cfn_port for
backward compatibility.
These options allow the user to differentiate between the port the
service listens on, and the port the service is reachable on. This is
useful for external load balancers which live on the same host as the
service itself.
Change-Id: Ifb8bb55799703883d81be6a55641be7b2474fd4e
Implements: blueprint service-hostnames
This allows barbican service endpoints to use custom hostnames, and adds the
following variables:
* barbican_internal_fqdn
* barbican_external_fqdn
These default to the old values of kolla_internal_fqdn or
kolla_external_fqdn.
This also adds a barbican_api_listen_port option, which defaults to
barbican_api_port for backward compatibility.
This option allows the user to differentiate between the port the
service listens on, and the port the service is reachable on. This is
useful for external load balancers which live on the same host as the
service itself.
Change-Id: I1807a9c8b64d737d0e278bb3e925fecb4fadfb08
Implements: blueprint service-hostnames
This allows ironic service endpoints to use custom hostnames, and adds the
following variables:
* ironic_internal_fqdn
* ironic_external_fqdn
* ironic_inspector_internal_fqdn
* ironic_inspector_external_fqdn
These default to the old values of kolla_internal_fqdn or
kolla_external_fqdn.
This also adds ironic_api_listen_port and ironic_inspector_listen_port
options, which default to ironic_api_port and ironic_inspector_port for
backward compatibility.
These options allow the user to differentiate between the port the
service listens on, and the port the service is reachable on. This is
useful for external load balancers which live on the same host as the
service itself.
Change-Id: I45b175e85866b4cfecad8451b202a5a27f888a84
Implements: blueprint service-hostnames
This allows designate service endpoints to use custom hostnames, and
adds the following variables:
* designate_internal_fqdn
* designate_external_fqdn
These default to the old values of kolla_internal_fqdn or
kolla_external_fqdn.
This also adds a designate_api_listen_port option, which defaults to
designate_api_port for backward compatibility.
This option allows the user to differentiate between the port the
service listens on, and the port the service is reachable on. This is
useful for external load balancers which live on the same host as the
service itself.
Change-Id: I654bb3d1109b96cbaff6f450655cd65f349a94e6
Implements: blueprint service-hostnames
This allows cinder service endpoints to use custom hostnames, and adds the
following variables:
* cinder_internal_fqdn
* cinder_external_fqdn
These default to the old values of kolla_internal_fqdn or
kolla_external_fqdn.
This also adds a cinder_api_listen_port option, which defaults to
cinder_api_port for backward compatibility.
This option allows the user to differentiate between the port the
service listens on, and the port the service is reachable on. This is
useful for external load balancers which live on the same host as the
service itself.
Change-Id: I2a5036456afac6135dca3723ae754ea9f8bc8475
Implements: blueprint service-hostnames
We're duplicating code to build the keystone URLs in nearly every
config, where we've already done it in group_vars. Replace the
redundancy with a variable that does the same thing.
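A sketch of the kind of shared group_vars variable intended (the name
and exact form are assumptions):

    keystone_internal_url: "{{ internal_protocol }}://{{ kolla_internal_fqdn }}:{{ keystone_public_port }}/v3"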
Change-Id: I207d77870e2535c1cdcbc5eaf704f0448ac85a7a
The iscsi_helper option was deprecated in favour of target_helper in
Queens, and will be removed in the Stein release.
This also renames the cinder_iscsi_helper variable to
cinder_target_helper, deprecating but still supporting the former name
until the Train release.
Change-Id: Ie38c09b2dd8598f62b0733c8444eec5f6ce3daac
Adds a new flag, 'enable_openstack_core', which defaults to 'yes'.
Setting this flag to 'no' will disable the core OpenStack services,
including Glance, Heat, Horizon, Keystone, Neutron, and Nova.
Improves the default configuration of OpenStack Ironic when used in
standalone mode. In particular, configures a noauth mode when Keystone
is disabled, and allows the iPXE server to be used for provisioning as
well as inspection if Neutron is disabled.
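For example, a minimal globals.yml for this standalone use case might
contain (enable_openstack_core is the flag added here; enable_ironic
is assumed to be the existing Ironic flag):

    enable_openstack_core: "no"
    enable_ironic: "yes"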
Documentation for standalone ironic will be updated separately.
This patch was developed and tested using Bikolla [1].
[1] https://github.com/markgoddard/bikolla
Change-Id: Ic47f5ad81b8126a51e52a445097f7950dba233cd
Implements: blueprint standalone-ironic