Replaced "kolla_external_fqdn_cacert" and "kolla_internal_fqdn_cacert" with
"kolla_admin_openrc_cacert". OS_CACERT is now set to the value of
"kolla_admin_openrc_cacert" in the generated admin-openrc.sh file.
Change-Id: If195d5402579cee9a14b91f63f5fde84eb84cccf
Partially-Implements: blueprint add-ssl-internal-network
Depends-On: https://review.opendev.org/#/c/731344/
Normally, api_interface is treated as an internal and secure network
plane, so using it as the default migration_interface is more meaningful.
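A sketch of the resulting default (override it per host where live
migration should use a dedicated plane):

    # Assumed default: migration traffic follows the API plane
    migration_interface: "{{ api_interface }}"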
Change-Id: Ib9f4bcc19147a49dc09bd905dcd06be165a91b5e
Zun has a new component, "zun-cni-daemon", which should be
deployed on every compute node. It is basically an implementation
of CNI (Container Network Interface) that performs the Neutron
port binding.
If users are using the capsule (pod) API, the recommended deployment
option is to use "cri" as the capsule driver. This basically means
using a CRI runtime (i.e. the CRI plugin for containerd) to support
capsules (pods). A CRI runtime needs a CNI plugin, which is what
the "zun-cni-daemon" provides.
The configuration is based on the Zun installation guide [1].
It consists of the following steps:
* Configure the containerd daemon on the host. The "zun-compute"
  container will use gRPC to communicate with this service.
* Install the "zun-cni" binary on the host. The containerd process
  will invoke this binary to call the CNI plugin.
* Run a "zun-cni-daemon" container. The "zun-cni" binary will
communicate with this container via HTTP.
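A minimal globals.yml sketch to enable all of the above (variable
names are assumptions based on current Kolla Ansible; verify against
your release):

    # Deploy Zun; zun-cni-daemon runs on compute nodes
    enable_zun: "yes"
    # Assumed flag: configure the host containerd daemon for Zun,
    # exposing the gRPC endpoint used by zun-compute
    containerd_configure_for_zun: "yes"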
Relevant patches:
Blueprint: https://blueprints.launchpad.net/zun/+spec/add-support-cri-runtime
Install guide: https://review.opendev.org/#/c/707948/
Devstack plugin: https://review.opendev.org/#/c/705338/
Kolla image: https://review.opendev.org/#/c/708273/
[1] https://docs.openstack.org/zun/latest/install/index.html
Depends-On: https://review.opendev.org/#/c/721044/
Change-Id: I9c361a99b355af27907cf80f5c88d97191193495
etcd via tooz does not support the group membership API required by
Designate coordination.
The best Kolla Ansible can do is to not configure etcd as a
coordination backend in Designate.
Change-Id: I2f64f928e730355142ac369d8868cf9f65ca357e
Closes-bug: #1872205
Related-bug: #1840070
This patch introduces optional backend encryption for the Keystone
service. When used in conjunction with enabling TLS for service API
endpoints, network communication will be encrypted end to end, from
client through HAProxy to the Keystone service.
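A sketch of enabling both layers in globals.yml (the backend switch
name is an assumption from this blueprint series):

    kolla_enable_tls_internal: "yes"   # TLS on the internal VIP
    kolla_enable_tls_backend: "yes"    # TLS from HAProxy to Keystone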
Change-Id: I6351147ddaff8b2ae629179a9bc3bae2ebac9519
Partially-Implements: blueprint add-ssl-internal-network
The Docker registry password is, by default, sourced from the
passwords.yml file.
This cleans up globals.yml to make it clearer and follows the
"present defaults" behaviour of the other variables.
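For illustration (registry address and username are placeholders):

    # globals.yml
    docker_registry: "registry.example.com:5000"
    docker_registry_username: "deployer"
    # docker_registry_password now defaults to the value in passwords.yml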
Change-Id: Icc993e82a6a435f948e3d17e410eb14717cb0e2d
This is useful to people who manage their Prometheus Server
externally to Kolla Ansible, or want to use the exporters with
another framework such as Monasca.
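A hypothetical globals.yml sketch (the exact name of the server
toggle is an assumption):

    # Deploy only the exporters; skip the Kolla-managed Prometheus server
    enable_prometheus: "yes"
    enable_prometheus_server: "no"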
Change-Id: Ie3f61e2e186c8e77e21a7b53d2bd7d2a27eee18e
While supporting both CentOS 7 and 8, we used the tag 'master-centos8'
for CentOS 8 images. We are now ready to drop CentOS 7 support, and
Kolla is switching to publishing CentOS 8 images using the 'master' tag
on the master branch, so we should use it.
Depends-On: https://review.opendev.org/713265
Partially-Implements: blueprint centos-rhel-8
Change-Id: I07d2c285e3214a6dc827a8e8eacf263048ee099b
The statsd daemon is an additional piece of functionality supported by
Gnocchi, and the general pattern in Kolla Ansible is to disable such
things unless the user explicitly wants them. This also helps avoid
having to set the resource_id, user_id, and project_id variables for
Gnocchi if you don't care about this daemon.
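Sketch, assuming the toggle is named enable_gnocchi_statsd:

    # globals.yml -- the statsd daemon stays off unless explicitly wanted
    enable_gnocchi_statsd: "no"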
Change-Id: I5f14cee4b0bb0d781b1ff53200d11de972d20c82
The Kolla Ansible Ceph deployment mechanism was deprecated in Train [1].
This change removes the Ansible code and associated CI jobs.
[1]: https://review.opendev.org/669214
Change-Id: Ie2167f02ad2f525d3b0f553e2c047516acf55bc2
Generate both internal and external self signed TLS certificates.
Duplicate the certificate if internal and external VIPs are the same.
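These certificates back the usual TLS switches, e.g.:

    # globals.yml -- self-signed certificates are generated for both
    # VIPs by the 'kolla-ansible certificates' command
    kolla_enable_tls_external: "yes"
    kolla_enable_tls_internal: "yes"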
Change-Id: I16b345c0b29ff13e042eed8798efe644e0ad2c74
Partially-Implements: blueprint custom-cacerts
When kolla_copy_ca_into_containers is set to "yes", the Certificate
Authority in /etc/kolla/certificates will be copied into service
containers to enable trust for that CA. This is especially useful when
the CA is self signed, and would not be trusted by default.
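Sketch:

    # globals.yml -- copy and trust the CA from /etc/kolla/certificates
    # inside service containers
    kolla_copy_ca_into_containers: "yes"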
Partially-Implements: blueprint custom-cacerts
Change-Id: I4368f8994147580460ebe7533850cf63a419d0b4
For the CentOS 7 to 8 transition, we will have a period where both
CentOS 7 and 8 images are available. We differentiate these images via a
tag - the CentOS 8 images will have a tag of train-centos8 (or
master-centos8 temporarily).
To achieve this, and maintain backwards compatibility for the
openstack_release variable, we introduce a new 'openstack_tag' variable.
This variable is based on openstack_release, but has a suffix of
'openstack_tag_suffix', which is empty except on CentOS 8 where it has a
value of '-centos8'.
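Conceptually the new variable composes as follows (a sketch mirroring
the description above):

    openstack_tag_suffix: ""    # "-centos8" on CentOS 8
    openstack_tag: "{{ openstack_release }}{{ openstack_tag_suffix }}"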
Change-Id: I12ce4661afb3c255136cdc1aabe7cbd25560d625
Partially-Implements: blueprint centos-rhel-8
A variable has been added to set the "ENABLE_MONASCA" environment
variable for the 'kolla/horizon' image. When 'enable_horizon_monasca'
is true, 'policy_item' is called for Monasca.
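Sketch of the expected wiring (the derivation from enable_monasca is
an assumption):

    # globals.yml
    enable_monasca: "yes"
    enable_horizon_monasca: "{{ enable_monasca | bool }}"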
Change-Id: Ie9ecb8ab5d4e74af9b83a5b00ccced5b630ab1ed
Implements: blueprint monasca-ui
Signed-off-by: Hamed Bahadorzadeh <h.bahadorzadeh@gmail.com>
This allows users to supply an Elasticsearch Curator actions file
to manage log retention [1]. Curator then runs on a cron job, which
defaults to every day. A default curator actions file is provided,
which can be customised by the end user if required.
[1] https://www.elastic.co/guide/en/elasticsearch/client/curator/current/actionfile.html
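For reference, a minimal actions file in Curator's YAML format
(index prefix and retention period are illustrative placeholders):

    actions:
      1:
        action: delete_indices
        description: Delete log indices older than 30 days
        options:
          ignore_empty_list: true
        filters:
          - filtertype: pattern
            kind: prefix
            value: flog-
          - filtertype: age
            source: name
            direction: older
            timestring: '%Y.%m.%d'
            unit: days
            unit_count: 30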
Change-Id: Ide9baea9190ae849e61b9d8b6cff3305bdcdd534
Adds support for configuration of the Docker client timeout via
'docker_client_timeout'.
This change also increases the default timeout to 120 seconds, as we
sometimes see timeouts in CI and heavily loaded or underpowered
environments. Increasing 'docker_client_timeout' further may be helpful
in cases where Docker reports 'Read timed out'.
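For example, to raise it beyond the new default:

    # globals.yml -- bump further if Docker reports 'Read timed out'
    docker_client_timeout: 180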
Change-Id: I73745771078cb2c0ebae2b1d87ba2c4c12958d82
Closes-Bug: #1809844
Adds the rabbitmq_server_additional_erl_args variable, which is
appended to the RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS environment
variable passed to the RabbitMQ server startup script.
This can be used to configure the schedulers.
Docs attached.
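For example, to pin the number of Erlang schedulers (the +S values
are illustrative):

    # globals.yml
    rabbitmq_server_additional_erl_args: "+S 2:2"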
Change-Id: Id683c8cc6dac61354ffd94f3b460335b42136ba2
Co-authored-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Related-bug: #1846467
This also enables Placement when Zun is enabled, as Kolla Ansible
already does with Nova.
Change-Id: Id2a09f702e8503b49d2b9e73e06b2ce9f4d168a9
Closes-bug: #1840573
This patch adds initial support for deploying multiple Nova cells.
Splitting a nova-cell role out from the Nova role allows a more granular
approach to deploying and configuring Nova services.
A new enable_cells flag has been added that enables the support of
multiple cells via the introduction of a super conductor in addition to
cell-specific conductors. When this flag is not set (the default), nova
is configured in the same manner as before - with a single conductor.
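Sketch of opting in:

    # globals.yml -- deploy a super conductor plus per-cell conductors
    enable_cells: "yes"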
The nova role now deploys the global services:
* nova-api
* nova-scheduler
* nova-super-conductor (if enable_cells is true)
The nova-cell role handles services specific to a cell:
* nova-compute
* nova-compute-ironic
* nova-conductor
* nova-libvirt
* nova-novncproxy
* nova-serialproxy
* nova-spicehtml5proxy
* nova-ssh
This patch does not support using a single cell controller for managing
more than one cell. Support for sharing a cell controller will be added
in a future patch.
This patch should be backwards compatible and is tested by existing CI
jobs. A new CI job has been added that tests a multi-cell environment.
ceph-mon has been removed from the play hosts list as it is not
necessary - delegate_to does not require the host to be in the play.
Documentation will be added in a separate patch.
Partially Implements: blueprint support-nova-cells
Co-Authored-By: Mark Goddard <mark@stackhpc.com>
Change-Id: I810aad7d49db3f5a7fd9a2f0f746fd912fe03917
Introduce kolla_address filter.
Introduce put_address_in_context filter.
Add AF config to vars.
Address contexts:
- raw (default): <ADDR>
- memcache: inet6:[<ADDR>]
- url: [<ADDR>]
Other changes:
- globals.yml - mention just IP in comment
- prechecks/port_checks (api_intf) - kolla_address handles validation
- 3x interface conditional (swift configs: replication/storage)
- 2x interface variable definition with hostname (haproxy listens; api intf)
- 1x interface variable definition with hostname with bifrost exclusion
  (baremetal pre-install /etc/hosts; api intf)
- neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on the tunnel network
- basic multinode source CI job for IPv6
- prechecks for rabbitmq and qdrouterd use the proper NSS database now
- MariaDB Galera Cluster WSREP SST mariabackup workaround (socat and IPv6)
- Ceph naming workaround in CI (TODO: probably needs documenting)
- RabbitMQ IPv6-only proto_dist
- Ceph ms switch to IPv6 mode
- removed neutron-server ml2_type_vxlan/vxlan_group setting as it is
  not used (let's avoid any confusion) and could break setups without
  proper multicast routing if it started working (also IPv4-only)
- haproxy upgrade checks for slaves based on IPv6 addresses
TODO:
- ovs-dpdk grabs an IPv4 network address (with prefix length/netmask);
  this is not supported and invalid by default because neutron_external
  has no address. No idea whether ovs-dpdk works at all at the moment.
- ml2 for xenapi: Xen is not supported too well; this would require
  working with XenAPI facts.
- rp_filter setting: this would require meddling with ip6tables (there
  is no sysctl param). By default nothing is dropped; it is unlikely we
  really need it.
- ironic dnsmasq is configured IPv4-only; dnsmasq needs DHCPv6 options
  and testing in vivo.
KNOWN ISSUES (beyond us):
- One cannot use an IPv6 address to reference the image for Docker like
  we currently do, see: https://github.com/moby/moby/issues/39033
  (docker_registry; docker API 400 - invalid reference format).
  Workaround: use a hostname/FQDN.
- RabbitMQ may fail to bind to IPv6 if the hostname also resolves to
  IPv4. This is due to the old RabbitMQ versions available in images:
  IPv4 is preferred by default and may fail in the IPv6-only scenario.
  This should be no problem in real life, as IPv6-only is indeed
  IPv6-only. Also, when a newer RabbitMQ (3.7.16/3.8+) makes it into
  images, this will no longer be relevant, as we supply all the
  necessary config.
  See: https://github.com/rabbitmq/rabbitmq-server/pull/1982
For reliable runs, at least Ansible 2.8 is required (2.8.5 confirmed
to work well). Older Ansible versions are known to miss IPv6 addresses
in interface facts. This may affect redeploys, reconfigures and
upgrades which run after the VIP address is assigned.
See: https://github.com/ansible/ansible/issues/63227
Bifrost Train does not support IPv6 deployments.
See: https://storyboard.openstack.org/#!/story/2006689
Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
Implements: blueprint ipv6-control-plane
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Add coordination backend configuration to designate.conf, which is
required in multinode environments. This fixes the following warning
from Designate:
WARNING designate.coordination [-] No coordination backend configured,
assuming we are the only worker. Please configure a coordination backend
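A sketch of selecting a supported backend (the variable name is an
assumption; note the earlier change in this series explaining that
etcd via tooz lacks the required group membership support):

    # globals.yml
    enable_redis: "yes"
    designate_coordination_backend: "redis"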
Change-Id: I23c4d2de7e3f9368795c423000a4f9a6c3a431e2
Closes-Bug: #1843842
Related-Bug: #1840070