Recently a patch [1] was merged to stop adding the octavia user to the
admin project, and remove it on upgrade. However, the octavia
configuration was not updated to use the service project, causing load
balancer creation to fail.
There is also an issue for existing deployments in simply switching to
the service project. While existing load balancers appear to continue to
work, creating new load balancers fails due to the security group
belonging to the admin project. At a minimum, the deployer needs to
create a security group in the service project, and update
'octavia_amp_secgroup_list' to match its ID. Ideally the flavor and
network would also be recreated in the service project, although leaving
them in the admin project does not seem to impact operation, and
recreating them will result in downtime for existing Amphorae.
This change adds a new variable, 'octavia_service_auth_project', that
can be used to set the project. The default in Ussuri is 'service',
switching to the new behaviour. For backports of this patch it should be
switched to 'admin' to maintain compatibility.
If a deployer sets 'octavia_service_auth_project' to 'admin', the
octavia user will be assigned the admin role in the admin project, as
was done previously.
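As an illustration, an existing deployment might override the following
in globals.yml (the security group ID is a placeholder, and expressing
'octavia_amp_secgroup_list' as a single-element list is an assumption
about its format):

  octavia_service_auth_project: "service"
  # Security group created by the deployer in the service project:
  octavia_amp_secgroup_list: ["<service-project-security-group-id>"]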
Closes-Bug: #1882643
Related-Bug: #1873176
[1] https://review.opendev.org/720243/
Co-Authored-By: Mark Goddard <mark@stackhpc.com>
Change-Id: I1efd0154ebaee69373ae5bccd391ee9c68d09b30
The removal of Kolla Ceph deploy [1] broke gnocchi & external Ceph
integration - the variable gnocchi_pool_name is referenced in the config
template, but should now be ceph_gnocchi_pool_name.
This change fixes the issue.
Reported by Nick Wilson.
[1] https://review.opendev.org/#/c/704309/12/ansible/roles/gnocchi/defaults/main.yml
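As an illustration, deployers using external Ceph would now set the
renamed variable in globals.yml (the pool name shown is only an example):

  ceph_gnocchi_pool_name: "gnocchi"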
Change-Id: I7089781c0c4d7bce8a44cb8b1fca847dd0b7efd1
Closes-Bug: #1877974
Nova cells support introduced a slight regression that triggers
odd behaviour when we tried switching to Apache (httpd) [1].
Bootstrap no longer applied permissions recursively to all log
files, creating a discrepancy between normal and bootstrap runs,
and also between Nova and other services such as Cinder (regarding
bootstrap logging).
This patch fixes it.
Backport to Train.
Not creating a reno or a bug record because it does not affect
any current standard usage in any currently known way.
Note this only really hides (standardizes?) the wider issue that
we do not control file permissions on newly created files very well.
[1] https://review.opendev.org/724793
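For context, a sketch of the kind of Ansible task that applies log
directory permissions recursively during bootstrap (path and ownership
are assumptions, not the exact values used by the role):

  - name: Ensure nova log directory permissions are applied recursively
    file:
      path: /var/log/kolla/nova
      state: directory
      owner: nova
      group: nova
      recurse: yes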
Change-Id: I35e9924ccede5edd2e1307043379aba944725143
Needed-By: https://review.opendev.org/724793
Switch URL composition from using the VIP to using the FQDN when
connecting to the Kibana and Elasticsearch services.
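As a sketch, the composed URLs now use the internal FQDN rather than the
VIP address (the exact template expressions and port variables are
assumptions):

  kibana_url: "{{ internal_protocol }}://{{ kolla_internal_fqdn }}:{{ kibana_server_port }}"
  elasticsearch_url: "{{ internal_protocol }}://{{ kolla_internal_fqdn }}:{{ elasticsearch_port }}"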
Change-Id: I5d559ead1d6d5e928e76bb685e0f730868fd7b89
Closes-Bug: #1862419
This was addressed in I21689e22870c2f6206e37c60a3c33e19140f77ff but
accidentally reverted in I4f74bfe07d4b7ca18953b11e767cf0bb94dfd67e.
Change-Id: Id5fc458b0ca54bddfe9a43cb315dbcfeb2142395
Fixes:
- SB/NB DB address format (single host) for the SB/NB DB daemon
- SB/NB DB address format (all hosts) for Neutron / northd /
ovn-ovs bootstrap (address formats sketched below)
- OVN tests
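Illustrative address formats (the variable names are hypothetical, the
IPs are placeholders, and the port is the usual OVN NB default, stated
as an assumption):

  # Single host, as used by the SB/NB DB daemon:
  ovn_nb_db_address_single: "tcp:192.0.2.10:6641"
  # All hosts, as used by Neutron / northd / ovn-ovs bootstrap:
  ovn_nb_db_address_all: "tcp:192.0.2.10:6641,tcp:192.0.2.11:6641,tcp:192.0.2.12:6641"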
Change-Id: I539773c48f89b731d068280c228ce11782bf5788
Closes-Bug: #1875222
This patch introduces optional backend encryption for the Horizon and
Placement services. When used in conjunction with enabling TLS for
service API endpoints, network communication will be encrypted end to
end, from the client through HAProxy to the Horizon and Placement
services.
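A minimal globals.yml sketch for enabling this together with TLS on the
internal API network (the backend variable name is an assumption):

  kolla_enable_tls_internal: "yes"
  kolla_enable_tls_backend: "yes"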
Change-Id: I9cb274141c95aea20e733baa623da071b30acf2d
Partially-Implements: blueprint add-ssl-internal-network
Add TLS support for the Glance API, using HAProxy to perform TLS
termination.
Change-Id: I77051baaeb5d3f7dd9002262534e7d35f3926809
Partially-Implements: blueprint add-ssl-internal-network
Zun has a new component, "zun-cni-daemon", which should be
deployed on every compute node. It is essentially an implementation
of CNI (Container Network Interface) that performs the Neutron
port binding.
If users are using the capsule (pod) API, the recommended deployment
option is to use "cri" as the capsule driver. This essentially means
using a CRI runtime (i.e. the CRI plugin for containerd) to support
capsules (pods). A CRI runtime needs a CNI plugin, which is what
the "zun-cni-daemon" provides.
The configuration is based on the Zun installation guide [1].
It consists of the following steps (a minimal globals.yml sketch
follows the list):
* Configure the containerd daemon on the host. The "zun-compute"
container will use gRPC to communicate with this service.
* Install the "zun-cni" binary on the host. The containerd process
will invoke this binary to call the CNI plugin.
* Run a "zun-cni-daemon" container. The "zun-cni" binary will
communicate with this container via HTTP.
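A minimal globals.yml sketch (enable_zun is the existing switch; the
comments describe the resulting layout rather than further variables):

  enable_zun: "yes"
  # With Zun enabled, the zun-cni-daemon container and the containerd
  # configuration described above are set up on compute nodes, so that
  # capsules (pods) can be backed by the CRI runtime.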
Relevant patches:
Blueprint: https://blueprints.launchpad.net/zun/+spec/add-support-cri-runtime
Install guide: https://review.opendev.org/#/c/707948/
Devstack plugin: https://review.opendev.org/#/c/705338/
Kolla image: https://review.opendev.org/#/c/708273/
[1] https://docs.openstack.org/zun/latest/install/index.html
Depends-On: https://review.opendev.org/#/c/721044/
Change-Id: I9c361a99b355af27907cf80f5c88d97191193495
The octavia service communicates with the barbican service using the
public endpoint_type by default [1]; it should use the internal
endpoint type like other services.
[1] 0056b5175f/octavia/common/config.py (L533-L537)
Closes-Bug: #1875618
Change-Id: I90d2b0aeac090a3e2366341e260232fc1f0d6492
Adds the necessary "region_name" to octavia.conf when
"enable_barbican" is set to "true".
Closes-Bug: #1867926
Change-Id: Ida61cef4b9c9622a5e925bac4583fba281469a39
Since haproxy is orchestrated via site.yml in a single play,
it does not need to flush handlers, as handlers will run at the
end of that play.
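For reference, this removes a step of the following form from the
orchestration, relying instead on the implicit handler run at the end
of the play (sketch only):

  - name: Flush haproxy handlers
    meta: flush_handlers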
Change-Id: Ia3743575da707325be93c39b4a2bcae9211cacb2
Related-Bug: #1864810
Closes-Bug: #1875228
Follow-up on [1] "Avoid multiple haproxy restarts after
reconfiguration".
There is no need to duplicate the handler name in 'listen'.
The issue arose because we had two handlers with the same
name in the same environment, which causes Ansible not to mark
the handler as already run.
[1] https://review.opendev.org/708385
Change-Id: I5425a8037b6860ef71bce59becff8dfe5b601d4c
Related-Bug: #1864810
Update Skydive Analyzer's configuration to use Keystone as its backend
for authenticating users. Any user with a role in the project defined
by the variable skydive_admin_tenant_name will be able to access
Skydive.
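As an illustration, the project granting access can be overridden in
globals.yml (the value shown is only an example):

  skydive_admin_tenant_name: "admin"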
Change-Id: I64c811d5eb72c7406fd52b649fa00edaf2d0c07b
Closes-Bug: 1870903
This patch introduces optional backend encryption for the Heat
service. When used in conjunction with enabling TLS for service API
endpoints, network communication will be encrypted end to end, from
the client through HAProxy to the Heat service.
Change-Id: Ic12f7574135dcaed2a462e902c775a55176ff03b
Partially-Implements: blueprint add-ssl-internal-network
Depends-On: https://review.opendev.org/722028/
The haproxy role and the site.yml file call the
haproxy-config role to provide configuration for individual
services.
If the configuration within a service changes, the haproxy
container is restarted.
If the configuration of n services changes, there will be n
restarts. This is not necessary; a single restart at the end is
sufficient.
By removing the handler from the haproxy-config role and
using the 'listen' parameter in the handler of the haproxy role,
the handler is executed only once.
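A generic illustration of the mechanism, not the exact role contents:
several tasks notify the same topic, and a single handler subscribes to
it via 'listen', so it runs only once per play (names, paths and the
restart body are placeholders):

  tasks:
    - name: Update haproxy config for service A
      template:
        src: a.cfg.j2
        dest: /etc/haproxy/services.d/a.cfg
      notify: haproxy config changed
    - name: Update haproxy config for service B
      template:
        src: b.cfg.j2
        dest: /etc/haproxy/services.d/b.cfg
      notify: haproxy config changed
  handlers:
    - name: Restart haproxy container
      listen: haproxy config changed
      # Placeholder restart; the real handler restarts the haproxy container.
      service:
        name: haproxy
        state: restarted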
Change-Id: I535fe67579fb748093bb4b30a6bd31b81e021a1b
Closes-Bug: #1864810
Drops support for creating Python 2 virtualenvs in bootstrap-servers,
and for looking for a python2 interpreter in the kolla-ansible script.
Also forces the use of Python 3 as the remote interpreter in CI on
Debian and Ubuntu hosts, since they typically symlink the unversioned
interpreter to python2.7.
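For example, the remote interpreter can be pinned via standard Ansible
group variables (shown only as an illustration of the mechanism):

  # group_vars/all.yml
  ansible_python_interpreter: /usr/bin/python3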
Change-Id: Id0e977de381e7faafed738674a140ba36184727e
Partially-Implements: blueprint drop-py2-support
Kolla Ansible was missing the vitrage-persistor service,
required by Vitrage for data storage.
Depends on fixing the availability of the Kolla image.
Change-Id: I8158ba66b8b624f6bcb89da9c990a30a68b7187b
Depends-On: Id5e143636f9a81e7294b775f3d8b9134bee58054
Closes-Bug: #1869319