Option "trove_auth_url/os_region_name" from group "DEFAULT" is deprecated.
Use option "auth_url/region_name" from group "service_credentials" instead.
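In the Trove templates this amounts to moving the settings, roughly as
follows (a sketch; the kolla-ansible variable names are assumptions):
[service_credentials]
auth_url = {{ keystone_internal_url }}
region_name = {{ openstack_region_name }}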
Change-Id: I15d6891582c92c7fc813f280a2b47ebaaca77eba
The use of default(omit) is for module parameters, not templates. We
define a default value for openstack_cacert, so it should never be
undefined anyway.
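A minimal sketch of the distinction (some_param is a hypothetical module
parameter):
# In an Ansible task, omit is meant for module parameters, e.g.:
#   some_param: "{{ openstack_cacert | default(omit) }}"
# In a Jinja2 config template, the variable can be referenced directly,
# since a default value is always defined:
cafile = {{ openstack_cacert }}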
Change-Id: Idfa73097ca168c76559dc4f3aa8bb30b7113ab28
Include a reference to the globally configured Certificate Authority in
all services. Services use the CA to verify HTTPS connections.
Change-Id: I38da931cdd7ff46cce1994763b5c713652b096cc
Partially-Implements: blueprint support-trusted-ca-certificate-file
After all of the discussions we had on
"https://review.opendev.org/#/c/670626/2", I studied all projects that
have an "oslo_messaging" section. Afterwards, I applied the same method
that is already used in the "oslo_messaging" section in Nova, Cinder, and
others. This guarantees that we have a consistent method to
enable/disable notifications across projects based on components (e.g.
Ceilometer) being enabled or disabled. Here follows the list of
components, and the respective changes I made.
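For reference, the consistent pattern looks roughly like this in a
service's conf template (a sketch; enable_ceilometer and
notify_transport_url follow kolla-ansible conventions):
[oslo_messaging_notifications]
transport_url = {{ notify_transport_url }}
{% if enable_ceilometer | bool %}
driver = messagingv2
topics = notifications
{% else %}
driver = noop
{% endif %}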
* Aodh:
The section is declared, but it is not used. Therefore, it will
be removed in an upcoming PR.
* Congress:
The section is declared, but it is not used. Therefore, it will
be removed in an upcoming PR.
* Cinder:
It was already properly configured.
* Octavia:
The section is declared, but it is not used. Therefore, it will
be removed in an upcoming PR.
* Heat:
It was already using a similar scheme; I just modified it a little bit
to be the same as we have in all other components.
* Ceilometer:
Ceilometer publishes some messages to RabbitMQ. However, the
default driver is "messagingv2", and not '' (empty) as defined in Oslo;
these configurations are defined in ceilometer/publisher/messaging.py.
Therefore, we do not need to do anything for the
"oslo_messaging_notifications" section in Ceilometer.
* Tacker:
It was already using a similar scheme; I just modified it a little bit
to be the same as we have in all other components.
* Neutron:
It was already properly configured.
* Nova:
It was already properly configured. However, we found another issue
with its configuration. Kolla-ansible does not configure Nova
notifications as it should. If 'searchlight' is not installed (enabled),
'notification_format' should be 'unversioned' (see the sketch after
this list). The default is 'both', so Nova will send notifications to
the versioned_notifications queue; but that queue has no consumer when
'searchlight' is disabled. In our case, the queue accumulated 511k
messages. The huge number of "stuck" messages made the RabbitMQ cluster
unstable.
https://bugzilla.redhat.com/show_bug.cgi?id=1478274
https://bugs.launchpad.net/ceilometer/+bug/1665449
* Nova_hyperv:
I added the same configurations as in the Nova project.
* Vitrage:
It was already using a similar scheme; I just modified it a little bit
to be the same as we have in all other components.
* Searchlight:
I created a mechanism similar to what we have in Aodh, Cinder, Nova,
and others.
* Ironic:
I created a mechanism similar to what we have in Aodh, Cinder, Nova,
and others.
* Glance:
It was already properly configured.
* Trove:
It was already using a similar scheme; I just modified it a little bit
to be the same as we have in all other components.
* Blazar:
It was already using a similar scheme; I just modified it a little bit
to be the same as we have in all other components.
* Sahara:
It was already using a similar scheme; I just modified it a little bit
to be the same as we have in all other components.
* Watcher:
I created a mechanism similar to what we have in Aodh, Cinder, Nova,
and others.
* Barbican:
I created a mechanism similar to what we have in Cinder, Nova,
and others. I also added configuration to the 'keystone_notifications'
section. Barbican needs its own queue to capture events from Keystone.
Otherwise, it has an impact on Ceilometer and other systems that are
connected to the default "notifications" queue.
* Keystone:
Keystone is the system that triggered this work with the discussions
that followed on https://review.opendev.org/#/c/670626/2. After a long
discussion, we agreed to apply the same approach that we have in Nova,
Cinder and other systems in Keystone. That is what we did. Moreover, we
introduced a new topic, "barbican_notifications", used when Barbican is
enabled (see the sketch after this list). We also removed the variable
enable_cadf_notifications, as it is obsolete; the default in Keystone
is CADF.
* Mistral:
The driver was hardcoded to "noop". However, that does not seem like
good practice. Instead, I applied the same standard of using the driver
and pushing to the "notifications" queue if Ceilometer is enabled.
* Cyborg:
I created a mechanism similar to what we have in Aodh, Cinder, Nova,
and others.
* Murano:
It was already using a similar scheme; I just modified it a little bit
to be the same as we have in all other components.
* Senlin:
It was already using a similar scheme; I just modified it a little bit
to be the same as we have in all other components.
* Manila:
It was already using a similar scheme; I just modified it a little bit
to be the same as we have in all other components.
* Zun:
The section is declared, but it is not used. Therefore, it will
be removed in an upcoming PR.
* Designate:
It was already using a similar scheme; I just modified it a little bit
to be the same as we have in all other components.
* Magnum:
It was already using a similar scheme; I just modified it a little bit
to be the same as we have in all other components.
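To illustrate the Nova and Keystone changes referenced above, a rough
sketch of the template fragments (variable names assumed):
{# nova.conf: do not fill versioned_notifications without a consumer #}
[notifications]
{% if not enable_searchlight | bool %}
notification_format = unversioned
{% endif %}
{# keystone.conf: add Barbican's own topic when Barbican is enabled #}
[oslo_messaging_notifications]
{% if enable_ceilometer | bool or enable_barbican | bool %}
driver = messagingv2
topics = notifications{% if enable_barbican | bool %},barbican_notifications{% endif %}
{% else %}
driver = noop
{% endif %}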
Closes-Bug: #1838985
Change-Id: I88bdb004814f37c81c9a9c4e5e491fac69f6f202
Signed-off-by: Rafael Weingärtner <rafael@apache.org>
We're duplicating code to build the Keystone URLs in nearly every
config, even though we've already built them in group_vars. Replace the
redundancy with a variable that does the same thing.
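The idea, roughly (a sketch; keystone_internal_url is assumed to be the
group_vars variable):
# Before, repeated in each template:
www_authenticate_uri = {{ internal_protocol }}://{{ kolla_internal_fqdn }}:{{ keystone_public_port }}
# After:
www_authenticate_uri = {{ keystone_internal_url }}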
Change-Id: I207d77870e2535c1cdcbc5eaf704f0448ac85a7a
Option auth_uri from group keystone_authtoken is deprecated[1].
Use option www_authenticate_uri from group keystone_authtoken.
[1]https://review.openstack.org/#/c/508522/
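In the templates this is a rename of the option (a sketch; URL variables
as in group_vars):
[keystone_authtoken]
www_authenticate_uri = {{ keystone_internal_url }}
auth_url = {{ keystone_admin_url }}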
Co-Authored-By: confi-surya <singh.surya64mnnit@gmail.com>
Change-Id: Ifd8527d404f1df807ae8196eac2b3849911ddc26
Closes-Bug: #1761907
Currently osprofiler only chooses elasticsearch,
which is only supported on x86.
On other platforms, like aarch64, osprofiler cannot
be used since there is no elasticsearch package.
Enable osprofiler with enable_osprofiler: "yes",
which chooses elasticsearch by default.
Choose redis with enable_redis: "yes" and osprofiler_backend: "redis".
On platforms without elasticsearch support, like aarch64,
set enable_elasticsearch: "no"
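In globals.yml the aarch64 case then looks roughly like this (a sketch):
enable_osprofiler: "yes"
enable_elasticsearch: "no"
enable_redis: "yes"
osprofiler_backend: "redis"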
Change-Id: I68fe7a33e11d28684962fc5d0b3d326e90784d78
In trove-taskmanager.conf a typo has been
introduced in the nova_proxy_admin_tenant_name
option.
It currently is:
"nova_proxy_admin_tenant_name = services"
But should be:
"nova_proxy_admin_tenant_name = service"
Change-Id: I7b5d0ca4c6c994b6dd3c5de3f0a79637fda88177
Closes-Bug: #1770262
The nova_proxy_* options are not needed when the Trove single tenant
functionality is not used.
Kolla currently configures trove-taskmanager.conf to use the user's
tenant, so the nova_proxy_* options are not required by default.
I added the "enable_trove_singletenant" option to enable the single
tenant functionality if required, and completed the configuration to
make it work.
When enable_trove_singletenant is true, the configuration below will be
applied to the trove-taskmanager.conf configuration file:
nova_proxy_admin_pass = {{ trove_keystone_password }}
nova_proxy_admin_tenant_name = services
nova_proxy_admin_user = trove
remote_nova_client = \
trove.common.single_tenant_remote.nova_client_trove_admin
remote_cinder_client = \
trove.common.single_tenant_remote.cinder_client_trove_admin
remote_neutron_client = \
trove.common.single_tenant_remote.neutron_client_trove_admin
Change-Id: I9858acd9486a3f6a07c1edad14fde12f49df772b
Closes-Bug: #1743394
By default Trove looks for "RegionOne"; if the region is
different, the os_region_name parameter needs to be defined within
the Trove configuration files.
To solve this issue, we need to set the "os_region_name" option in the
trove-api, trove-taskmanager and trove-conductor configuration.
os_region_name = {{ openstack_region_name }}
Change-Id: I1397046d2c88ba50d01a65c48e021d3535fe39d2
Closes-bug: #1743402
Presently the taskmanager fails during the creation
of a Trove cluster.
During the network IP checks, it does not match the network.
The idea is to configure it to match all network names.
This configuration is then the same as the one in trove.conf.
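The resulting setting in trove-taskmanager.conf, mirroring trove.conf
(a sketch):
network_label_regex = .*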
Closes-bug: #1743395
Change-Id: I9284501424e6daa7d33d1590994bf231de71edd9
In several templates the topics variable is wrapped
in single quotes.
It is better to remove them so the value matches the OpenStack default.
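For example, in a template (a sketch):
# Before:
topics = 'notifications'
# After, matching the OpenStack default value:
topics = notifications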
Change-Id: I418c714240b38b2853a5c746203eac31588e841a
The option neutron_endpoint_type is duplicated in these files:
- trove/templates/trove.conf.j2
- trove/templates/trove-taskmanager.conf.j2
We just have to remove one occurrence.
Change-Id: If5c91cf7b491966b1deac42c694af5995df9b11e
- remove useless *_url options, which can be auto-discovered
- use internalURL instead of publicURL, which makes it work when
using self-signed SSL certificates
- configure network_driver to use Neutron
- add network_label_regex to match all network names
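Taken together, the trove.conf fragment looks roughly like this (a
sketch; option names follow Trove's configuration reference):
network_driver = trove.network.neutron.NeutronDriver
network_label_regex = .*
nova_compute_endpoint_type = internalURL
neutron_endpoint_type = internalURL
cinder_endpoint_type = internalURL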
Change-Id: I5654dbf391db7076c82aede5c2a4f8b7530b8381
Closes-Bug: #1734039
This commit separates the messaging rpc and notify transports in order
to support separate and different oslo.messaging backends.
This patch:
* add rpc and notify variables
* update service role conf templates
* add example to globals.yml
* add release note
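In a service's conf template, the split looks roughly like this (a
sketch; rpc_transport_url and notify_transport_url are the new
variables):
[DEFAULT]
transport_url = {{ rpc_transport_url }}
[oslo_messaging_notifications]
transport_url = {{ notify_transport_url }}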
Implements: blueprint hybrid-messaging
Change-Id: I34691c2895c8563f1f322f0850ecff98d11b5185
Ceph RGW can be used as an object store instead of Swift.
This patch enables Trove to use Ceph RGW as its object store.
Change-Id: I50b878078b7c62c1034a102d064dfa90a1357ee8
There is no swift_api_port variable.
swift_proxy_server_port is the correct one.
Closes-Bug: #1689260
Change-Id: I63e0edb76603374b479eabf0199c4024ad3e2dbd
'v1' is missing in the DEFAULT/swift_url property in the trove.conf and
trove-taskmanager.conf files.
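The corrected value then has roughly this shape (a sketch; host and
protocol variables assumed):
swift_url = {{ internal_protocol }}://{{ kolla_internal_fqdn }}:{{ swift_proxy_server_port }}/v1/AUTH_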
Closes-Bug: #1677362
Change-Id: I7f625b1ac665a26c4207c3cbb9b0238da82993d8