Due to a shortcoming in the QManager implementation [1], when uWSGI is
used on metal hosts, all services end up with the same
hostname/processname pair, making them fight over the same file
under SHM.
In order to avoid this, we prepend the service_name to the hostname.
We cannot change the processname instead, since that would lead to
contention between different processes of the same service.
[1] https://bugs.launchpad.net/oslo.messaging/+bug/2065922
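As a rough sketch of the approach (the variable and fact names below are illustrative, not the exact ones used by the role)::
  # Build a per-service hostname so each service gets its own QManager
  # state file under /dev/shm instead of fighting over a shared one.
  _qmanager_hostname: "{{ service_name ~ '-' ~ ansible_facts['hostname'] }}"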
Change-Id: I997c9692060b75f520cb94eacb69520b22a2f87a
During the last release cycle oslo.messaging landed a series of extremely
useful changes [1] designed to implement modern messaging
techniques for RabbitMQ quorum queues.
Since these changes are breaking and require queues to be re-created,
it makes sense to align them with the migration to quorum queues by default.
[1] https://review.opendev.org/q/topic:%22bug-2031497%22
Change-Id: Id608c4ea0638c7ebd242399726d493f767b0f04b
In order to be able to globally enable notification reporting for all services,
without needing Ceilometer deployed or a bunch of overrides for each
service, we add the `oslomsg_notify_enabled` variable, which controls
whether notifications are enabled.
The presence of Ceilometer is still respected and referenced by default.
A potential use case is billing panels that rely on notifications
but do not require Ceilometer.
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/914144
Change-Id: Ia29caf1633d5467e870226763088065fde54f12d
This change implements and enables quorum queue support for RabbitMQ
by default, and provides default variables to globally tune
its behaviour.
In order to ensure an upgrade path and the ability to switch back to HA queues,
we change vhost names by removing the leading `/`, since enabling quorum
requires removing the exchange, which is tricky to do with running
services.
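A minimal sketch of the global toggle (the exact tuning variable names may differ from what is shown here)::
  # user_variables.yml: switch the queue type for all services; set to
  # False to fall back to classic HA queues.
  oslomsg_rabbit_quorum_queues: True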
Depends-On: https://review.opendev.org/c/openstack/ansible-role-zookeeper/+/899386
Change-Id: I8c418906b75edb633948f2c074170454a8f3e2d0
While <service>_galera_port is defined and used by the db_setup
role, it is not actually used in the connection string for oslo.db.
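For reference, once wired in, the rendered connection string would look roughly like the following (credentials and host are placeholders)::
  designate_galera_port: 3306
  # expected rendering in the [database] section:
  # connection = mysql+pymysql://designate:<password>@<galera_host>:3306/designate?charset=utf8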
Change-Id: I8c21f0f61537c74813a5e29e2e370dc8c50df61f
By overriding the variable `designate_backend_ssl: True`, HTTPS will
be enabled and HTTP support will be disabled on the designate backend API.
The ansible-role-pki role is used to generate the required TLS
certificates if this functionality is enabled.
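For example (a sketch; only the toggle itself is introduced by this change)::
  # user_variables.yml: serve the designate backend API over HTTPS using
  # certificates generated by the ansible-role-pki role.
  designate_backend_ssl: True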
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/879085
Change-Id: Id5c18a7305c744a2b0252f62debb1b5654e4abd7
Implement support for service tokens. For that we convert
role_name to a list and rename the corresponding variable.
Additionally, service_type is now defined for keystone_authtoken, which
makes it possible to validate tokens with restricted access rules.
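A sketch of the resulting keystone_authtoken options (the override variable and values below are illustrative, not the role's exact defaults)::
  designate_designate_conf_overrides:
    keystone_authtoken:
      service_type: dns
      service_token_roles_required: True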
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/845690
Change-Id: I7eafa6b989a2fd726369b3959b5e6ba024b82274
- Implemented a new variable ``connection_recycle_time`` responsible for SQLAlchemy's connection recycling (see the sketch below).
- Set new default values for DB pooling variables, which are inherited from the global ones.
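A minimal sketch, assuming the usual service-level naming pattern (only ``connection_recycle_time`` itself is named by this change)::
  # user_variables.yml: recycle SQLAlchemy connections after 10 minutes;
  # the designate-level variable name is illustrative of the pattern.
  designate_db_connection_recycle_time: 600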
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/819424
Change-Id: I28c64b44eadfd726e07cb7159e5d3d94fde106ed
With the PKI role in place, in most cases you don't need to explicitly
provide a path to the CA file, because the PKI role ensures the CA is trusted
by the system overall. Meanwhile, PyMySQL [1] requires you to either
provide a CA file, provide a cert/key pair, or enable verification.
Since the current behaviour is to provide a path to the custom CA, we expect
the certificate to be trusted overall. Thus we enable certificate verification
when galera_use_ssl is True.
[1] 78f0cf99e5/pymysql/connections.py (L267)
Change-Id: Ic5b072d983c6d553d996a0a3bd708eec4c2137e5
This patch adds a prefix for memcached_server in each role to give
deployers the ability to override the location of the memcached
cluster, e.g. users who want to create a single memcached cluster
with k8s for each service.
We also add pymemcache based on [1].
[1] https://review.opendev.org/711429
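A sketch of the kind of override this enables (the prefixed variable name and endpoint are illustrative)::
  # user_variables.yml: point designate at a dedicated memcached cluster,
  # e.g. one running in k8s, instead of the shared deployment-wide one.
  designate_memcached_servers: "memcached.designate.svc.cluster.local:11211"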
Change-Id: I152220d4c440202de1a61b1aee891bdb659e5577
When using the designate public endpoint through haproxy, designate
returns a paging link with the incorrect protocol. This change
makes sure designate configures oslo_middleware to parse the
X-Forwarded-Proto header set by haproxy.
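A sketch of the rendered option (shown here as a config override; the override variable name is illustrative)::
  designate_designate_conf_overrides:
    oslo_middleware:
      # trust the X-Forwarded-Proto header set by haproxy so paging links
      # are built with the correct protocol
      enable_proxy_headers_parsing: True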
Change-Id: Ia3288ccfc655a2814c36204a5cb381d3aa57e53e
Closes-Bug: #1713663
The notification driver setup was rendering the driver and connection string
on the same line. This is caused by the case statement and how jinja formats
the template when a case statement is present. This change modifies how the
driver string is created by using a ternary, which eliminates the case
statement and renders the value of the driver correctly.
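A rough sketch of the template change (the condition variable is illustrative)::
  # Before: a multi-line conditional block left the driver and connection
  # string on one rendered line. After: a single ternary expression.
  driver = {{ designate_ceilometer_enabled | ternary('messagingv2', 'noop') }}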
Change-Id: I2645beb3eed1948f66f76fc7eb45e14923abfa78
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
With the policy-in-code changes merged in Designate, there
is no longer a default policy.json file in the venv, so we
need to change how we implement the file, and should only do
so if a config override is configured for it.
If there is no policy override configured, but a policy.json
file is present, then it's likely left over from a previous
build. To ensure that we do not carry legacy configuration
files which override the policy-in-code we remove the legacy
file. This is done on restart to ensure that the policy still
applies until the code is updated.
Change-Id: Iea4d2029723529444b93d7deca58824e592d0e0f
This patch adds conditional inclusion of the notification
section of the service configuration. This ensures that oslo.messaging
notifications use the correct transport for deployments that have
separate rpc and notify messaging backends. For example, if the
transport_url is not provided in the notification section of the
service configuration, the transport_url specified in the default
section will be used instead.
This patch conditionally selects the notifier driver. The noop
driver will be selected when notification publishing is disabled.
The messagingv2 driver is selected when notification publishing is
enabled.
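A sketch of the rendered section when publishing is enabled and a separate notify backend is in use (expressed as a config override; values are placeholders)::
  designate_designate_conf_overrides:
    oslo_messaging_notifications:
      driver: messagingv2   # 'noop' when notification publishing is disabled
      transport_url: "rabbit://designate:<password>@<notify_host>:5672/<notify_vhost>"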
Change-Id: Ie5cc5499980f986f1a7e530adf42f4dcc43fbaca
Closes-Bug: #1794320
This change allows the deployer to set the project that will be the owner
of managed resources like auto-created records and zones.
The owner is specified using the project name and defaults to the service
tenant.
Depends-On: https://review.openstack.org/628979
Change-Id: I620be82d890aaa547decc59f81f55345f7177900
This removes the systemd service templates and tasks from this role and
leverages a common systemd service role instead. This change removes a
lot of code duplication across all roles without sacrificing features
or functionality. The intention of this change is to ensure uniformity and
reduce the maintenance burden on the community when sweeping changes are
needed.
The systemd journal would normally be populated with the standard output of
a service; however, with the use of uwsgi this is not actually happening,
resulting in us only capturing the logs from the uwsgi process instead
of the service itself. This change implements journal logging in the
service config, which is part of oslo.log.
oslo.log journal docs found here: <https://docs.openstack.org/oslo.log/3.28.1/journal.html>
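A sketch of the option this turns on (shown as an override for illustration)::
  designate_designate_conf_overrides:
    DEFAULT:
      # send service logs straight to the systemd journal via oslo.log
      use_journal: True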
Change-Id: I9764f557007d97cfcbe02abf7166cce423b39a31
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
Now that the v2.0 API has been removed, we don't have a reason to
include deployment instructions for two separate applications on
different ports.
Depends-On: I3df2c670beeb78baaa1515bcd27e8f2b0d95b3a9
Change-Id: I1e33bc9f67287d1f11e5b04be6bf39522bd5ec6a
This introduces oslo.messaging variables that define the RPC and Notify
transports for the OpenStack services. These parameters replace the
rabbitmq values and are used to generate the messaging transport_url for
the service. The association of the messaging backend server to the
oslo.messaging service will be transparent to the designate service.
This patch:
* Add oslo.messaging variables for RPC and Notify to defaults
* Update transport_url generation (add for notification); see the sketch after this list
* Add oslo.messaging to tests inventory
* Update tests
* Add release note
* Update README and example playbook
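An illustrative sketch of the generated URLs (variable names, hosts and credentials here are placeholders, not the role's exact defaults)::
  designate_oslomsg_rpc_transport_url: "rabbit://designate:<password>@<rpc_host>:5672/<rpc_vhost>"
  designate_oslomsg_notify_transport_url: "rabbit://designate:<password>@<notify_host>:5672/<notify_vhost>"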
Change-Id: I620a13e1ea3c24c8bd31c02206613d37d769dd30
Option auth_uri from group keystone_authtoken is deprecated [1].
Use option www_authenticate_uri from group keystone_authtoken instead.
[1] https://review.openstack.org/#/c/508522/
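A sketch of the rendered option after the switch (shown as an override; the endpoint is a placeholder)::
  designate_designate_conf_overrides:
    keystone_authtoken:
      www_authenticate_uri: "http://<internal_lb_vip>:5000"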
Change-Id: Iad53f424b4b6e197653bf9ddb428a78f75126dbd
Implements: blueprint deprecate-auth-uri-option
When 'designate_galera_use_ssl' is True, use an encrypted connection to
the database using either a self-signed or user-provided CA certificate.
A new non-voting test has been added to verify that the role remains
functional when enabling SSL features.
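For example (a sketch; only the toggle itself comes from this change)::
  # user_variables.yml: encrypt database traffic; a deployer-provided CA
  # may be supplied, otherwise the self-signed CA is used.
  designate_galera_use_ssl: True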
Change-Id: I0d8e3b685faa8d394fd56f8fbfd9b492d2c2cb60
Partial-Bug: 1667789
Option "rpc_backend" from group "DEFAULT" is deprecated for removal
(Replaced by [DEFAULT]/transport_url). Its value may be silently
ignored in the future.
Change-Id: I85357a7ebae37c1e32f2f4558a195bff414a1341
Implements: blueprint deprecate-rpc-backend
Option "rabbit_use_ssl" from group "oslo_messaging_rabbit" is deprecated.
Use option "ssl" from group "oslo_messaging_rabbit".
Change-Id: I14863182c3b181156341e0b4d841f1ced3396f74
Implements: blueprint deprecate-rabbit-use-ssl
The systemd unit 'TimeoutSec' value which controls the time
between sending a SIGTERM signal and a SIGKILL signal when
stopping or restarting the service has been reduced from 300
seconds to 120 seconds. This provides 2 minutes for long-lived
sessions to drain while preventing new ones from starting
before a restart or a stop.
The 'RestartSec' value which controls the time between the
service stop and start when restarting has been reduced from
150 seconds to 2 seconds to make the restart happen faster.
These values can be adjusted by using the *_init_config_overrides
variables which use the config_template task to change template
defaults.
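For example, a deployer who prefers the previous timings could sketch an override like this (the exact override variable name depends on the service)::
  designate_central_init_config_overrides:
    Service:
      TimeoutSec: 300   # previous SIGTERM-to-SIGKILL window
      RestartSec: 150   # previous stop-to-start delay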
Change-Id: I20ea80146f5ed7fb10d49068039af4c19cc27f39
This creates a specific slice which all OpenStack services will operate
from. By creating an independent slice these components will be governed
away from the system slice allowing us to better optimise resource
consumption.
See the following for more information on slices:
* https://www.freedesktop.org/software/systemd/man/systemd.slice.html
See the following for more information on resource controls:
* https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html
Tools like ``systemd-cgtop`` and ``systemd-cgls`` will now give us
insight into specific processes, process groups, and resource consumption
in ways that we've not had access to before. To enable some of this reporting,
the accounting options have been added to the [Service] section of the unit
file.
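A sketch of the kind of [Service] settings involved, expressed here as an init override for illustration (variable name and values are indicative only)::
  designate_api_init_config_overrides:
    Service:
      Slice: openstack.slice    # run under the shared OpenStack slice
      CPUAccounting: true
      MemoryAccounting: true
      TasksAccounting: true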
Change-Id: I5885643199db3ef618fc86f0cd80c14f1d7c89c4
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
designate-pool-manager and designate-zone-manager are deprecated
in favor of designate-worker and designate-producer. This enables
those services.
This does not functionally change the way Designate works, so the
associated config changes are minimal. This does remove some
cumbersome pool manager cache configuration that is no longer
needed, but wasn't being used anyway. It also simplifies the
Designate architecture by making the separation of duties easier
to grok, and enables simple horizontal scaling by starting more
``designate-worker`` processes.
Change-Id: I7adb2cea21136c18f36e0ed6404989d4e5de8e4d
This adds the ability for a user to configure the Designate
pools.yaml file inside of the role by specifying an attribute.
Because the data required is yaml, it's a nice mapping to specify
the yaml attribute and have it dumped directly to the pools.yaml
file.
This allows users to use attributes from other plays (perhaps setting
up some complex DNS infrastructure in their cloud) and insert them
into Designate without having to write their own template or supply
their own file.
This also invokes the `designate-manage` command to load the pools.yaml
file into the Designate database, and simplifies the tests that
handle the pools.yaml setup.
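A minimal sketch of the attribute (the variable name and values are illustrative; real deployments will match their own DNS infrastructure)::
  designate_pools_yaml:
    - name: default
      description: Default BIND9 pool
      ns_records:
        - hostname: ns1.example.com.
          priority: 1
      nameservers:
        - host: 192.0.2.10
          port: 53
      targets:
        - type: bind9
          masters:
            - host: 192.0.2.1
              port: 5354
          options:
            host: 192.0.2.10
            port: 53
            rndc_host: 192.0.2.10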
Change-Id: I11a849898bf33aa6b8aa6605296ac7fd733d7c01
Change the 'designate_service_names' variable from a list to a dictionary
mapping services to the groups that install those services. This brings the
method into line with the one used in the os_neutron role in order to
implement a more standardised approach.
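A rough sketch of the new shape (keys and group names here are indicative only)::
  designate_services:
    designate-api:
      group: designate_api
      service_name: designate-api
    designate-central:
      group: designate_central
      service_name: designate-central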
The init tasks have been updated to run once and loop through this
mapping rather than being included multiple times and re-run against
each host. This may potentially reduce role run times.
Currently the reload of upstart/systemd scripts may not happen if
only one script changes, since the task uses a loop with only one result
register. This patch implements handlers to reload upstart/systemd
scripts to ensure that this happens when any one of the scripts
changes.
The handler to reload the services now only tries to restart the
service if the host is in the group for the service according to the
service group mapping. This allows us to ensure that handler
failures are no longer ignored and that no execution time is wasted
trying to restart services which do not exist on the host.
Finally:
- Common variables shared by each service's template files have
been updated to use the service namespaced variables.
- Unused handlers have been removed.
- Unused variables have been removed.
Change-Id: I8b3df067d5e27711d9f962d74932c818a506e77a
OSLO logging currently defaults the 'use_stderr' option to True,
which results in duplicate entries in the service daemon logs for both
upstart and systemd. To correct this issue, the use_stderr
option has been set to false.
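A sketch of the resulting setting (shown as an override for illustration)::
  designate_designate_conf_overrides:
    DEFAULT:
      use_stderr: False   # avoids duplicate entries in the daemon logs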
Change-Id: I2f052dea9f1fe3de8328c2674399153349f7cba2
Closes-Bug: 1588051
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
All rabbitmq connection vars are now namespaced. Namespace support
was previously inconsistent which limited deployer override options.
Change-Id: I2593239e98aa3ebd578d030206299d4868d036ca
Implements: blueprint multi-rabbitmq-clusters
This change updates the os_designate role to support Ubuntu 16.04
and systemd in addition to Ubuntu 14.04 and upstart. Changes are
patterned on those made in the os_glance role.
Change-Id: I49b6271a2046b322b9ba57703331ad49aba1bc9d
Implements: blueprint support-ubuntu-1604
Remove all tasks and variables related to toggling between installation
of designate inside or outside of a Python virtual environment.
Installing within a venv is now the only supported deployment.
Additionally, a few changes have been made to make the creation of the
venv more resistant to interruptions during a run of the role.
* unarchiving a pre-built venv will now also occur when the venv
directory is created, not only after being downloaded
* virtualenv-tools is run against both pre-built and non pre-built venvs
to account for interruptions during or prior to unarchiving
Change-Id: If3f0cb96d0ac670f6c53243283d6726067cba011
Implements: blueprint only-install-venvs