This converts "tags: stepN" to "when: step|int == N" for direct
execution as an Ansible playbook, with a loop variable 'step'.
The tasks all include the explicit |int cast.
This also adds a set_fact task for handling the package removal
via the UpgradeRemovePackages parameter (no change to the interface).
yaml-validate now also checks for duplicate 'when:' statements.
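A minimal sketch of the conversion for a single task (the task shown is
illustrative):

  # before
  - name: Example upgrade task
    service: name=httpd state=stopped
    tags: step2

  # after
  - name: Example upgrade task
    service: name=httpd state=stopped
    when: step|int == 2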
Q upgrade spec @ Ibde21e6efae3a7d311bee526d63c5692c4e27b28
Related Blueprint: major-upgrade-workflow
[0]: 394a92f761/tripleo_common/utils/config.py (L141)
Change-Id: I6adc5619a28099f4e241351b63377f1e96933810
The type changed in:
20d159dc6f
We need to update it, otherwise we get a Puppet error.
Change-Id: If03b7363295f1f529b7acf4a008ff63da8fef173
Closes-Bug: #1723665
We no longer need to force low-level TCP timeouts for dead client
detection, but should continue tuning the timeout for dead peer
detection between cluster nodes. Using the Erlang net_ticktime option
is preferable here.
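Expressed as hieradata for the puppet-rabbitmq module, this looks
roughly like the following (the value is illustrative):

  rabbitmq::config_kernel_variables:
    net_ticktime: 15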
Closes-Bug: 1717006
Change-Id: Ibd29c03bd69818d79396c379a2d638c018a04b82
The pause_minority strategy tends to cause more problems than it
solves. If a partition is brief enough that no nodes are fenced, the
pausing and unpausing of minority nodes (especially during a partial
partition) frequently causes rabbitmq to crash in odd ways consistent
with race conditions.
By ignoring partitions, we will tolerate brief partitions better.
Longer partitions will be handled via fencing, which does not suffer
from race conditions when pausing/unpausing nodes.
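A sketch of the corresponding setting, expressed as hieradata for the
puppet-rabbitmq module:

  rabbitmq::cluster_partition_handling: 'ignore'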
Change-Id: Icb05c6b95a207c4ef818fb90fa9a2c041a5e85cf
This will be used for the replication traffic as specified in the
dependent commit.
bp tls-via-certmonger
Change-Id: Ia53b9edaa6c6cdd48bcdde64969ae6c16f57ae41
Depends-On: I265c89cb8898a6da78a606664a22c50f5e57a847
They should be integers, as specified in the parameter definition
of the class; otherwise it will fail.
Change-Id: I06b6e46c0722516e28e8bff4d481fb4b7a08bd61
Closes-Bug: #1713659
This should be greater than the default value of
corosync_token_timeout, which is 10 seconds. That way, if an entire
cluster node is unavailable, appropriate fencing measures can occur.
With the current settings, brief network interruptions greater than
5 seconds but less than 10 seconds can cause the RabbitMQ cluster to
fail in subtle ways, with no corrective action taken by Pacemaker.
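For illustration, any tick time above corosync's 10-second token
timeout satisfies the constraint (hieradata sketch, value
illustrative):

  rabbitmq::config_kernel_variables:
    net_ticktime: 15   # > 10s corosync token timeout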
Change-Id: I735d43616c5c623c4398d924713012f595b2e5f9
Makes it possible to resolve network subnets within a service
template; the data is transported into a new property, ServiceData,
wired into every service, which hopefully is generic enough to
be extended in the future to transport more data.
The data can be consumed in service templates to set config values
which need to know the subnet where a daemon operates (for
example the Ceph Public vs Cluster network).
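A minimal sketch of a service template consuming this (the
net_cidr_map key and the Ceph hiera key are shown only as an
illustration of the kind of data transported):

  parameters:
    ServiceData:
      default: {}
      description: Dictionary packing service data
      type: json
  ...
  outputs:
    role_data:
      value:
        config_settings:
          ceph::profile::params::cluster_network:
            get_param: [ServiceData, net_cidr_map, storage_mgmt]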
Change-Id: I28e21c46f1ef609517175f7e7ee19e28d1c0cba2
When a service is enabled on multiple roles, the parameters for the
service are global. This change adds an option to provide
role-specific parameters to services and other templates.
Two new parameters, RoleName and RoleParameters, are added to the
service template. RoleName provides the name of the role on which the
current instance of the service is being applied. RoleParameters
provides the list of parameters which are configured specifically for
that role in the environment file, like below:
  parameter_defaults:
    # Default value applied to all roles
    NovaReservedHostMemory: 2048
    ComputeDpdkParameters:
      # Applied only to the ComputeDpdk role
      NovaReservedHostMemory: 4096
In the above sample, the cluster contains two roles - Compute and
ComputeDpdk. The values of ComputeDpdkParameters will be passed on to
the templates as RoleParameters while creating the stack for the
ComputeDpdk role. A parameter which supports role-specific
configuration should be looked up first in the RoleParameters list;
if not found there, the default (for all roles) should be used, as in
the sketch below.
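One way a service template can implement that lookup is a pair of
nested map_replace calls (a sketch; the NovaReservedHostMemory wiring
is illustrative):

  RoleParametersValue:
    type: OS::Heat::Value
    properties:
      type: json
      value:
        map_replace:
          - map_replace:
              - nova::compute::reserved_host_memory: NovaReservedHostMemory
              - values: {get_param: RoleParameters}
          - values:
              NovaReservedHostMemory: {get_param: NovaReservedHostMemory}

The inner map_replace picks up any role-specific value from
RoleParameters; whatever is left untouched then falls back to the
global default in the outer map_replace.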
Implements: blueprint tripleo-derive-parameters
Change-Id: I72376a803ec6b2ed93903cc0c95a6ffce718b6dc
In change Ib62001c03e1e08f58cf0c6e0ba07a8879a584084 we switched the
rabbitmq queues HA mode from ha-all to ha-exactly. While this gives us a
nice performance boost with rabbitmq, it makes rabbit less resilient to
network glitches as we painfully found out via
https://bugzilla.redhat.com/show_bug.cgi?id=1441635.
This is the THT part of the change, which switches the default back to
ha-mode: all.
Closes-Bug: #1686337
Co-Authored-By: Damien Ciabrini <dciabrin@redhat.com>
Co-Authored-By: John Eckersberg <jeckersb@redhat.com>
Change-Id: I7afcf2b3c8deb13fc2134e4cae9c06a44e775384
Depends-On: I9a90e71094b8d8d58b5be0a45a2979701b0ac21c
Usually a nested stack is used that contains the TLS-everywhere bits
(config_settings and metadata_settings). Nested stacks are very
resource intensive, so instead of using nested stacks, this patch
changes that to use a conditional and outputs the necessary
config_settings and metadata_settings that way, in an attempt to save
resources.
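A rough sketch of the conditional form (hiera keys are illustrative;
EnableInternalTLS is the usual switch for this):

  conditions:
    internal_tls_enabled: {equals: [{get_param: EnableInternalTLS}, true]}

  outputs:
    role_data:
      value:
        config_settings:
          map_merge:
            - example::common_setting: true      # always applied
            - if:
                - internal_tls_enabled
                - example::tls_setting: true     # only when TLS is enabled
                - {}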
Change-Id: Ic25f84a81aefef91b3ab8db2bc864853ee82c8aa
As with other services, this passes the necessary hieradata to enable
TLS for RabbitMQ. This will mean (once we set it via puppet-tripleo)
that there will only be TLS connections, as the ssl_only option is being
used.
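A sketch of the kind of hieradata this ends up passing (the
certificate paths are illustrative; the real ones are managed via
certmonger):

  rabbitmq::ssl: true
  rabbitmq::ssl_only: true
  rabbitmq::ssl_cert: /etc/pki/tls/certs/rabbitmq.crt
  rabbitmq::ssl_key: /etc/pki/tls/private/rabbitmq.key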
bp tls-via-certmonger
Change-Id: I960bf747cd5e3040f99b28e2fc5873ca3a7472b5
Depends-On: Ic2a7f877745a0a490ddc9315123bd1180b03c514
When deploying with EnablePackageInstall:True, the rabbitmq puppet
module defaults to the rpm package provider, which then tries to
"rpm -i undef" since we are setting rabbitmq::package_source to undef.
Instead of using the rpm provider at all, we should just use the yum
provider to install whatever rabbitmq RPMs are found in the enabled
repos.
Change-Id: I29365e675bfde676fde7a54dfc6c660c3970f50a
Partially-implements: blueprint split-stack-software-configuration
With this change we export ERL_EPMD_ADDRESS set to the
address rabbitmq is listening on. We need to explicitly
export it so that epmd can pick it up and bind to that address.
Closes-Bug: #1645898
Change-Id: Iacb2ee262da419f61ec3511f42b395f69f5d14da
Heat now supports release name aliases, so we can replace
the inconsistent mix of date-related versions with one consistent
version that aligns with the supported version of Heat for this
t-h-t branch.
This should also help new users who sometimes copy/paste old templates
and discover that intrinsic functions in the t-h-t docs don't work
because their template version is too old.
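For example, replacing an old date-based version with the alias for
the branch's supported Heat release (the alias shown is illustrative):

  # before
  heat_template_version: 2015-04-30
  # after
  heat_template_version: pike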
Change-Id: Ib415e7290fea27447460baa280291492df197e54
RabbitMQ's puppet manifest configures the node's IP and port through
environment variables. While this would usually be fine, it doesn't
allow us to use TLS only, since it will always try to start a TCP
listener. By setting these values through the config file instead,
they are effectively discarded when ssl_only is set for RabbitMQ,
which lets us use an SSL listener on the same port.
Change-Id: I33d051a8c740baf69b99517378e1f9b0f3cc1681
This seems to have broken the updates job, causing it to fail
with the following error:
Can't set long node name!\nPlease check your configuration\n
Related-Bug: 1646873
This reverts commit 3e9fcfd09320ace07bc1bd4cb57feb98cd057332.
Change-Id: I72ba891cd9cd8c4f1bc204144f46aaabbdfd3647
This shows how we could wire in the upgrade steps using Ansible,
as was previously proposed e.g. in https://review.openstack.org/#/c/321416/
but it's more closely integrated with the new composable services
architecture.
It's also very similar to the approach taken by SpinalStack where
ansible snippets per-service were combined then run in a series of
steps using Ansible tags.
This patch just enables the upgrade of keystone - we'll add support
for other services in subsequent patches.
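A sketch of what the per-service hook looks like in a service template
(the task itself is illustrative):

  outputs:
    role_data:
      value:
        upgrade_tasks:
          - name: Stop keystone service (running under httpd)
            tags: step2
            service: name=httpd state=stopped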
Partially-Implements: blueprint overcloud-upgrades-per-service
Change-Id: I39f5426cb9da0b40bec4a7a3a4a353f69319bdf9
After a brand new deployment we have the following in rabbitmq.config:
  ...
  {rabbit, [
    {tcp_listen_options,
      [binary,
       {packet, raw},
       {reuseaddr, true},
       {backlog, 128},
       {nodelay, true},
       {exit_on_close, false}]
    },
    {tcp_listen_options, [binary, {packet, raw}, {reuseaddr, true},
     {backlog, 128}, {nodelay, true}, {exit_on_close, false},
     {keepalive, true}]},
  ...
Let's remove these duplicate entries and make sure that we use the
parameters of the puppet module to set the following value
explicitly (it's the only option where we do not use the default
setting from the puppet module):
  keepalive = true -> rabbitmq::tcp_keepalive: true
All the other options that we set are the defaults in the puppet module:
  {packet, raw}
  {reuseaddr, true}
  {backlog, 128}
  {nodelay, true}
  {exit_on_close, false}
Depends-On: I608477d5714a5081b3b4ab3b9fc2932bdd598301
Change-Id: I35921652bd84d1d6be0727051294983d4a0dde10
It turns out that reducing the number of rabbitmq queues in the cluster
significantly improves the performance of the cluster, especially in
the case of failover recovery time. Right now the cluster uses ha-all
mode for rabbitmq queues.
It is best to change this to "ha-exactly" mode and reduce the number
of queue copies to ceil(N/2), where N is the number of controllers in
the cluster - so in the typical scenario of 3 controllers it would be
2 by default.
It does not make much sense to keep copies of queues across the whole
cluster, since if the quorum of nodes is lost then the rest of the
cluster nodes will be stopped anyway. We let the user override this
with a parameter.
I.e. for a 3 node controlplane cluster we will go from this:
pcs resource show rabbitmq
Resource: rabbitmq (class=ocf provider=heartbeat type=rabbitmq-cluster)
Attributes: set_policy="ha-all ^(?!amq\.).* {"ha-mode":"all"}"
To this:
pcs resource show rabbitmq
Resource: rabbitmq (class=ocf provider=heartbeat type=rabbitmq-cluster)
Attributes: set_policy="ha-all ^(?!amq\.).* {"ha-mode":"exactly","ha-params":2}"
According to Marin Krcmarik's testing, recovery time from failure was
reduced significantly.
Partial-Bug: #1628998
Change-Id: Iace6daf27a76cb8ef1050ada0de7ff1f530916c6
It may happen that one of the controllers becomes unavailable and
Queue Masters will be located on the available controllers during
queue declarations. Once the lost controller becomes available again,
masters of newly declared queues are not placed with priority on that
controller, which obviously has a lower number of queue masters, so
the distribution may become unbalanced and one of the controllers may
come under significantly higher load after multiple fail-overs.
With version 3.6.0, RabbitMQ introduced a new HA feature of Queue
Master distribution - one of the strategies is min-masters, which
picks the node hosting the minimum number of masters.
One of the ways to turn the min-masters strategy on is by adding the
following to the configuration file - rabbitmq.config:
  {rabbit,[ ..
    {queue_master_locator, <<"min-masters">>},
  .. ]},
Change-Id: I61bcab0e93027282b62f2a97bec87cbb0a6e6551
Closes-Bug: #1629010
Currently in puppet/services/rabbitmq.yaml we hardcode the thread pool
size to 30 (via the +A30 snippet):
  rabbitmq_environment:
    RABBITMQ_SERVER_ERL_ARGS: '"+K true +A30 +P 1048576 -kernel inet_default_connect_options [{nodelay,true},{raw,6,18,<<5000:64/native>>}] -kernel inet_default_listen_options [{raw,6,18,<<5000:64/native>>}]"'
Upstream rabbit has gained the ability to dynamically configure the
number of threads since 3.6.2 via the following commit:
41ce5ad808
Given that the default was hardcoded in rabbit from at least 3.4.0 up
until 3.6.2 (see the LP bug associated with this commit), we can
actually remove this hardcoded value as it overrides a sane default.
Before the change:
/usr/lib64/erlang/erts-7.3.1/bin/beam.smp -W w -A 64 -K true -A30 -P 1048576 ...
After the change:
/usr/lib64/erlang/erts-7.3.1/bin/beam.smp -W w -A 64 -K true -P 1048576 ...
So effectively with this change we will have the following:
- With older rabbitmq versions we keep the +A30 default
- With rabbitmq versions >= 3.6.2 the thread number is dynamically
computed to nr_cpus * 16
Change-Id: I8d30c7d141c29fcc439d40fc767498520be7966e
Closes-Bug: #1625486
Currently the RabbitMQ cluster uses a predefined port, 35672, for
clustering. This port belongs to the so-called ephemeral port range.
Ephemeral ports are the ports the kernel assigns to an application if
it doesn't specify which port to open, so there is a small chance that
an application started before RabbitMQ itself could grab this port.
While rather unlikely, we did see this happen.
The SELinux change should already be in place. On my CentOS 7 we have:
rabbitmq_port_t tcp 25672
corenet_tcp_bind_rabbitmq_port(rabbitmq_t)
corenet_tcp_connect_rabbitmq_port(rabbitmq_t)
First noted via:
https://bugzilla.redhat.com/show_bug.cgi?id=1357522
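Expressed as hieradata for the puppet-rabbitmq module, pinning the
clustering port looks roughly like this:

  rabbitmq::config_kernel_variables:
    inet_dist_listen_min: 25672
    inet_dist_listen_max: 25672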
Closes-Bug: #1623818
Depends-On: I0bcd0d063a7a766483426fdd5ea81cbe1dfaa348
Change-Id: I995bd96c2a17614e954ea5bbae4d58998ef420dc
- adds the possibility to install sensu-client on all nodes
- each composable service has its own subscription
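A sketch of how a composable service exposes its subscription (service
and parameter names are illustrative):

  outputs:
    role_data:
      value:
        service_name: nova_compute
        monitoring_subscription: {get_param: MonitoringSubscriptionNovaCompute}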
Co-Authored-By: Emilien Macchi <emilien@redhat.com>
Co-Authored-By: Michele Baldessari <michele@redhat.com>
Implements: blueprint tripleo-opstools-availability-monitoring
Change-Id: I6a215763fd0f0015285b3573305d18d0f56c7770
This moves the config settings out of controller.yaml for RabbitMQ
and into puppet/services/rabbitmq.yaml.
Related-Bug: #1604414
Change-Id: I6b3d71653fb91b89b85dae7df4088afff22b71ac
This patch adds a new DefaultPasswords parameter to
composable services. This is needed to help provide
access to the top-level password resources that overcloud.yaml
currently manages (passwords for Rabbit, MySQL, etc.).
Moving the RandomString resources into composable services
would cause them to regenerate within the stack. With this
approach we can leave them where they are while we deprecate
the top-level mechanism and move the code that uses the
passwords into the composable services.
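A sketch of how a composable service can consume one of these
passwords (the key name is illustrative):

  parameters:
    DefaultPasswords:
      default: {}
      type: json
  ...
  config_settings:
    rabbitmq::default_pass: {get_param: [DefaultPasswords, rabbit_password]}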
Change-Id: I4f21603c58a169a093962594e860933306879e3f
This will be needed to pick, from within the service template, the
network the service has to bind to.
Change-Id: I52652e1ad8c7b360efd2c7af199e35932aaaea8c
Migrate puppet/hieradata/*.yaml parameters to puppet/services/*.yaml
except for some services that are not composable yet.
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Change-Id: I7e5f8b18ee9aa63a1dffc6facaf88315b07d5fd7
Split out the firewall rules in puppet/hieradata/controller.yaml
into the composable services
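For example, the RabbitMQ rules end up in its service template along
these lines (ports shown are the usual RabbitMQ ones):

  config_settings:
    tripleo.rabbitmq.firewall_rules:
      '109 rabbitmq':
        dport:
          - 4369
          - 5672
          - 25672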
Depends-On: Id370362ab57347b75b1ab25afda877885b047263
Change-Id: Icaecab100d3f278035fbbb3facb9bf6c62c76c03
This patch adds a new service_name section to each composable
service. An explicit unit test check in tools/yaml-validate.py now
ensures that service_name exists.
This patch also wires service_names into hieradata on each
of the roles so that tools can access the deployed services locally
during deployment and upgrades.
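A minimal sketch of the new section in a service template:

  outputs:
    role_data:
      value:
        service_name: rabbitmq
        config_settings:
          # service-specific hieradata goes here
        step_config: |
          include ::tripleo::profile::base::rabbitmq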
Change-Id: I60861c5aa760534db3e314bba16a13b90ea72f0c
We now allow 65536 open file descriptors to better reflect the
real-world settings of downstream consumers of TripleO.
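Expressed as hieradata for the puppet-rabbitmq module, this is
roughly:

  rabbitmq::file_limit: 65536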
Change-Id: Ib04ea6afb2da1a9101839d9d70bc8891d69700ec
By passing the MysqlVirtualIP via the EndpointMap we won't need it
to be provided as a parameter to the services.
This follows what is already happening for the glance registry
service with I9186e56cd4746a60e65dc5ac12e6595ac56505f0.
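A sketch of the kind of lookup this enables within a service template
(the hiera key is illustrative):

  config_settings:
    example_service::database_host: {get_param: [EndpointMap, MysqlInternal, host]}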
Change-Id: Iad2ab389bf64d0fc8b06eb0e7d29b5370ff27dff
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Change the way RabbitMQ is implemented, making it a composable role.
Implements: blueprint refactor-puppet-manifests
Change-Id: I5fed5c437ad492af75791a9163f99ae292f58895