Several config file permissions are incorrect on the host. In general,
files should be 0660, and directories and executables 0770.
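For example, a minimal sketch using the Ansible file module (the paths shown are illustrative, not taken from this change):

  - name: Fix rabbitmq config file permissions
    file:
      path: "/etc/kolla/rabbitmq/rabbitmq-env.conf"
      mode: "0660"

  - name: Fix rabbitmq config directory permissions
    file:
      path: "/etc/kolla/rabbitmq"
      state: directory
      mode: "0770"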
Change-Id: Id276ac1864f280554e98b937f2845bb424d521de
Closes-Bug: #1821579
Since Id724b44a3edd951fa8b06c9f2c347e9ed8c5ffd9, there is a reference to a
non-existent variable, rabbitmq_confs, which causes deployment to fail if
any rabbitmq configuration file other than config.json is changed.
I'm taking this opportunity to simplify the role, since we can use the Ansible
handler notification system to determine when handlers need to run, without
registering and checking variables. This simpler approach was used in the
haproxy refactor.
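A hedged sketch of the notification pattern (task names, file layout and the
node_config_directory variable follow the usual kolla-ansible conventions and
are assumptions here, not the exact content of this change):

  # tasks/config.yml (illustrative)
  - name: Copying over rabbitmq config
    template:
      src: "rabbitmq.config.j2"
      dest: "{{ node_config_directory }}/rabbitmq/rabbitmq.config"
    notify:
      - Restart rabbitmq container

  # handlers/main.yml (illustrative)
  - name: Restart rabbitmq container
    kolla_docker:
      action: "restart_container"
      name: "rabbitmq"

Ansible runs the handler only if a notifying task reports a change, so no
registered variables need to be checked.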
Change-Id: Ibe0e7fda93afff741243ff9c350db1c8c6e1e6d3
Closes-Bug: #1816053
With this change, an operator is able to stop a single service's
containers without stopping all services on a host.
This change is the starting point for supporting fast-forward upgrades.
In later changes, new flags will be introduced to disable stopping
data plane services during upgrades.
Change-Id: Ifde7a39d7d8596ef0d7405ecf1ac1d49a459d9ef
Implements: blueprint support-stop-containers
Having all services in one giant haproxy file makes altering
configuration for a service both painful and dangerous. Each service
should be configured with a simple set of variables and rendered with a
single unified template.
Two new templates are available:
* haproxy_single_service_listen.cfg.j2: close to the original style, but
only one service per file
* haproxy_single_service_split.cfg.j2: using the newer haproxy syntax
for separated frontend and backend
For now the default will be the single listen block, for ease of
transition.
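A hedged sketch of the per-service variable layout the unified template
consumes (key names and values are illustrative, not the exact schema
introduced here):

  rabbitmq_management:
    enabled: "{{ enable_rabbitmq | bool }}"
    mode: "http"
    port: "{{ rabbitmq_management_port }}"
    host_group: "rabbitmq"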
Change-Id: I6e237438fbc0aa3c89a3c8bd706a53b74e71904b
In order to migrate to the latest release of rabbitmq (3.7), we need to
first remove this deprecated plugin which is no longer supported (the
problems it solved are now addressed in rabbitmq itself).
This avoids a circular dependency in CI where the new images depend on
the new clustering and the new clustering depends on the new images.
Change-Id: I921459f3e40b9e0d4af9497384e49aabf0abe79b
This is the final commit applying resource constraints to all
OpenStack services.
Depends-on: I39004f54281f97d53dfa4b1dbcf248650ad6f186
Change-Id: I072d69be9698be54775cb0ae286ea2b6ed78776c
Implements: blueprint resource-constraints
Add become to all tasks that use the "kolla_docker" module.
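For example (a hedged sketch; the task shown is illustrative):

  - name: Starting rabbitmq container
    become: true
    kolla_docker:
      action: "start_container"
      name: "rabbitmq"
      image: "{{ rabbitmq_image_full }}"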
Change-Id: I4309c4011687b88ec31d739fd8f834fe2326ff10
Partial-Implements: blueprint ansible-specific-task-become
Use the rabbitmq service definition from the role defaults when booting
rabbitmq_bootstrap.
This is not a bug fix, just an enhancement.
Change-Id: I79f0f7efe3308ed4eb898b85a6370be1bd637d9a
- rename action and serial to kolla_ansible and kolla_serial
- use become instead of "sudo <command>" in shell
- Remove quotes from failed_when and changed_when in rabbitmq tasks
Change-Id: I78cb60168aaa40bb6439198283546b7faf33917c
Implements: blueprint migrate-to-ansible-2-2-0
This patchset applies a yamllint check to all *.yml files.
It also fixes syntax errors so the jobs pass.
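The check can be reproduced locally with, for example:

  yamllint .

(the exact gate invocation and any project-specific yamllint configuration
are not shown here).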
Change-Id: I3186adf9835b4d0cada272d156b17d1bc9c2b799
When multiple RabbitMQ nodes are started during a multinode deployment,
they must communicate with each other in order to form a cluster.
However, the RabbitMQ nodes cannot reach one another because the host
name is missing from RabbitMQ's nodename environment variable. As a
result, none of the RabbitMQ nodes can start, which leads to a
deployment failure.
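A hedged sketch of the intent of the fix (the exact mechanism in this change
may differ):

  environment:
    RABBITMQ_NODENAME: "rabbit@{{ ansible_hostname }}"

so that each node advertises a name the other cluster members can resolve.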
Change-Id: I7b4ba76807750db4a14d859454ba650bdaaf23ca
Signed-off-by: Taeha Kim <kthguru@gmail.com>
As an operator I want to be able to monitor the status
of RabbitMQ by collecting metrics such as queue length,
message rates (globally and per channel), and information
about resource usage on the host, such as memory use,
open file descriptors and the state of the cluster. Whilst
it is possible to gather all of this information using
the OpenStack RabbitMQ user configured by Kolla Ansible,
this user has write access to the OpenStack vhost. This
feature adds a monitoring user which has access to all of
the information described above, but does not have write
access. An example of a service which may use the
monitoring user is the RabbitMQ plugin for the Monasca
Agent. As not all users will configure monitoring, by
default the monitoring user is disabled. To create it,
the user should override the rabbitmq_monitoring_user
variable.
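For example, in /etc/kolla/globals.yml (the user name shown is illustrative;
a matching password entry, e.g. rabbitmq_monitoring_password, would normally
be added to passwords.yml, and that name is an assumption here):

  rabbitmq_monitoring_user: "monitoring"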
Implements: blueprint add-monitoring-user-for-rabbit
Change-Id: Ie895ddc59dda1c38faab6305163d9bed6710ff9d
Add config_owner_user and config_owner_group to group_vars/all;
these are the user and group owning the Kolla configuration files in /etc/kolla.
Add become to the post-deploy playbook.
Add become only to the necessary tasks in the following roles:
- certificate
- common
- destroy
- haproxy
- mariadb
- memcached
- rabbitmq
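A minimal sketch of the new variables in group_vars/all (the default values
shown are illustrative):

  config_owner_user: "root"
  config_owner_group: "root"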
Change-Id: I2aba745a6e3928c52642f64551470fd08cbfd058
Partial-Implements: blueprint ansible-specific-task-become
Copy the patterns from the rabbit checks and skip some pre-checks when the
container has already been started. Without this change the pre-checks
fail when you re-run the deploy, i.e. the port is not free because
rabbit is already running on that port.
This bug was triggered because Murano is enabled and the following change
added the extra RabbitMQ instance by default:
d8fe3ea780c188b6e937ab6f08a8475d2330a9fa
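A hedged sketch of the pattern copied from the rabbit checks (module
arguments and variable names are illustrative):

  - name: Get container facts
    kolla_container_facts:
      name:
        - rabbitmq
        - outward_rabbitmq
    register: container_facts

  - name: Checking free port for outward RabbitMQ
    wait_for:
      host: "{{ api_interface_address }}"
      port: "{{ outward_rabbitmq_port }}"
      connect_timeout: 1
      state: stopped
    when: container_facts['outward_rabbitmq'] is not defined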
Closes-Bug: #1715135
Change-Id: I0eb8785e7cd4eadfa792ea14a27f54a891b2bf02
Upgrade fails as outward_rabbitmq does not exist
and cannot therefore be upgraded. Omit it from the
upgrade check and bootstrap it after rabbitmq upgrade.
Remove jinja2 delimiters from the 'Find gospel node' task; this removes warnings.
Change-Id: I3766271c62779c8dbd31e7cf2300473815bbbe68
kolla-kubernetes is using its own configuration generation [0], so it is
time for kolla-ansible to remove the related code to simplify the
logic.
[0] https://github.com/openstack/kolla-kubernetes/tree/master/ansible
Change-Id: I7bb0b7fe3b8eea906613e936d5e9d19f4f2e80bb
Implements: blueprint clean-k8s-config
Certain services such as Murano and Trove require access to a rabbitmq
instance from tenant networks. [0]
Exposing the internal rabbitmq to end users is a security hole, hence
there are two options: 1) use vhosts in the existing rabbitmq, or 2) run
a separate rabbitmq instance. Given the importance of rabbitmq to the
OpenStack deployment, we have decided to go with a separate instance.
Refer to [1] for more detail on the various options.
This change makes the rabbitmq role generic so that it can be reused, in
this case to start 'outward_rabbitmq'. It needs to be exposed via
haproxy both for network isolation and also because this is what Murano
configuration requires.
Follow on patches will be added to add a vhost in this outward instance
for Murano and other services which require access.
Based on the original work by bdaca[2]
[0] http://murano.readthedocs.io/en/stable-liberty/intro/architecture.html
[1] http://lists.openstack.org/pipermail/openstack-dev/2016-December/109091.html
[2] https://review.openstack.org/#/c/374525
Change-Id: Ib2bcc7ed4bf4f883a7cd1dfad3db89201e3cfd8d
Partial-Bug: #1620374
Depends-On: I020eb6219f89a310451becde41f6f1c7f54baadd
Co-Authored-By: Bartłomiej Daca <bartek.daca@gmail.com>
In Ansible 2.3.0, when statements should not include jinja2 templating
delimiters such as {{ }} or {% %}, and the gate is broken with Ansible 2.3.1.
This patchset rewrites the when statement in the rabbitmq precheck task so
that it does not use string interpolation.
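For example, the kind of rewrite involved (illustrative, not the exact task):

  # Before: jinja2 delimiters inside the when statement
  when: "{{ rabbitmq_container is not defined }}"

  # After: bare expression
  when: rabbitmq_container is not defined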
Change-Id: Ie2f1666cc8ced7cf20ceba40c7c7aaec750778f9
Closes-Bug: #1695111
The wait_for module waits 300 seconds by default for the port to be started
or stopped. This is meaningless and useless in a precheck. This patch changes
the timeout to 1 second.
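For example (arguments illustrative):

  - name: Checking free port for RabbitMQ
    wait_for:
      host: "{{ api_interface_address }}"
      port: "{{ rabbitmq_port }}"
      connect_timeout: 1
      timeout: 1
      state: stopped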
Change-Id: I9b251ec4ba17ce446655917e8ef5e152ef947298
Closes-Bug: #1688152
Add a new subcommand 'check' to kolla-ansible, used to run the
smoke/sanity checks.
Add stub files to all services that don't currently have checks.
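Example usage (inventory path illustrative):

  kolla-ansible check -i /etc/kolla/multinode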
Change-Id: I9f661c5fc51fd5b9b266f23f6c524884613dee48
Partially-implements: blueprint sanity-check-container
During the upgrade from Mitaka to Newton, the uid/gid may change for the
same image. In particular on Ubuntu, we moved to Ubuntu Xenial in Newton,
which added a systemd-related user that breaks all the uid/gid mappings
during an upgrade. This breaks the permissions in all Docker named volumes.
This fix extends set_config.py to set the proper permissions during
container start. This is much lighter than adding commands to the
extend_start.sh file or adding Ansible tasks.
This patch only fixes the rabbitmq case. Other services will be fixed in
follow-up patches.
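A hedged sketch of what the extended config.json can express (the permissions
keys follow set_config.py's permissions support; the path and owner shown are
illustrative):

  {
      "command": "/usr/sbin/rabbitmq-server",
      "permissions": [
          {
              "path": "/var/lib/rabbitmq",
              "owner": "rabbitmq:rabbitmq",
              "recurse": true
          }
      ]
  }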
Partial-Bug: #1631503
Change-Id: Ib17027b97abbc9bf4e3cd503601b8010325b5c5b
do_reconfigure.yml was introduced to use the serial directive, but we used
it incorrectly. Now that serial has moved to the playbook file, it is time
to remove the do_reconfigure.yml file.
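For example, serial now lives at the play level in the playbook (the play
shown is illustrative):

  - name: Apply role rabbitmq
    hosts: rabbitmq
    serial: "{{ serial | default('0') }}"
    roles:
      - rabbitmq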
Closes-Bug: #1628152
Change-Id: I8d42d27e6bc302a0e575b0353956eaef9b2ca9fd
This issue still exists when the IPv6 feature is disabled.
This reverts commit 5480bd9b1d3a9efcd3618ddf12718d2621ceeb47.
Change-Id: I1e6c6bff5585cf5a49890668203d6971112c32f1
Useful for upgrade etc., which is preferably done serially.
Example usage: tools/kolla-ansible deploy OR tools/kolla-ansible upgrade
Closes-Bug: #1576708
DocImpact
Change-Id: I34b2e16f8ce53e472a4682a4738c4ac0f5abf00c
rabbitmq's start task contains a precheck. For consistency, this should be
part of the other prechecks.
TrivialFix
Change-Id: I7728ec3f5be3248424d74a4387925b72114b8943
enable_rabbitmq_cluster is now a "yes" by default but you can set it
to "no" if you want to disable clustering under any circumstances.
The agreement made at OpenStack in Austin was that Kolla-Kubernetes
would concentrate on RabbitMQ and MariaDB without clustering but
with persistent storage and workload migration, then examine how to
do proper distributed functionality as the project progresses, so I
am just following what we'd already agreed upon.
First, it helps us deal with issues of version upgrades without
dealing with clustered version upgrades and the synchronization
thereof.
Second, it provides an alternative model for durability when used in
Kubernetes. Understand that, if we disable RabbitMQ's clustering,
Kubernetes is still able to re-schedule the queue off of a failed node
in ways that Kolla-Ansible is not. There are known issues with
RabbitMQ clustering, especially with auto-heal turned on. For many
small-to-mid-sized clusters, it's going to provide for a better
operator experience to have the known potential for a 30 second blip
after RabbitMQ node failure than it is to have the known potential
for partition and data loss and/or manual operations after you've
turned off auto-heal.
Kolla-kubernetes has already turned off host networking for the
RabbitMQ pod; it's safe to set the interface address in the
Kubernetes context.
The question was asked why I don't just set the RabbitMQ cluster to be
a single instance. It's unlikely that Kubernetes RabbitMQ with a
PetSet will be clustered in the same declarative fashion as the
rabbitmq-clusterer plugin. Easier to just disable it and worry about
how to configure the kube-friendly clustered RabbitMQ at a later point
in time. Furthermore, it's an entirely valid case for many OpenStack
control planes hosted atop Kolla-Kubernetes to accept the possibility
of a 30-60 second blip in lieu of the long and questionable history
of RabbitMQ clustering in production.
Co-authored-by: Ryan Hallisey <rhallise@redhat.com>
Change-Id: I7f0cb22d29a418fce4af8d69f63739859173d746
Partially-implements: blueprint api-interface-bind-address-override
Trying to use ConfigMaps in Kubernetes leads to an interesting
problem. We use the file name as the key and the contents of the
file as the text value. The ConfigMap is mounted on the container
as a volume and the key is then used as the name of the file. The
problem is that Kubernetes has a limitation on the name of the key
(see https://github.com/kubernetes/kubernetes/blob/release-1.2/docs/design/identifiers.md),
which means we cannot use '_' in the name of the file.
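For example (a hedged illustration of the limitation; the file names are
hypothetical):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: rabbitmq-config
  data:
    # a key such as "rabbitmq_env.conf" would be rejected because of the '_'
    rabbitmq-env.conf: |
      NODENAME=rabbit@example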
Closes-Bug: #1581162
Change-Id: I2d9ec80f989c30893b019954fe18b3623d27a076