Currently, policy.json is placed in
"{{ node_config_directory }}/{{ service_name }}"
on the target nodes.
Relocate policy.json to "{{ node_config_directory }}/{{ item }}",
where item is the corresponding service component's config directory.
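As a rough sketch of the relocated copy task (the source variable and
the component list below are illustrative, not the exact ones in this
patch):

  # Placeholder names; the real task iterates the service's components.
  - name: Copying over existing policy.json
    template:
      src: "{{ murano_policy_file }}"
      dest: "{{ node_config_directory }}/{{ item }}/policy.json"
    with_items:
      - "murano-api"
      - "murano-engine"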
Currently, policy.json is copied for all services, but this should be
reviewed and the file kept only for the services that need it
(in many cases, only the API service does).
Redundant files will be removed in a follow-up patchset.
Change-Id: I0e997dccf4ec438c9c0436db71ec2fd06650f50d
Closes-Bug: #1639686
* Add the api_workers option in the murano group.
* Move engine workers from the workers option to the engine_workers
  option in the engine group.
Change-Id: I746a4e3c69acfd809e167e14a30cc8ed6b0512fb
Closes-Bug: #1638793
Allow operators to use their own custom policy files.
Avoid maintaining policy files in the kolla repos; only copy
the files when an operator adds a custom config.
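A minimal sketch of the intended behaviour, assuming a
node_custom_config layout (paths and names are illustrative):

  # Only copy a policy file if the operator provides one; otherwise
  # nothing is shipped to the node.
  - name: Check if a custom policy file exists
    local_action: stat path="{{ node_custom_config }}/murano/policy.json"
    run_once: True
    register: murano_policy

  - name: Copying over the custom policy file
    template:
      src: "{{ node_custom_config }}/murano/policy.json"
      dest: "{{ node_config_directory }}/murano-api/policy.json"
    when: murano_policy.stat.exists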
Implements: blueprint custom-policies
Change-Id: Icf3c961b87cbc7a1f1dd2ffbfffcf271d151d862
Murano sometimes fails to deploy due to a delay in haproxy
detecting that the API services have started. Instead of waiting for
all the services and checking for the existence of the core library
via the VIP, the deploy task should connect to one of the services
directly to test service status and to import the package.
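A hedged sketch of the idea: probe the first murano-api backend
directly instead of going through the VIP (the host and port lookups
here are illustrative):

  - name: Waiting for murano-api to respond on the first backend
    vars:
      # First host in the murano-api group; bypasses haproxy entirely.
      first_api_host: "{{ hostvars[groups['murano-api'][0]] }}"
    uri:
      url: "http://{{ first_api_host['ansible_' + api_interface]['ipv4']['address'] }}:{{ murano_api_port }}"
      status_code: 200
    register: result
    until: result.status == 200
    retries: 10
    delay: 5
    run_once: true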
Change-Id: I2934579edc910e81730dd89dbd8ff9eb11a2cc1c
Closes-Bug: #1634531
do_reconfigure.yml was introduced to make use of the serial directive,
but we were using it incorrectly. Now that serial has moved to the
playbook file, it is time to remove the do_reconfigure.yml file.
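For illustration, the serial directive now belongs at the play level,
roughly like this (play and role names are examples):

  - name: Apply role murano
    hosts: murano
    # serial is passed in as an extra var; "0" disables batching.
    serial: "{{ serial | default('0') }}"
    roles:
      - { role: murano, tags: murano }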
Closes-Bug: #1628152
Change-Id: I8d42d27e6bc302a0e575b0353956eaef9b2ca9fd
Useful for upgrades etc., which are preferably done serially.
Example usage: tools/kolla-ansible deploy OR tools/kolla-ansible upgrade
Closes-Bug: #1576708
DocImpact
Change-Id: I34b2e16f8ce53e472a4682a4738c4ac0f5abf00c
In order for Murano to be operational, the core library package must be
imported [0].
Add Ansible tasks to do this idempotently.
[0] http://docs.openstack.org/developer/murano/install/manual.html
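A sketch of what an idempotent import can look like (the exact command
invocation is illustrative and authentication options are omitted):

  # Only import the core library if it is not already present.
  - name: Listing Murano packages
    command: docker exec murano_api murano package-list
    register: murano_packages
    changed_when: false
    run_once: True

  - name: Importing the Murano core library package
    command: docker exec murano_api murano package-import --is-public io.murano
    run_once: True
    when: murano_packages.stdout.find("io.murano") == -1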
TrivialFix
Change-Id: I2c49e9d663595650b885267839012b543505337a
The notification driver should be configured to avoid timeout failures
of murano app deployments while waiting for notifications that will
never be sent.
The required driver is "messagingv2".
TrivialFix
Change-Id: Id0c753f50d93c81eedb2455a7323d86c08873c5f
Migrate to full variable syntax in with_ loops
instead of bare variables (see the example below) for:
- cinder
- haproxy
- ironic
- magnum
- mistral
- mongodb
- murano
- swift
- watcher
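For illustration, using a dummy task ("murano_services" is an example
variable, not necessarily one touched by this patch):

  # Deprecated bare-variable form:
  - debug:
      msg: "{{ item }}"
    with_items: murano_services

  # Full variable syntax:
  - debug:
      msg: "{{ item }}"
    with_items: "{{ murano_services }}"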
TrivialFix
Change-Id: I3ef2e79053cf609aaa710e43ffd0adbc5a97565b
Use a lower number of workers rather than the default value, which is
equal to the number of CPUs. Otherwise, in a multi-CPU environment,
the number of processes would be very high.
In this PS, we use min(5, <number of CPUs>) as the default worker
count.
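Expressed as an Ansible default, the cap looks roughly like this (the
variable name is illustrative):

  # min(5, number of CPUs) via the Jinja2 "min" filter.
  openstack_service_workers: "{{ [ansible_processor_vcpus, 5] | min }}"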
Closes-Bug: #1582254
Change-Id: I1c32cf0db794b43b8fb8be18f39190422ca5846f
Changed the order of the 'when' statements in the "remove/restart
containers" tasks. This fixes the reconfiguration problem when
deploying different components on different hosts.
Change-Id: Ibee9dd56b6128b664144deb1d9eb7ec32e39fd5c
Closes-Bug: #1603943
An operator may want to specify the location of custom config
files so that kolla can detect their location and merge
them with the generated default configs.
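A sketch of the merge using kolla's merge_configs action, assuming the
custom files live under node_custom_config (the paths are examples):

  - name: Merging and copying murano.conf
    merge_configs:
      sources:
        # Default template first, operator overrides layered on top.
        - "{{ role_path }}/templates/murano.conf.j2"
        - "{{ node_custom_config }}/murano.conf"
        - "{{ node_custom_config }}/murano/murano.conf"
      dest: "{{ node_config_directory }}/murano-api/murano.conf"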
Partially implements: blueprint multi-project-config
Change-Id: Ibfb38d07a36dfa7fe25381adc34cc1d3cbe7d1e1
This change makes each step of the kolla deployment aware
of the port the database was configured to listen on.
It defaults mariadb_port to database_port.
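Sketch of the variable layering (the value shown is a placeholder):

  # Deployment-wide database port; mariadb follows it by default.
  database_port: "3306"
  mariadb_port: "{{ database_port }}"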
Change-Id: I8e85d5732015afc0a5481cb33e0b629fdfa84a1b
Closes-Bug: #1576151
DocImpact
In Mitaka, the service name must have a dash rather than an underscore
when using the sql catalog driver in Keystone [0] (the default).
This works for upgrades as well, though further improvements could be
made to remove the old endpoint from Keystone and automatically choose
a dash or an underscore based on the driver type used.
[0] http://docs.openstack.org/releasenotes/murano/mitaka.html#upgrade-notes
Change-Id: I15a03370afdad6529eec51a206b6134bf80b283d
Closes-Bug: #1576152
Make sure that all the services will attempt to
connect to the database an infinite number of times.
If the database ever disappears for some reason, we
want the services to try to reconnect more than just
10 times.
Closes-bug: #1505636
Change-Id: I77abbf72ce5bfd68faa451bb9a72bd2544963f4b
The in-process cache for keystone tokens has been deprecated due to
"inconsistent results and high memory usage", with the expectation
that we switch to memcached_servers if we want to stay performant.
Add the memcache_servers option in the [cache] section to the
appropriate services, as the [DEFAULT] memcache_servers option was
deprecated.
TrivialFix
Related-Id: Ied2b88c8cefe5655a88d0c2f334de04e588fa75a
Change-Id: Ic971bdddc0be3338b15924f7cc0f97d4a3ad2440
This type of per-node configuration is required to support things like
availability zones for nova. As always, if this file doesn't exist it
doesn't get used, so this change is safe.
TrivialFix
Change-Id: Iff8172af522c2c96e5f2c173b24a5dfd4d522ed2
After our switch to keystone-manage bootstrap, Horizon is not happy
because v3 is not set up correctly. This patch fixes that.
It also includes the removal of unused variables (transforming them
into endpoint URL variables).
TrivialFix
Change-Id: I1e04db8c24049f80e974c063f03068a2ab32a563
Due to poor planning of our variable names, we have a situation where
"internal_address" must be a VIP, but "external_address" should be a
DNS name. Now, with two VIPs, "external_vip_address" is a new variable.
This corrects that issue by deprecating kolla_internal_address and
replacing it with 4 nicely named variables:
kolla_internal_vip_address
kolla_internal_fqdn
kolla_external_vip_address
kolla_external_fqdn
The default behaviour will remain the same, and with the way the
variable inheritance is set up, the kolla_internal_address variable can
still be set in globals.yml and propagate out to these 4 new variables
like it normally would, but all references to kolla_internal_address
have been completely removed.
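An illustrative globals.yml after this change (the address is a
placeholder and the exact defaults may differ):

  kolla_internal_vip_address: "10.10.10.254"
  # FQDN and external values fall back to the internal VIP by default.
  kolla_internal_fqdn: "{{ kolla_internal_vip_address }}"
  kolla_external_vip_address: "{{ kolla_internal_vip_address }}"
  kolla_external_fqdn: "{{ kolla_external_vip_address }}"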
Change-Id: I4556dcdbf4d91a8d2751981ef9c64bad44a719e5
Partially-Implements: blueprint ssl-kolla
The extend_start.sh script for rsyslog is removed as it is no longer
needed. Docker no longer binds to /dev/log or /run/kolla/log.
Closes-Bug: #1544545
Change-Id: Ic0a323a26ee4e9e15baf4598285844a8a4955f23
To allow for TLS to protect the service endpoints, the protocol
in the URLs for the endpoints will be either http or https.
This patch removes the hardcoded values of http and replaces them
with variables that can be adjusted accordingly in future patches.
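For example, a hardcoded endpoint can be rewritten against protocol
variables roughly like this (variable names are illustrative, not
necessarily the ones this patch introduces):

  internal_protocol: "http"
  external_protocol: "https"
  # The scheme is no longer hardcoded in the endpoint URL.
  murano_api_internal_endpoint: "{{ internal_protocol }}://{{ kolla_internal_address }}:{{ murano_api_port }}"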
Change-Id: Ibca6f8aac09c65115d1ac9957410e7f81ac7671e
Partially-implements: blueprint ssl-kolla
Docker 1.10 has broken the gate, and this patch corrects that
breakage.
The issue comes with rsyslog. Due to a commit in Docker 1.10 [1] we
must change the way we get the log socket for rsyslog: the /dev/
folder will no longer be populated the way we used it. So instead we
simply make a new socket in a path we control and share it into the
correct location in the containers.
Additionally, adjust the gate for the new Docker daemon.
[1] https://github.com/docker/docker/pull/16639
Partially-Implements: blueprint kolla-upgrade
Change-Id: I881a2ecdf6d7b35991e1d38a3f3e60d022d6577f
This change is needed for clarity. We have a kolla-ansible script.
We have a kolla-mesos repo. We plan to have a kolla-ansible repo.
Already we have had far too much confusion about whether we are
talking about the container or the project. Naming this kolla-toolbox
eliminates all of that confusion, and it is probably a more accurate
name too.
Closes-Bug: #1541053
Change-Id: I8fd1f49d5a22b36ede5b10f46b9fe02ddda9007e
Add a bootstrap label to all bootstrap containers to ensure that, when
a new container is launched, a difference is seen between it and the
bootstrap container, since we cannot rely on ENV variables for this.
This only affects mariadb at this stage, but it is needed to ensure
rabbitmq works when we switch to named volumes.
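A hedged sketch of what marking the bootstrap container can look like
(the kolla_docker options shown here are assumptions for illustration):

  - name: Running MariaDB bootstrap container
    kolla_docker:
      action: "start_container"
      common_options: "{{ docker_common_options }}"
      detach: False
      image: "{{ mariadb_image_full }}"
      # The label distinguishes this container from the runtime one.
      labels:
        BOOTSTRAP:
      name: "bootstrap_mariadb"
      restart_policy: "never"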
Change-Id: Ia022af26212d2e5445c06149848831037a508407
Closes-Bug: #1538136