In a multi-controller deployment, kolla will generate the
"controller_ip_port_list" option in the [health_manager] section with
ONLY the IP of that node instead of a list of all controller IPs.
Therefore, the "amphora-agent.conf" file of an amphora instance will
contain the IP of ONLY ONE controller node.
If that node fails, the amphora agent won't send heartbeat messages to
the other health manager nodes, and the loadbalancer will go to ERROR
state.
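With the fix, the rendered option should list every controller's health
manager endpoint, for example (addresses are placeholders; 5555 is the
default health manager port):

    [health_manager]
    controller_ip_port_list = 192.0.2.10:5555, 192.0.2.11:5555, 192.0.2.12:5555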
Change-Id: I102ed6ba3fff2c12cc6d37f81ad59508eacc859c
Co-Authored-By: Hieu LE <hieulq2@viettel.com.vn>
Update the template so that if 'dns_interface' is set, named listens on
this interface as well as the 'api_interface'.
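A minimal sketch of the intended result, assuming the template renders
a standard BIND listen-on directive (addresses are placeholders for the
api_interface and dns_interface addresses):

    options {
        listen-on port 53 { 192.0.2.10; 198.51.100.10; };
    };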
Change-Id: I986ca46e5599e4767800fcc7f34a1c6e682efb55
Closes-Bug: 1808829
Kolla Ansible's bootstrap-servers command provides support for
installing the Docker engine. This is currently done using the packages
at https://apt.dockerproject.org and https://yum.dockerproject.org.
These packages are outdated, with the most recent packages from May 2017
- docker-engine-17.05.
The source of up-to-date Docker packages is
https://download.docker.com, which was introduced with the move to
Docker Community Edition (CE) and Docker Enterprise Edition (EE).
This change adds support to bootstrap-servers for Docker CE for CentOS
and Ubuntu.
It also adds a new variable, 'enable_docker_repo', which controls
whether a package repository for Docker will be enabled.
It also adds a new variable, 'docker_legacy_packages', which controls
whether the legacy packages at dockerproject.org will be used or the
newer packages at docker.com. The default value for this variable is
'false', meaning to use Docker CE.
Upgrading from docker-engine to docker-ce has been tested on CentOS 7.5
and Ubuntu 16.04, by running 'kolla-ansible bootstrap-servers' with
'docker_legacy_packages' set to 'false'. The upgrades were successful,
but resulted in all containers being stopped. For this reason, the
bootstrap-servers command checks running containers prior to upgrading
packages, and ensures they are running after the package upgrade is
complete.
As mentioned in the release note, care should be taken when upgrading
Docker with clustered services, which could lose quorum. To avoid this,
use --serial or --limit to apply the change in batches.
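For example, to move one batch of hosts to the Docker CE packages (the
values and host pattern below are illustrative):

    # /etc/kolla/globals.yml
    enable_docker_repo: true
    docker_legacy_packages: false

Then run 'kolla-ansible bootstrap-servers -i <inventory> --limit <hosts>'
for each batch.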
Change-Id: I6dfd375c868870f8646ef1a8f02c70812e8f6271
Implements: blueprint docker-ce
Add an enable_cinder_backend_quobyte option to etc/kolla/globals.yml to
enable use of the Quobyte Cinder backend.
Change the bind mounts for /var/lib/nova/mnt to include shared
propagation if Quobyte is enabled.
Update the documentation to include a section on configuring Cinder.
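For example, in etc/kolla/globals.yml (a minimal sketch of the new
option):

    enable_cinder_backend_quobyte: "yes"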
Implements: blueprint cinder-quobyte-backend
Change-Id: I364939407ad244fe81cea40f880effdbcaa8a20d
According to [1], Vitrage notifications have to be configured in the
Nova, Neutron, Cinder and Aodh config files.
[1] https://review.openstack.org/#/c/302802/
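A minimal sketch of the kind of setting this adds to each service's
configuration, assuming the standard oslo.messaging notification
options and a 'vitrage_notifications' topic:

    [oslo_messaging_notifications]
    driver = messagingv2
    topics = notifications,vitrage_notifications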
Change-Id: Iaf8cd7d40e6eb988adf4d208e6ad784f1004caa5
The find module searches paths on the managed server. Since the role
path and custom Kolla config are located on the deployment node, and
the deployment node is not considered to be a managed server, the
Monasca plugin files cannot be found. After deployment, the container
running the Monasca agent collector is stuck in a restart loop due to
the missing plugin files.
The problem does not occur if the deployment was started from a managed
server (e.g. OSC). The problem occurs if the deployment was started
from a separate deployment server - a common case.
This change enforces running the find module locally on the deployment
node.
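A minimal sketch of the approach, assuming an illustrative plugin path
and task name:

    - name: Find custom Monasca agent plugin files
      find:
        paths: "{{ node_custom_config }}/monasca/agent_plugins"  # illustrative path
      delegate_to: localhost
      register: monasca_plugins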
Change-Id: Ia25daafe2f82f5744646fd2eda2d255ccead814e
Signed-off-by: Bartosz Zurkowski <b.zurkowski@samsung.com>
In multinode deployments creating default Grafana organization failed,
because Ansible attempted to call Grafana API in the context of each
host in the inventory. After creating organization via the first host,
subsequent attempts via the remaining hosts failed due to already
existing organization. This change enforces creating default
organization only once.
Other tasks using the Grafana API have also been changed to run only
once.
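A minimal sketch of the pattern, with the Grafana API endpoint variable
and request body shown only for illustration:

    - name: Creating default Grafana organization
      uri:
        url: "{{ grafana_api_url }}/api/orgs"  # illustrative variable
        method: POST
        body_format: json
        body: {"name": "main"}
      run_once: true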
Change-Id: I3a93a719b3c9b4e55ab226d3b22d571d9a0f489d
Signed-off-by: Bartosz Zurkowski <b.zurkowski@samsung.com>
Nova services may reasonably expect cell databases to exist when they
start. The current cell setup tasks in kolla run after the nova
containers have started, meaning that cells may or may not exist in the
database when they start, depending on timing. In particular, we are
seeing issues in kolla CI currently with jobs timing out waiting for
nova compute services to start. The following error is seen in the nova
logs of these jobs, which may or may not be relevant:
No cells are configured, unable to continue
This change creates the cell0 and cell1 databases prior to starting nova
services.
In order to do this, we must create new containers in which to run the
nova-manage commands, because the nova-api container may not yet exist.
This required adding support to the kolla_docker module for specifying a
command for the container to run that overrides the image's command.
We also add the standard output and error to the module's result when a
non-detached container is run. A secondary benefit of this is that the
output of bootstrap containers is now displayed in the Ansible output if
the bootstrapping command fails, which will help with debugging.
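A minimal sketch of how the new 'command' override might be used; the
task, container and variable names below are illustrative, and only the
'command' parameter reflects the new capability described above:

    - name: Map cell0 before starting nova services
      kolla_docker:
        action: start_container
        name: nova_map_cell0
        image: "{{ nova_api_image_full }}"  # illustrative variable
        command: bash -c 'nova-manage cell_v2 map_cell0'
        detach: False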
Change-Id: I2c1e991064f9f588f398ccbabda94f69dc285e61
Closes-Bug: #1808575
xtrabackup does not work with MariaDB 10.3 and needs to be replaced
with the mariadb-backup tool.
For now only Galera is migrated, not the kolla backup tool, in order to
fix the CI.
https://jira.mariadb.org/browse/MDEV-15774
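A minimal sketch of the kind of Galera setting involved (section and
value follow the standard MariaDB SST options; the exact template
layout may differ):

    [mysqld]
    wsrep_sst_method = mariabackup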
Change-Id: Ie77ae41e419873feed4b036a307887b22455183b
Depends-On: Icefe3a77fb12d57c869521000d458e3f58435374
When using Ceilometer with Gnocchi, Ceilometer will update the Gnocchi
resource for every notification sample, even if the resource has not
changed.
We should add a [cache] section to make Ceilometer cache resources and
stop sending useless update requests.
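A minimal sketch of the section to add, assuming the usual oslo.cache
options and an illustrative memcached endpoint:

    [cache]
    enabled = True
    backend = oslo_cache.memcache_pool
    memcache_servers = 192.0.2.10:11211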
Closes-Bug: #1807841
Change-Id: Ic33b4cd5ba8165c20878cab068f38a3948c9d31d
Vitrage already supports Prometheus as a datasource. Kolla can
configure it automatically; only small changes are needed, for example
in the WSGI config file [1].
Co-Authored-By: Hieu LE <hieulq2@viettel.com.vn>
[1] https://review.openstack.org/#/c/584649/8/devstack/apache-vitrage.template
Change-Id: I64028a0dfd9887813b980a31c30c2c1b1046da61
This change adds support for configuring a TTY. It was enabled by
default, but a recent patch removed it. Some services, such as Karaf in
OpenDaylight, require a TTY during startup.
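A minimal sketch of the intended usage, assuming the option is exposed
as a 'tty' parameter on the container start task (other names below are
illustrative):

    - name: Start opendaylight container
      kolla_docker:
        action: start_container
        name: opendaylight
        image: "{{ opendaylight_image_full }}"  # illustrative variable
        tty: True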
Closes-Bug: #1806662
Change-Id: Ia4335523b727d0e45505cbb1efb40ccf04c27db7
When using external Ceph (enable_ceph=no and glance_backend_ceph=yes),
glance.conf should enable the rbd store.
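A minimal sketch of the resulting store configuration (pool, user and
paths are illustrative):

    [glance_store]
    stores = rbd
    default_store = rbd
    rbd_store_pool = images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf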
Change-Id: Ia09cd57c829b00f28674cddf44fb55583e193d0f
Remove mode "0660", because mode is not a supported parameter for
kolla_docker.
Change-Id: I1e3d690eb3cb5d61b1c88f6da2f9b10e2c5f3603
Closes-Bug: #1804702
With this change, an operator is able to stop a single service's
containers without stopping all services on a host.
This change is the starting point for fast-forward upgrade support.
In subsequent changes, new flags will be introduced to avoid stopping
data plane services during upgrades.
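For example, to stop only the containers of a single service (a hedged
sketch; the exact tag usage may differ):

    kolla-ansible stop -i <inventory> --tags nova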
Change-Id: Ifde7a39d7d8596ef0d7405ecf1ac1d49a459d9ef
Implements: blueprint support-stop-containers
blueprint database-backup-recovery
Introduce a new option, mariadb_backup, which takes a backup of all
databases hosted in MariaDB.
Backups are performed using XtraBackup, the output of which is saved to
a dedicated Docker volume on the target host (which defaults to the
first node in the MariaDB cluster).
It supports either full (the default) or incremental backups.
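For example (a hedged sketch of the intended usage; the flag name for
incremental backups is assumed):

    kolla-ansible mariadb_backup -i <inventory>
    kolla-ansible mariadb_backup -i <inventory> --incremental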
Change-Id: Ied224c0d19b8734aa72092aaddd530155999dbc3
The Glance cache is used to keep a locally cached copy of an image in
the glance_api service.
It is useful when an image is commonly used, as it reduces the time
between pulling the image from the storage backend and sending it to
Nova.
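A minimal sketch of the kind of Glance configuration involved (paths
and sizes are illustrative):

    [DEFAULT]
    image_cache_dir = /var/lib/glance/image-cache
    image_cache_max_size = 10737418240

    [paste_deploy]
    flavor = keystone+cachemanagement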
Change-Id: I8e684cc10e4fee1cb52c17a126e3b11f69576cf6
The configfs kernel module is not loaded by default in Ubuntu 16.04,
leading to the iscsid container failing to start because it bind mounts
/sys/kernel/config. The issue does not apply to Ubuntu 18.04, or other
distros (AFAIK), which load configfs by default.
This change loads the configfs module when the iscsid container is in
use.
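A minimal sketch of the approach, with an illustrative condition:

    - name: Load configfs kernel module
      become: true
      modprobe:
        name: configfs
        state: present
      when: enable_iscsid | bool  # illustrative condition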
Change-Id: I5b521ddca24b919658d2664ede2d878507d6d106
Closes-Bug: #1631072
The dnsmasq PXE filter [1] provides far better scalability than the
iptables filter typically used. Inspector manages files in a dhcp-hostsdir
directory that is watched by dnsmasq via inotify. Dnsmasq then either
whitelists or blacklists MAC addresses based on the contents of these
files.
This change adds a new variable, ironic_inspector_pxe_filter, that can
be used to configure the PXE filter for ironic inspector. Currently
supported values are 'iptables' and 'dnsmasq', with 'iptables' being the
default for backwards compatibility.
[1] https://docs.openstack.org/ironic-inspector/latest/admin/dnsmasq-pxe-filter.html
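For example, to switch to the dnsmasq filter in etc/kolla/globals.yml:

    ironic_inspector_pxe_filter: dnsmasq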
Implements: blueprint ironic-inspector-dnsmasq-pxe-filter
Change-Id: I73cae9c33b49972342cf1984372a5c784df5cbc2
OpenDaylight logs have a different format than OpenStack logs: they are
Karaf logs with Java error traces.
This PS adds the required config to make fluentd parse ODL logs
properly.
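A minimal sketch of the kind of fluentd input this requires; the path,
tag and multiline expressions below are illustrative, not the exact
configuration added:

    <source>
      @type tail
      path /var/log/kolla/opendaylight/karaf.log
      tag infra.opendaylight
      format multiline
      format_firstline /^\d{4}-\d{2}-\d{2}/
      format1 /^(?<Timestamp>\S+ \S+) \| (?<log_level>\S+)\s*\| (?<Payload>.*)$/
    </source>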
Change-Id: I34fb96c8a424679b3b618f2ff6a840b8dc165bec
At the moment the "databases user and setting permissions" task for
designate and nova leaks the database_password because of the use
of with_items:
---snip---
TASK [nova : Creating Nova databases user and setting permissions] *********************************************************
ok: [x -> y] => (item={u'database_password': u'password', u'database_name': u'nova', u'database_username': u'nova'})
ok: [x -> y] => (item={u'database_password': u'password', u'database_name': u'nova_cell0', u'database_username': u'nova'})
ok: [x -> y] => (item={u'database_password': u'password', u'database_name': u'nova_api', u'database_username': u'nova_api'})
---snap---
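A common Ansible pattern for hiding loop item contents is shown below
as a hedged sketch (module, variables and privileges are illustrative,
and not necessarily the exact fix applied here):

    - name: Creating Nova databases user and setting permissions
      mysql_user:
        name: "{{ item.database_username }}"
        password: "{{ item.database_password }}"
        priv: "{{ item.database_name }}.*:ALL"
      with_items: "{{ nova_database_list }}"  # illustrative variable
      no_log: true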
Change-Id: I141e4153223c8772c82a31d81e58057ce266c0b9
Co-authored-by: Bernd Müller <mueller@b1-systems.de>