The find module searches paths on the managed server. Since the role
path and custom Kolla config are located on the deployment node, and the
deployment node is not considered a managed server, the Monasca plugin
files cannot be found. After deployment, the container running the
Monasca agent collector is stuck in a restart loop due to the missing
plugin files.
The problem does not occur if the deployment was started from a managed
server (e.g. OSC). It does occur if the deployment was started from a
separate deployment server - a common case.
This change enforces running the find module locally on the deployment
node.
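A minimal sketch of the resulting task shape (the plugin path and task
name are illustrative, not the actual role code):

    - name: Find custom Monasca plugin files
      find:
        paths: "{{ role_path }}/files/plugins"
      delegate_to: localhost
      register: monasca_plugins

The key point is delegate_to: localhost, which makes the find module run
on the deployment node where the files actually live.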
Change-Id: Ia25daafe2f82f5744646fd2eda2d255ccead814e
Signed-off-by: Bartosz Zurkowski <b.zurkowski@samsung.com>
In multinode deployments, creating the default Grafana organization
failed because Ansible attempted to call the Grafana API in the context
of each host in the inventory. After the organization was created via
the first host, subsequent attempts via the remaining hosts failed
because the organization already existed. This change enforces creating
the default organization only once.
Other tasks using the Grafana API have been enforced to run only once as
well.
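A hedged sketch of the pattern (the URL variable and request body are
assumptions, not the actual role code):

    - name: Create default Grafana organization
      uri:
        url: "{{ grafana_api_url }}/api/orgs"
        method: POST
        body_format: json
        body: {name: "Main Org."}
      run_once: true

With run_once: true the task executes on a single host per play instead
of once per host in the inventory.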
Change-Id: I3a93a719b3c9b4e55ab226d3b22d571d9a0f489d
Signed-off-by: Bartosz Zurkowski <b.zurkowski@samsung.com>
Nova services may reasonably expect cell databases to exist when they
start. The current cell setup tasks in kolla run after the nova
containers have started, meaning that cells may or may not exist in the
database when the services start, depending on timing. In particular, we
are currently seeing issues in kolla CI with jobs timing out waiting for
nova compute services to start. The following error is seen in the nova
logs of these jobs, which may or may not be relevant:
No cells are configured, unable to continue
This change creates the cell0 and cell1 databases prior to starting nova
services.
In order to do this, we must create new containers in which to run the
nova-manage commands, because the nova-api container may not yet exist.
This required adding support to the kolla_docker module for specifying a
command for the container to run that overrides the image's command.
We also add the standard output and error to the module's result when a
non-detached container is run. A secondary benefit of this is that the
output of bootstrap containers is now displayed in the Ansible output if
the bootstrapping command fails, which will help with debugging.
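A rough illustration of the new capability, assuming a task shaped like
the one below (the container and register names are illustrative):

    - name: Map cell0 before starting nova services
      kolla_docker:
        action: start_container
        name: nova_map_cell0
        image: "{{ nova_api_image_full }}"
        command: nova-manage cell_v2 map_cell0
        detach: False
      register: map_cell0

Because the container is not detached, its stdout and stderr now appear
in the registered result, per the change described above.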
Change-Id: I2c1e991064f9f588f398ccbabda94f69dc285e61
Closes-Bug: #1808575
xtrabackup does not work with MariaDB 10.3; it needs to be replaced
with the mariabackup tool. For now only Galera is migrated, not the
kolla-backup tool, in order to fix the CI.
https://jira.mariadb.org/browse/MDEV-15774
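The Galera part amounts to switching the SST method; a hedged sketch as
an ini_file task (the config path, and that the change is to
wsrep_sst_method, are assumptions):

    - name: Switch Galera SST method to mariabackup
      ini_file:
        path: /etc/kolla/mariadb/galera.cnf
        section: mysqld
        option: wsrep_sst_method
        value: mariabackup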
Change-Id: Ie77ae41e419873feed4b036a307887b22455183b
Depends-On: Icefe3a77fb12d57c869521000d458e3f58435374
When using ceilometer with gnocchi, ceilometer will update the resource
for every notification sample, even if the resource has not changed.
We should add a [cache] section to make ceilometer cache the resource
and stop sending useless update requests.
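A hedged sketch of the resulting configuration, written as an ini_file
task over the merged config (the path and cache backend are assumptions;
the option names come from oslo.cache):

    - name: Enable resource caching in ceilometer.conf
      ini_file:
        path: /etc/kolla/config/ceilometer.conf
        section: cache
        option: "{{ item.option }}"
        value: "{{ item.value }}"
      with_items:
        - {option: enabled, value: "true"}
        - {option: backend, value: dogpile.cache.memcached}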
Closes-Bug: #1807841
Change-Id: Ic33b4cd5ba8165c20878cab068f38a3948c9d31d
Vitrage already supports Prometheus as a datasource. Kolla can
configure it automatically; only small changes are needed, for example
in the wsgi config file [1].
[1] https://review.openstack.org/#/c/584649/8/devstack/apache-vitrage.template
Co-Authored-By: Hieu LE <hieulq2@viettel.com.vn>
Change-Id: I64028a0dfd9887813b980a31c30c2c1b1046da61
Prior to this change, when the --limit argument is used, each host in the
limit gathers facts for every other host. This is clearly unnecessary, and
can result in up to (N-1)^2 fact gathers.
This change gathers facts for each host only once. Hosts that are not in
the limit are divided between those that are in the limit, and facts are
gathered via delegation.
This change also factors out the fact gathering logic into a separate
playbook that is imported where necessary.
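A simplified sketch of the delegation approach (the real playbook also
splits the remaining hosts between the hosts in the limit):

    - name: Gather facts for hosts not in the limit
      setup:
      delegate_to: "{{ item }}"
      delegate_facts: True
      run_once: True
      with_items: "{{ groups['all'] | difference(ansible_play_hosts) }}"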
Change-Id: I923df5af41a7f1b7b0142d0da185a9a0979be543
Currently, every service has a play in site.yml that is executed, and
the role is skipped if the service is disabled. This can be slow,
particularly with many hosts, since each play takes time to set up and
evaluate.
This change creates various Ansible groups for hosts with services
enabled at the beginning of the playbook. If a service is disabled, this
new group will have no hosts, and the play for that service will be a
noop.
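As a sketch, the grouping can be done with group_by keyed on the enable
flag (haproxy here is just an example; the key format follows the
warning message quoted below):

    - name: Group hosts by whether HAProxy is enabled
      group_by:
        key: "enable_haproxy_{{ enable_haproxy | bool }}"

The haproxy play then targets hosts: enable_haproxy_True, which matches
no hosts when the service is disabled.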
I have tested this on a laptop using an inventory with 12 hosts (each
pointing to my laptop via SSH), and a config file that disables every
service. Time taken to run 'kolla-ansible deploy':
Before change: 2m30s
After change: 0m14s
During development I also tried an approach using an 'include_role' task
for each service. This was not as good, taking 1m00s.
The downsides of this patch are that there is a large number of tasks
at the beginning of the playbook to perform the grouping, and that every
play for a disabled service now outputs this warning message:
[WARNING]: Could not match supplied host pattern, ignoring: enable_foo_True
This is because if the service is disabled, there are no hosts in the
group. This seems like a reasonable tradeoff.
Change-Id: Ie56c270b26926f1f53a9582d451f4bb2457fbb67
This change adds support for configuring a TTY; it was enabled by
default, but a recent patch removed it. Some services, such as Karaf in
OpenDaylight, require a TTY during startup.
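A hypothetical usage sketch, assuming the new kolla_docker argument is
named tty:

    - name: Start opendaylight container with a TTY
      kolla_docker:
        action: start_container
        name: opendaylight
        image: "{{ opendaylight_image_full }}"
        tty: True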
Closes-Bug: #1806662
Change-Id: Ia4335523b727d0e45505cbb1efb40ccf04c27db7
When using external Ceph (enable_ceph=no and glance_backend_ceph=yes),
glance.conf should enable the rbd store.
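A hedged sketch of the intended glance.conf content, expressed as an
ini_file task (the path and the pool/user values are assumptions; the
option names come from glance_store):

    - name: Enable the rbd store in glance.conf
      ini_file:
        path: /etc/kolla/config/glance.conf
        section: glance_store
        option: "{{ item.option }}"
        value: "{{ item.value }}"
      with_items:
        - {option: default_store, value: rbd}
        - {option: rbd_store_pool, value: images}
        - {option: rbd_store_user, value: glance}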
Change-Id: Ia09cd57c829b00f28674cddf44fb55583e193d0f
Added the missing option neutron_plugin_agent: "opendaylight" to the
opendaylight documentation page. Without it the deployment would not use
the OpenDaylight agent but the default one: openvswitch.
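The documented setting, as named above:

    # globals.yml
    neutron_plugin_agent: "opendaylight"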
Change-Id: I56a377e1faab9a50f36383ea59b45bf5a9155bcf
When using external Ceph, the operator must create pools for each
service and configure keyrings with appropriate permissions. The
official Ceph docs describe this in detail, so point operators to them.
Change-Id: Ic3e52e1fbbf09ec09ac21b5b3067092b195812f1
Tested on Rocky: /v3 needs to be appended to the auth_url variable for
the trust/trustee mechanism to work. All cluster creation would fail
otherwise.
Closes-Bug: #1805896
Change-Id: Ieedac124fa22e5a7ae622c16d47d482007bbec60
We copy and paste the same play into various playbooks to detect
openstack_release. This change factors that code out into a separate
playbook that is imported where needed.
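A sketch of the resulting usage (the playbook file name is an
assumption):

    - import_playbook: detect-openstack-release.yml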
Change-Id: I5fea005642b960080bf5e43455618dc24766c386
Tested on Rocky: there are no admin_* variables, and some others are
missing (username/password/...), causing keystone to return HTTP 400
responses.
Change-Id: If4a0919bfcd6b8d8a6bfd5df9001b4967e441e7e
Closes-Bug: #1805714
According to the Karbor documentation, endpoints should be created with
"%(project_id)s" and not with "%(tenant_id)s". This is very important
because of a commit in Karbor which looks for the string "project_id".
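An illustrative endpoint definition showing the required substitution
(the protocol, host and port variables are assumptions):

    karbor_admin_endpoint: "{{ admin_protocol }}://{{ kolla_internal_fqdn }}:{{ karbor_api_port }}/v1/%(project_id)s"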
Change-Id: I8fc640891d0d58541198cc8f2e942d8db6e8d02f
Closes-Bug: #1805705
region_id has a default value hardcoded in the Karbor code, equal to
"RegionOne", which could be an issue if a different region is defined.
Change-Id: Ia13496156515d0f871e8fa9bd3584940a32759e9
Closes-Bug: #1798125
Remove mode "0660", because mode is not a supported parameter for
kolla_docker.
Change-Id: I1e3d690eb3cb5d61b1c88f6da2f9b10e2c5f3603
Closes-Bug: #1804702