When kolla-ansible bootstrap-servers is run, it executes one of the
following two tasks:
    - name: Ensure node_config_directory directory exists for user kolla
      file:
        path: "{{ node_config_directory }}"
        state: directory
        recurse: true
        owner: "{{ kolla_user }}"
        group: "{{ kolla_group }}"
        mode: "0755"
      become: True
      when: create_kolla_user | bool
    - name: Ensure node_config_directory directory exists
      file:
        path: "{{ node_config_directory }}"
        state: directory
        recurse: true
        mode: "0755"
      become: True
      when: not create_kolla_user | bool
Normally, on the first run node_config_directory (/etc/kolla/) does
not exist, so it is created with kolla:kolla ownership and 0755
permissions.
If we then run 'kolla-ansible deploy', config files are created for
containers in this directory, e.g. /etc/kolla/nova-compute/. Permissions
for those files should be set according to 'config_owner_user' and
'config_owner_group'.
If we later run kolla-ansible bootstrap-servers again, it will
recursively reset the ownership and permissions of all files in
/etc/kolla to kolla:kolla / 0755.
The solution is to change bootstrap-servers to not set the owner and
permissions recursively. It's also arguable that /etc/kolla should be
owned by 'config_owner_user' and 'config_owner_group', rather than
kolla:kolla, although that's a separate issue.
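For illustration, a minimal sketch of the non-recursive form of the
first task, reusing the variables shown above (the exact shape of the
final change may differ):

    - name: Ensure node_config_directory directory exists for user kolla
      file:
        path: "{{ node_config_directory }}"
        state: directory
        owner: "{{ kolla_user }}"
        group: "{{ kolla_group }}"
        mode: "0755"
      become: True
      when: create_kolla_user | bool

With 'recurse: true' removed, only /etc/kolla itself is affected, and
files created under it by 'kolla-ansible deploy' keep their
'config_owner_user' / 'config_owner_group' ownership.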
Change-Id: I24668914a9cedc94d5a6cb835648740ce9ce6e39
Closes-Bug: #1821599
After upgrading from Rocky to Stein, nova-compute services fail to start
new instances with the following error message:
    Failed to allocate the network(s), not rescheduling.
Looking in the nova-compute logs, we also see this:
    Neutron Reported failure on event
    network-vif-plugged-60c05a0d-8758-44c9-81e4-754551567be5 for instance
    32c493c4-d88c-4f14-98db-c7af64bf3324: NovaException: In shutdown, no new
    events can be scheduled
During the upgrade process, we send nova containers a SIGHUP to cause
them to reload their object version state. According to the nova team
in IRC, this is a known issue, caused by oslo.service performing a
full shutdown in response to a SIGHUP, which breaks nova-compute.
There is a patch [1] in review to address this.
The workaround employed here is to restart the nova compute service.
[1] https://review.openstack.org/#/c/641907
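As a sketch, the restart can be expressed with the kolla_docker
module's restart_container action (the container name follows Kolla's
usual naming convention):

    - name: Restart nova-compute to work around the SIGHUP issue
      become: true
      kolla_docker:
        action: "restart_container"
        name: "nova_compute"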
Change-Id: Ia4fcc558a3f62ced2d629d7a22d0bc1eb6b879f1
Closes-Bug: #1821362
Fixes a race condition where sometimes a volume would still be in the
'creating' state when trying to attach it to a server.
    Invalid volume: Volume <id> status must be available or downloading to
    reserve, but the current status is creating.
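The natural fix is to poll the volume until it leaves the 'creating'
state before attaching it. A minimal sketch as an Ansible task, with a
hypothetical volume named test_volume:

    - name: Wait for the volume to become available
      command: openstack volume show test_volume -f value -c status
      register: volume_status
      until: volume_status.stdout == "available"
      retries: 10
      delay: 5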
Change-Id: I0687ddfd78c384650cb361ff07aa64c5c3806a93
Services were being passed as a JSON list, then iterated over in the
neutron-server container's extend_start.sh script like this:
    ['neutron-server'
    'neutron-fwaas'
    'neutron-vpnaas']
I'm not actually sure why we have to specify services explicitly; it
seems liable to break if we have other plugins that need migrating.
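One way to avoid the quoting problem (a sketch only; the
NEUTRON_MIGRATION_SERVICES variable name is hypothetical) is to join
the list into a plain space-separated string before handing it to the
shell:

    # Jinja join, so the script sees
    # "neutron-server neutron-fwaas neutron-vpnaas"
    # rather than a Python-style list literal.
    NEUTRON_MIGRATION_SERVICES: "{{ ['neutron-server', 'neutron-fwaas', 'neutron-vpnaas'] | join(' ') }}"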
Change-Id: Ic8ce595793cbe0772e44c041246d5af3a9471d44
RDO packages placement-api with a bundled httpd config, which
conflicts with the one generated by kolla-ansible.
Change-Id: I018a4ed1b2282e8a789b63e3893e61db2fde8cf2
Reconfiguring Swift currently fails to restart containers if
configuration changes. This is because kolla_set_configs is executed in
the containers as the default swift user, which does not have permission
to access all necessary files.
This change uses the root user to execute the command instead, which
allows it to exit with the correct status of 1 if the config files
differ.
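For illustration, a sketch of running the check as root via docker
exec (the container name is illustrative, and the exact task in the
role may differ):

    - name: Check whether Swift config files differ
      become: true
      command: docker exec -u root swift_object_server /usr/local/bin/kolla_set_configs --check
      register: check_results
      # kolla_set_configs --check exits 1 when the config differs.
      changed_when: check_results.rc == 1
      failed_when: check_results.rc not in [0, 1]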
Change-Id: I2a2363c71430a7173bb5253662412ae5dba09654
Migrate to the latest Ubuntu LTS release, 18.04 (Bionic). See [0] for
the big picture.
Also test running tox jobs on Bionic.
[0] https://etherpad.openstack.org/p/devstack-bionic
Change-Id: I96e7b8d17bc1e92716c04fdcf362c2adb08a2212
All Prometheus services should use the Prometheus install type, which
defaults to the Kolla install type, rather than directly using the
Kolla install type.
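A sketch of the intended pattern in the Prometheus role defaults
(variable names follow the usual kolla-ansible convention, but are an
assumption here):

    # Fall back to the global install type unless overridden.
    prometheus_install_type: "{{ kolla_install_type }}"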
Change-Id: Ieaa924986dff33d4cf4a90991a8f34534cfc3468
The api_endpoint option was deprecated, and will be removed by
https://review.openstack.org/643483.
Change-Id: Ie56a8ab07ab21d2e7d678e636c1408099d8ab3aa
Fix the file mode in the merge_configs and merge_yaml action plugins
to be compatible with Python 3.
Change-Id: Ief64c5bdcd717141281e23c255a49ec02a96aef2
Closes-Bug: #1820134
Adds support for separating Swift access and replication traffic from
other storage traffic.
In a deployment where both Ceph and Swift have been deployed, this
change adds functionality to support optional separation of storage
network traffic. This adds two new network interfaces,
'swift_storage_interface' and 'swift_replication_interface', which
maintain backwards compatibility.
The Swift access network interface is configured via 'swift_storage_interface',
which defaults to 'storage_interface'. The Swift replication network
interface is configured via 'swift_replication_interface', which
defaults to 'swift_storage_interface'.
If a separate replication network is used, Kolla Ansible now deploys
separate replication servers for accounts, containers and objects,
which listen on this network. In this case, these services handle only
replication traffic, and the original account, container and object
servers handle only storage user requests.
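As an example, a deployment that keeps replication on a dedicated
network might set something like this in globals.yml (interface names
are illustrative):

    # Swift access (storage) traffic.
    swift_storage_interface: "eth1"
    # Swift replication traffic.
    swift_replication_interface: "eth2"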
Change-Id: Ib39e081574e030126f2d08f51de89641ddb0d42e
Commit 2f6b1c6890cf7ea6b0dd33ac219646e4dcaf1fd6 changed the way the
cephfs source path was generated and dropped the source path component,
keeping only the list of IPs and ports. This results in failures to
mount cephfs with the following message:
    source mount path was not specified
    failed to resolve source
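For reference, a cephfs mount source is expected to carry the path
component after the list of monitor addresses, e.g. (addresses
illustrative):

    192.168.0.10:6789,192.168.0.11:6789,192.168.0.12:6789:/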
Change-Id: I94d18ec064971870264ae8d0b279564f2172e548
Closes-Bug: #1819502
This patch implements support for the elasticsearch-exporter in
kolla-ansible.
The configuration and prechecks are reused from the other exporters.
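Assuming it follows the pattern of the existing exporters, enabling it
should amount to a single flag in globals.yml (the flag name is
inferred from that pattern):

    enable_prometheus_elasticsearch_exporter: "yes"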
Depends-On: Id138f12e10102a6dd2cd8d84f2cc47aa29af3972
Change-Id: Iae0eac0179089f159804490bf71f1cf2c38dde54
With newer Docker versions `systemctl show docker` returns:
    MountFlags=shared
instead of:
    MountFlags=1048576
This fix accepts either value as valid, to ensure the check does not
erroneously fail.
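A sketch of the relaxed check as an Ansible task (the task shape is
illustrative, not the exact precheck code):

    - name: Check Docker MountFlags
      command: systemctl show docker --property MountFlags
      register: mount_flags
      changed_when: false
      failed_when: mount_flags.stdout not in ['MountFlags=shared', 'MountFlags=1048576']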
Closes-Bug: #1791365
Change-Id: I2bd626466d6a0e189e0d85877b2be8f2b4bb37f4