The neutron-cleanup script was destroying the entire Neutron integration
bridge, including ports that are tagged to skip the cleanup process.
This change adds logic to skip those ports.
Change-Id: If77933310b5602c5e0d4197584d66d929fc4d8db
Closes-Bug: #1804288
According to the inventory examples[1], openshift_master_cluster_hostname
points to an internal hostname/address set on the load balancer, while
openshift_master_cluster_public_hostname points to the external one.
This change sets openshift_master_cluster_hostname to use the InternalApi
network instead of the External network, which it currently uses.
[1] https://docs.openshift.com/container-platform/3.11/install/example_inventories.html
Change-Id: I9efab5b07682efd6b03da433801d636e7d324619
OpenDaylight's Infrautils project has a new, recommended method for
checking when ODL is up and ready. Use the new diagstatus ODL NB REST
API endpoint instead of the old netvirt:1 endpoint.
ODL Jira that tracked adding diagstatus REST API:
https://jira.opendaylight.org/browse/INFRAUTILS-33
RH BZ tracking moving to diagstatus:
https://bugzilla.redhat.com/show_bug.cgi?id=1642270
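The readiness check amounts to polling the diagstatus endpoint until it
reports healthy. A minimal sketch as an Ansible task; the host variable,
port 8181, and credential variables are assumptions for illustration, not
the exact wiring used in the templates:

```yaml
# Hypothetical readiness poll against ODL's diagstatus REST API.
- name: Wait for OpenDaylight diagstatus to report ready
  uri:
    url: "http://{{ odl_host }}:8181/diagstatus"   # assumed default ODL port
    user: "{{ odl_username }}"
    password: "{{ odl_password }}"
    return_content: true
  register: diag
  until: diag.status == 200
  retries: 30
  delay: 10
```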
Change-Id: I44dc5ba7680a9c5db2d6070e813d9b0e31d6e811
Signed-off-by: Daniel Farrell <dfarrell@redhat.com>
Change I81bc48b53068c3a5ed90266a4fd3e62bfb017835 moved image fetching
and tagging for pacemaker-managed services from step 1 to step 2. Step 2
is also when the services are started, which likely introduced a race
condition in environments where the pacemaker cluster consists of more
than one machine.
During the deployment you can get a lot of pcmk failures like:
failed to pull image 192.168.24.1:8787/tripleomaster/centos-binary-mariadb:pcmklatest
This only happens on non-bootstrap nodes. On the bootstrap node the
order is still correct: first download and tag the image, then start the
pcmk resources. However, if non-bootstrap nodes are slower at
downloading and tagging, pacemaker there might start the resources
before the images are tagged (as the starting of resources is
controlled globally from the bootstrap node).
Change-Id: Id669cc9a296a8366c7c80a5ee509bdb964b62a04
Closes-Bug: #1805826
When an update changes NovaPassword, we need to run the
nova_api_ensure_default_cell container so it can update the
db_connection for the cells in the nova_api DB. Otherwise
nova_api_discover_hosts, which runs all the time, fails with a DB error
because the connection string in the database would not change.
Currently we mount neither /var/lib/config-data/nova nor
/var/lib/config-data/puppet-generated/nova, hence TRIPLEO_CONFIG_HASH
is not generated for the container, so it does not run during update
and may not run during upgrade either.
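The fix boils down to mounting the generated config into the container
so the deploy tooling computes a config hash for it. A sketch of the
container definition fragment; the image parameter and exact mount
paths are illustrative, not copied from the template:

```yaml
# Hypothetical fragment: mounting puppet-generated config causes
# TRIPLEO_CONFIG_HASH to be computed for this container, so a
# NovaPassword change triggers a re-run on update.
nova_api_ensure_default_cell:
  image: {get_param: DockerNovaApiImage}
  volumes:
    - /var/lib/config-data/nova:/var/lib/config-data/nova:ro
    - /var/lib/config-data/puppet-generated/nova/etc/nova:/etc/nova:ro
```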
Change-Id: I0a972796e45a8df614619c95e9d9be9af183b4e5
Closes-Bug: #1805803
Patch Iaf8e13490adffaf4a606730f4758d064af69b2aa does not completely fix
the problem with Gnocchi's temporary directory. Gnocchi is still not
able to create new subdirectories. This patch relaxes the directory
permissions.
Change-Id: I6b13b0c4aaaa96684b3c7b782a2a2f5ff79e7f39
Closes-Bug: #1799522
By default, the Compute role template sets the deprecated_param_ips
parameter in the roles data. This forces the use of the deprecated
names in parameter_defaults when using predictable IPs for the
ctlplane network.
To allow the user to use either the deprecated or the non-deprecated
role name parameter in parameter_defaults, extend the
ctlplane_fixed_ip_set condition with OR logic to test for data in
either the deprecated parameter or the new one.
In the server resource, use yaql to pick the first element that
is not empty. The non-deprecated parameter name is prioritized.
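The condition and selection logic can be sketched in Heat template
syntax roughly as follows; the parameter and resource names here are
illustrative placeholders, not the exact ones in the role template:

```yaml
conditions:
  # True when either the new or the deprecated fixed-IP parameter is set.
  ctlplane_fixed_ip_set:
    or:
      - not: {equals: [{get_param: ComputeIPs}, {}]}
      - not: {equals: [{get_param: NovaComputeIPs}, {}]}

resources:
  ComputeServer:
    type: OS::TripleO::ComputeServer   # placeholder resource type
    properties:
      # Pick the first non-empty value; the new parameter is listed
      # first, so it wins over the deprecated one.
      fixed_ips:
        yaql:
          expression: $.data.where($ != {} and $ != null).first()
          data:
            - {get_param: ComputeIPs}
            - {get_param: NovaComputeIPs}
```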
Change-Id: Iedc65064c5efaa618c3d54df10bf09296829efd2
Closes-Bug: #1805482
When Ironic uses the 'direct' deploy interface it requires
access to swift. To access swift it needs the storage
network.
Change-Id: Ie49b961bb276dff0e5afbf82b450caa57d17f6ff
For all containers where restart=always is configured and that are not
managed by Pacemaker (this part will be handled later), we remove these
containers at step 1 of post_upgrade_tasks.
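Such a removal can be sketched as a post_upgrade task; the container
name and removal command are placeholders (docker vs. podman depends on
the deployment):

```yaml
post_upgrade_tasks:
  # Hypothetical step-1 cleanup of a non-pacemaker-managed container
  # that was created with restart=always.
  - name: Remove leftover restart=always container
    when: step|int == 1
    shell: docker rm -f example_container || true
```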
Change-Id: Id446dbf7b0a18bd1d4539856e6709d35c7cfa0f0
For the isolated networks we use the subnets' host_routes
to set and get the routes for overcloud node interfaces.
This change adds this for the ctlplane interface as well.
Partial: blueprint tripleo-routed-networks-templates
Change-Id: Id4cf0cc17bc331ae27f8d0ef8f285050330b7be0
There is a deployment race where nova-placement fails to start if
the nova_api DB migrations have not finished before starting it.
We start nova-placement early to make sure it is up before the
nova-compute services get started. Since in the HA scenario there is
no sync between the nodes on the currently executed deployment step,
we might hit the situation that the placement service gets started
on controllers 1/2 while the nova_api DB sync is not yet finished on
controller 0.
We have two possibilities:
1) start placement later and verify that the nova-computes recover
correctly
2) verify that the DB migration on the nova_api DB finished before
starting nova-placement on the controllers
Option 2), which was addressed via https://review.openstack.org/610966,
showed problems:
a) the docker/podman container failed to start with a file-not-found
error, therefore this was reverted in https://review.openstack.org/619607
b) when the scripts were running on different controllers at the same
time, the way nova's db_version() is implemented caused issues, which
is being worked on in https://review.openstack.org/619622
This patch addresses 1): it moves the placement service start to step_4
and adds an additional task on the computes to wait until the placement
service is up.
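The compute-side wait can be sketched as an Ansible task; the endpoint
variable, port, and retry counts are illustrative, not the exact values
used in the templates:

```yaml
# Hypothetical wait-for-placement task run on computes before
# nova-compute starts. A 401 still proves the API is answering,
# since the probe is unauthenticated.
- name: Wait for the placement API to respond
  shell: curl -s -o /dev/null -w '%{http_code}' http://{{ placement_vip }}:8778/
  register: result
  until: result.stdout in ['200', '401']
  retries: 30
  delay: 10
```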
Closes-Bug: #1784155
Change-Id: Ifb5ffc4b25f5ca266560bc0ac96c73071ebd1c9f
The puppet aodh-api.yaml service uses the puppet
apache service. The apache service uses the CIDR
map in ServiceData.
The docker service did not pass ServiceData
to the puppet service template, so the
properties resolved to ''.
Change-Id: I736e0fa4191fa130f882b09eb87256c62ac69143