Sometimes the keyserver has a mirror failure, which results in a
failed gate. Add a retry to help prevent that failure.
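A minimal sketch of such a retry, assuming an apt-key fetch is the
failing step (the keyserver URL and KEY_ID are illustrative):

    # Retry the key fetch a few times; transient keyserver mirror
    # failures usually succeed on a later attempt.
    for i in 1 2 3; do
        apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 \
            --recv-keys "${KEY_ID}" && break
        sleep 5
    done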
TrivialFix
Change-Id: I143626dd6d799b4ea0f82f6649d2155c2f45a115
Docker 1.10 has broken the gate, and this patch corrects that
breakage.
The issue is with rsyslog. Due to a commit in Docker 1.10 [1], we
must change the way we obtain the log socket for rsyslog: the /dev/
directory is no longer populated the way we relied on. Instead, we
create a new socket in a path we control and share it to the
expected location in the containers.
Additionally, adjust the gate for the new Docker daemon.
[1] https://github.com/docker/docker/pull/16639
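A minimal sketch of the sharing, assuming rsyslog creates its socket
under a host path we control (paths and image name are illustrative):

    # rsyslog listens on a socket in a directory we manage; bind-mount
    # that socket to /dev/log inside each service container so
    # syslog() calls still reach rsyslog.
    docker run -d --name my_service \
        -v /var/lib/kolla/rsyslog/log:/dev/log \
        my-service-image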
Partially-Implements: blueprint kolla-upgrade
Change-Id: I881a2ecdf6d7b35991e1d38a3f3e60d022d6577f
Remove the '..' from paths like:
/home/jenkins/workspace/gate-kolla-dsvm-deploy-centos-binary/tools/../ansible/inventory/all-in-one
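A minimal sketch of the normalization, assuming the inventory path is
derived from the script location (variable names are illustrative):

    # Resolve the tools/ directory's parent to an absolute path so the
    # generated inventory path contains no '..' components.
    KOLLA_DIR=$(cd "$(dirname "$0")/.." && pwd)
    INVENTORY="${KOLLA_DIR}/ansible/inventory/all-in-one"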
TrivialFix
Change-Id: I80724a9e876ed1826c65e08b55cfa08124d70eb9
The ubuntu-bootstrap.sh script rebooted my server with a message
asking to re-run the script, and on re-run it did the same thing
again.
This behavior is caused by the following check:
if [[ $(uname -r) != *"3.19"* ]]
Since the latest Ubuntu kernel version is 4.2.0-27-generic, the
script should be updated to accept kernel version 4.2.0-27-generic.
This patch fixes the issue.
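A sketch of a version comparison that avoids substring-matching a
single hard-coded release string (the minimum version is
illustrative):

    # Install a new kernel and reboot only when the running kernel is
    # older than the minimum we require.
    REQUIRED="3.19"
    RUNNING=$(uname -r | cut -d- -f1)
    OLDEST=$(printf '%s\n%s\n' "$REQUIRED" "$RUNNING" | sort -V | head -n1)
    if [[ "$OLDEST" != "$REQUIRED" ]]; then
        echo "Kernel ${RUNNING} is too old; installing a newer kernel."
        # ... install the new kernel and reboot ...
    fi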
Closes-Bug: #1541797
Change-Id: I01e98d80df60fe8c5f6ac6e644d42261fdd2921c
Libvirt stores some information in /run at runtime that is needed to
automatically re-establish a connection with the VMs when a new
container is created. Without this information, a long (and manual)
process is needed to redefine the running VMs and reattach to the
running qemu processes.
This mountpoint was removed as "unneeded" in the past, but it does
exist in the Liberty branch, enabling a no-VM-downtime upgrade.
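A minimal sketch of the restored mount, assuming the libvirt
container is launched with docker run (flags and image name are
illustrative):

    # Share the host's /run so libvirt state (per-VM definitions and
    # qemu monitor sockets under /run/libvirt) survives container
    # recreation, letting the new container reattach to running VMs.
    docker run -d --name nova_libvirt --net=host --privileged \
        -v /run:/run \
        kolla/centos-binary-nova-libvirt:latest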
TrivialFix
Change-Id: I2eb31c602d8d17cbd6a8e405daf4123070794843
The install type is converted in kolla-build, so the check can never
fail in the Dockerfile. Move the check to kolla-build, just above
the install type conversion.
TrivialFix
Co-Authored-By: Jeffrey Zhang <jeffrey.zhang@99cloud.net>
Change-Id: I1500d3b47e909f94ea9f68c5245297733f63a70b
This change is needed for clarity. We have a kolla-ansible script,
we have a kolla-mesos repo, and we plan to have a kolla-ansible
repo. We have already had far too much confusion about whether we
are talking about the container or the project. Naming this
container kolla-toolbox eliminates all of that confusion, and it's
probably a more accurate name too.
Closes-Bug: #1541053
Change-Id: I8fd1f49d5a22b36ede5b10f46b9fe02ddda9007e
A container may exit after deployment, but print_failure never runs
if the kolla-ansible run succeeds.
This PS checks the status of all containers after deploy and fails
the test if any container's status is Exited.
TrivialFix
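A minimal sketch of the check (the failure handling is reduced to a
message here):

    # After deploy, fail the gate if any container has exited, even
    # though kolla-ansible itself returned success.
    exited=$(docker ps -a --filter status=exited --format '{{.Names}}')
    if [[ -n "$exited" ]]; then
        echo "Containers exited after deploy: ${exited}"
        exit 1
    fi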
Change-Id: Ia461b280855eda500e143ee1d6cfd5f215eaf6fe
The current Swift playbook is based on the assumption of an AIO
setup. However, with the default multinode setup
(ansible/inventory/multinode), deployment follows the P+ACO model,
in which the proxy-server runs on controller nodes while the ACO
(account/container/object) services run on storage nodes.
That breaks because the Swift proxy-server no longer has access (nor
should it) to the /srv/node path. This change ensures the disk
mounting part happens only on storage nodes. It also moves the chown
from the proxy-server Dockerfile to rsyncd because, regardless of
the PACO, P+ACO, or P+A+C+O model, rsyncd always runs on each
storage node.
Change-Id: I3aa20454902caa9c84d3901bb91e4e4c93ac5f34
Partially-Implements: blueprint swift-physical-disk
Closes-Bug: #1537544