This change is needed for clarity. We have a kolla-ansible script.
We have a kolla-mesos repo. We plan to have a kolla-ansible repo.
Already we have had far too much confusion about whether we are
talking about the container or the project. Naming this kolla-toolbox
eliminates all of that confusion, and it's probably a somewhat more
accurate name too.
Closes-Bug: #1541053
Change-Id: I8fd1f49d5a22b36ede5b10f46b9fe02ddda9007e
A container may exit after being deployed, but print_failure never
runs if the kolla-ansible run succeeds.
This PS checks the status of all containers after deploy and fails the
test if any container's status is exited.
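A minimal sketch of such a check, assuming the gate drives it with an
Ansible task and that any exited container is enough to fail the run
(task and variable names are illustrative, not the actual gate code):

    # Sketch only: fail the post-deploy test if any container has exited.
    - name: Check for exited containers after deploy
      command: docker ps --filter status=exited --quiet
      register: exited_containers
      changed_when: false
      failed_when: exited_containers.stdout != ""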
TrivialFix
Change-Id: Ia461b280855eda500e143ee1d6cfd5f215eaf6fe
The current Swift playbook assumes an AIO setup. However, if one goes
with the default multinode setup (ansible/inventory/multinode), it
follows the P+ACO deployment model, in which the proxy-server runs on
controller nodes while the ACO (account/container/object) services run
on storage nodes. That breaks because the Swift proxy-server no longer
has access (nor should it) to the /srv/node path. This change ensures
the disk mounting part only happens on storage nodes. It also moves
the chown from the proxy-server Dockerfile to rsyncd, because
regardless of the PACO, P+ACO, or P+A+C+O model, rsyncd always runs on
every storage node.
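As a rough illustration only (the group and variable names below are
assumptions, not the literal playbook content), the mounting tasks can
be restricted to storage hosts by conditioning on inventory group
membership:

    # Sketch: mount Swift data disks only on storage nodes so the
    # proxy-server hosts never need access to /srv/node.
    - name: Mount Swift storage disks
      mount:
        name: "/srv/node/{{ item.fs_label }}"
        src: "{{ item.device }}"
        fstype: xfs
        state: mounted
      with_items: "{{ swift_storage_disks }}"
      when: inventory_hostname in groups['swift-object-server']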
Change-Id: I3aa20454902caa9c84d3901bb91e4e4c93ac5f34
Partially-Implements: blueprint swift-physical-disk
Closes-Bug: #1537544
Add back the missing Swift proxy-server image, which was removed in
the Swift shared image change.
TrivialFix
Change-Id: Icf13d4a1550192f73e266a6c6aa74f604ee4e77a
This change lets Swift detect and use physical disks for storage. The
old named volume for storage isn't really useful for any serious
setup. Also updated swift-guide.rst accordingly.
Change-Id: I4f577b7b69d8bcd8b3961500946241c65a16db22
Partially-Implements: blueprint swift-physical-disk
Steve is tired of maintaining a copr for Magnum. People bug him
all the time to update the rpm for RDO. The RDO community has offered
to take on the maintenance of the Magnum RPM. As this RPM won't be in
current-passed-ci for some time, it needs to be pulled from the
current repo for the foreseeable future, possibly until near the
release of Mitaka.
Change-Id: I9cfb02ab828251ef5bf40ca236f18b5f0f715e34
Closes-Bug: #1539325
Add a bootstrap label to all bootstrap containers to ensure that when
a new container is launched, it can be distinguished from the
bootstrap container, since we cannot rely on ENV variables for this.
This only affects mariadb at this stage, but it is needed to ensure
rabbitmq works when we switch to named volumes.
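A hedged sketch of such a bootstrap task using the labels parameter
described in the following message (argument names and the image
variable are illustrative, not the literal task):

    # Sketch: mark the one-shot bootstrap container with a label so a
    # later run can tell it apart from the real service container.
    - name: Bootstrap mariadb database
      kolla_docker:
        action: start_container
        name: bootstrap_mariadb
        image: "{{ mariadb_image_full }}"
        labels:
          BOOTSTRAP:
        restart_policy: never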
Change-Id: Ia022af26212d2e5445c06149848831037a508407
Closes-Bug: #1538136
With the switch to named volumes we run into a few situations where
we cannot bootstrap volumes like we used to. This labels param will
fix that as the next patchset shows.
Change-Id: Ia93166dd204c5c0d1a0eb9ffeb6d0aba486e269a
Partially-Implements: blueprint docker-named-volumes
There is no reason to have a hostname-unique pidfile in the container
as we currently do. This posed problems with kolla-mesos reusing the
same script. Since there is no reason for this pidfile's path to be
configurable _at_ _all_, we hardcode the path.
Additionally, we adjust the file perm change to only update the perms
on the folder if they are not already properly set.
This also incorporates a kolla-ansible file in the bootstrap process,
which follows our other container techniques of using the idempotent
creation of a volume in the bootstrap process (see nova).
TrivialFix
Related-Bug: #1538136
Change-Id: I2380529fc7146a9603145cdc31e649cb8841f7dd
Since the fetch script fetched _all_ keyrings from the ceph-mon
container, the ceph-mon container must contain all keyrings. This
setup works for AIO but was broken on multinode because the ceph-mon
container did not have the radosgw keyring. This issue affects every
multinode install, regardless of whether the radosgw is used.
TrivialFix
Change-Id: Ie416de1a5275862da6d77ef0dd174e85e499fc0f
$(hostname) is used as the Ceph Monitor name in extend_start.sh, while
{{ ansible_hostname }} is used as the Ceph Monitor name in ceph.conf.
$(hostname) is not always equal to ansible_hostname, which prevents
the ceph_mon container from starting.
Closes-Bug: #1538870
Change-Id: I312bf8d74c855aa4c72f12285e3092df96f60048
Currently the only consumer of the Ansible find_disks module is Ceph,
and Ceph OSD deployment in Kolla uses the GPT partition label to
detect and identify disks for Ceph OSD use. This does not hold for all
deployments.
The change here extends the find_disks module by:
- adding a `name` argument to find disks by matching either the
  partition name or the filesystem label
- making the `partition_name` argument an alias of `name`
- adding a `match_mode` argument to allow prefix matching, which is
  used for Swift disk detection
- returning an `fs_label` key/value in the result for disk mounting
  purposes (a usage sketch follows)
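A usage sketch under the extended interface (the label value is an
example; only the argument names listed above come from this change):

    # Sketch: detect Swift data disks by filesystem-label prefix instead
    # of a GPT partition name; each matched disk carries an fs_label in
    # the result for later mount tasks to consume.
    - name: Find Swift storage disks
      find_disks:
        name: "KOLLA_SWIFT_DATA"
        match_mode: "prefix"
      register: swift_storage_disks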
Change-Id: I9c93400c1826f5148acf09e9fbe555e358dfdfcc
Partially-Implements: blueprint swift-physical-disk
We use a TCP connection rather than a socket, so we can remove the
config options related to it.
Additionally, adjust the _extremely_ verbose logging from INFO to
WARNING.
TrivialFix
Change-Id: I88bf660134192f11732d012985df5c4f688419ba
After the introduction of the pull action and the turning of every
main.yml into {{action}}.yml, we lost the ability to perform an
upgrade.
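For reference, with per-action task files each role's main.yml reduces
to a dispatch on the requested action, so the upgrade action can only
work if each role has an upgrade.yml for that include to resolve
(sketch, assuming the action variable carries the requested action):

    # Sketch: main.yml delegates to the per-action task file, so every
    # role needs deploy.yml, pull.yml, upgrade.yml, etc.
    - include: "{{ action }}.yml"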
Change-Id: Id6b5921bd1e3e7b196c4b3223920e51ae5e0b840
Closes-Bug: #1538210