When the Ceph MDS and RBD services are enabled, they do not conflict
with the Swift service.
Fix the check condition to make sure both Ceph and Swift can coexist.
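As a rough illustration only (the flag names enable_ceph and
enable_swift and the task body are assumptions, not the exact condition
touched by this patch), a precheck condition of this kind is an Ansible
task whose 'when' clause combines both flags:
    - name: Checking Ceph and Swift related configuration   # illustrative
      fail:
        msg: "Please review the Ceph/Swift configuration"
      when:
        - enable_ceph | bool     # assumed variable name
        - enable_swift | bool    # assumed variable name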
Closes-Bug: #1747592
Change-Id: Icc6806125ce72992f7dff00c30d591ffb737a0c6
Signed-off-by: Tone.Zhang <tone.zhang@arm.com>
- Ceph images are not being built when the job uses Depends-On with a
  kolla build job.
- Sync inventory files with the current ones; otherwise Ceph fails due
  to missing groups.
- Small corrections to the ceph config.yml syntax.
- Fix disk preparsing so it happens only once.
- Enable Ceph NFS only when enable_ceph_nfs is true.
Co-Authored-By: Jeffrey Zhang <zhang.lei.fly@gmail.com>
Change-Id: Id0c7963bf59e2af4944834dcd16589a638e78ba5
Now that we have upgraded to Ceph Luminous, ceph-mgr needs to be started
during the upgrade from Pike (which ships Ceph Jewel).
Implements: blueprint ceph-luminous
Change-Id: I16ac0fc5d963b5725f9a19ecd396290fea7c0399
The ceph-mgr service is mandatory in Ceph Luminous.
Depends-On: I875f84012a92d4f8b9dcb212d917cf61167270b8
Change-Id: I9418bf40a4bc3dcfc07c8b2eae17cb5779f5b444
Implements: blueprint ceph-luminous
When ceph-mon and ceph-osd run on different hosts, for example ceph-mon
on a controller node and ceph-osd on a storage node, and the controller
node has no cluster_interface, kolla-ansible fails.
Closes-Bug: #1735775
Change-Id: I8d6bc66d41c544ab9e7e1b126127e25c70a22933
When deploying on Debian, the following error is reported:
stat /usr/bin/ansible: no such file or directory
That is because on Debian and Ubuntu pip installs ansible to
/usr/local/bin/ansible, whereas on CentOS the location is
/usr/bin/ansible.
Change the hard-coded path to plain 'ansible' to handle both cases.
Closes-Bug: #1729216
Depends-On: I2b57403128bc103148ae696c219df52590214adc
Change-Id: I025037cf48596450e6479ab7ff6425c48ac73aad
Signed-off-by: Xinliang Liu <xinliang.liu@linaro.org>
When deploying with TLS enabled on the public
endpoints, Ansible modules fail because the SSL certificates
are self-signed.
This change adds a new variable to allow customizing
which endpoints Ansible should connect to.
It defaults to admin because the admin auth parameters default
to the admin endpoint.
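As a hedged sketch, such a variable can be set in globals.yml roughly
like this (the name openstack_interface is an assumption used for
illustration; the actual variable name may differ):
    # Endpoint interface used by Ansible modules (assumed variable name)
    openstack_interface: "admin"    # "internal" or "public" also possible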
Change-Id: Ic3ed58cf9c9579cae08a11bbfe6fce983b5a9cbc
Closes-Bug: #1720995
In order to speed up deployment, some "local" actions should be run
only once using 'run_once: True'.
This decreases deployment time in multihost configurations.
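For example, a purely local action can be pinned to a single execution
roughly like this (the task body is illustrative):
    - name: Run a local preparation step only once
      command: echo "prepare something shared by all hosts"   # illustrative
      delegate_to: localhost
      run_once: True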
Change-Id: I6015d772d35c15e96c52f577013b6e41197cb41a
Ansible tasks support the vars directive, so there is no need to
implement another one in merge_config. This patch removes the vars
directive from the merge_config action plugin.
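Callers can instead rely on the native task-level vars keyword together
with the merge_configs plugin, roughly as sketched below (the paths and
the extra variable are illustrative, not taken from this patch):
    - name: Merging service configuration
      merge_configs:
        sources:
          - "{{ role_path }}/templates/service.conf.j2"   # illustrative path
        dest: "/etc/kolla/some-service/service.conf"      # illustrative path
      vars:
        service_port: 1234    # plain Ansible task vars, no plugin support needed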
Change-Id: I33648a2b6e39b4d49ce76eb66fbf2522721f8c68
always_run is deprecated and removed in Ansible 2.4.
check_mode was introduced in Ansible 2.2, and Kolla-ansible has bumped
Ansible to 2.2.0, so it is now safe to replace always_run with check_mode.
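A minimal before/after sketch (the task itself is illustrative):
    # Before: removed in Ansible 2.4
    - name: Run this task even under --check
      command: some_state_gathering_command    # illustrative
      always_run: True
    # After: supported since Ansible 2.2
    - name: Run this task even under --check
      command: some_state_gathering_command    # illustrative
      check_mode: False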
Change-Id: Id1028d38b7bde30a6afe17b319dcdc77907914ab
Closes-Bug: #1643633
Implements: blueprint migrate-to-ansible-2-2-0
Since the whole issue was about checking whether the user wants to wipe
a device, loopback devices can be opted out of these warnings.
Change-Id: Idd823b282e3055457ed041a98c848deb8509cc30
Closes-Bug: #1667074
Currently TCMalloc's default thread cache (TC) size is 32 MB.
This causes poor performance in Ceph storage.
A new ceph_tcmalloc_tc_bytes option has been added
with a default of 128 MB.
128 MB is the default TC size in Ceph Jewel and above;
if we do not set this config, the OSD daemon runs with 32 MB,
because 32 MB is the default size in TCMalloc 2.4.
Between 32 MB and 128 MB there is roughly a twofold performance
difference.
- reference: https://www.slideshare.net/Red_Hat_Storage/ceph-performance-projects-leading-up-to-jewel-61050682
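As a rough illustration of what this option controls (the wiring from
the variable to the daemon environment is an assumption; the
environment variable name is TCMalloc's standard one):
    # globals.yml
    ceph_tcmalloc_tc_bytes: "134217728"   # 128 MB thread cache
    # effectively exported to the OSD daemon, roughly as:
    # TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728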
Closes-Bug: #1693692
Change-Id: I0d25c92917b11a29bcfd18f9c129cae328fa2d3e
Signed-off-by: jangseon ryu <jangseon.ryu@navercorp.com>
[WARNING]: when statements should not include jinja2 templating
delimiters such as {{ }} or {% %}. Found: {{
(keystone_bootstrap.stdout | from_json).changed }}
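The fix is to drop the jinja2 delimiters inside the conditional, as a
sketch (the surrounding task is not shown):
    # Before: triggers the warning
    when: "{{ (keystone_bootstrap.stdout | from_json).changed }}"
    # After: a bare expression, no {{ }} delimiters
    when: (keystone_bootstrap.stdout | from_json).changed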
Closes-Bug: #1689550
Change-Id: Ib6fdbcde02319011b072990f06fbd5e74b8d2d93
The wait_for module waits 300 seconds for a port to start or stop. This
is pointless in a precheck. This patch changes the timeout to 1
second.
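A sketch of the kind of precheck task this refers to (the host and port
values are illustrative):
    - name: Checking that the port is free
      wait_for:
        host: "{{ ansible_hostname }}"   # illustrative
        port: 6789                       # illustrative
        state: stopped
        timeout: 1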
Change-Id: I9b251ec4ba17ce446655917e8ef5e152ef947298
Closes-Bug: #1688152
Generally we specify root as the user when deploying Ceph, and that is
no problem. But if we need to use a non-root account, the deployment
fails because a non-root account cannot use the mount command.
It is therefore necessary to add sudo for non-root accounts: when the
root account cannot be used to deploy Ceph for security reasons, a
non-root account can be used instead.
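As a hedged sketch of the kind of task this affects ('become' is how
Ansible applies sudo; the device and mount point are illustrative):
    - name: Mounting the Ceph OSD partition
      command: mount /dev/sdX1 /var/lib/ceph/osd/tmp-mount   # illustrative
      become: True    # needed so a non-root deploy user can run mount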
Change-Id: Iea1f30bcf8edbe15dc65909bbae780b55a669067
Closes-Bug: #1668823
Add a new subcommand 'check' to kolla-ansible, used to run the
smoke/sanity checks.
Add stub files for all services that do not currently have checks.
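The checks can then be run against an inventory, for example:
    kolla-ansible -i <inventory> check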
Change-Id: I9f661c5fc51fd5b9b266f23f6c524884613dee48
Partially-implements: blueprint sanity-check-container
This change adds a variable "kolla_ceph_use_udev" to the Ceph role,
which is True by default, meaning no change to the current behaviour. If
set to False, the role falls back to tools such as sgdisk/blkid to parse
the disk info it needs, instead of using udev.
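For example, an operator who wants the non-udev path can set it
(typically in globals.yml) like this:
    kolla_ceph_use_udev: False   # fall back to sgdisk/blkid for disk info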
Change-Id: I88d7b73efe27f04bb1ba16d61e101fa14a9f0d81
Depends-On: I6ad7825cdb164498f3d02f2ae064c7c1c38e10d5
Closes-Bug: #1631949