The ceilometer-collector service has been removed from Kolla, so the
ceilometer-collector group is no longer needed in the inventory. This
patch removes it.
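For reference, the dropped block looks like the following sketch (the
child group is an assumption based on the other ceilometer groups):

    [ceilometer-collector:children]
    ceilometer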
Change-Id: Ib71b6819b5d475c7b55a2f4d13788f5c92841e10
The ceph-mgr service is mandatory in Ceph Luminous.
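A minimal inventory sketch, assuming ceph-mgr is co-located with the
monitors like the other Ceph daemon groups:

    [ceph-mgr:children]
    ceph-mon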
Depends-On: I875f84012a92d4f8b9dcb212d917cf61167270b8
Change-Id: I9418bf40a4bc3dcfc07c8b2eae17cb5779f5b444
Implements: blueprint ceph-luminous
This patch introduces inner-compute and external-compute node groups
to distinguish compute nodes which do not have external reachability
from compute nodes which can reach outside networks.
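As a sketch, the compute group is then composed of the two new groups
(host placement is left to the deployer):

    [compute:children]
    inner-compute
    external-compute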
Co-Authored-By: jinke <jin.ke@99cloud.net>
Co-Authored-By: yong sheng gong <gong.yongsheng@99cloud.net>
Change-Id: I45b945f7885e8243b017cf8607cbd7f9827cb6e9
Closes-bug: #1722026
When running a kolla-ansible command, globals.yml takes precedence
over the multinode inventory, so variables must be commented out in
globals.yml for the multinode inventory values to take effect.
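For example, leave the override commented out in globals.yml so the
per-host inventory value wins (api_interface is only illustrative):

    #api_interface: "eth1"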
Change-Id: I0fe389ac1e5155f9779284ecc5afb524743faf16
This fix removes unnecessary deep nesting of host groups for the
iscsid service in the case of Ironic hosts.
Before: iscsid -> ironic-conductor -> ironic -> control
After: iscsid -> ironic -> control
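Expressed as inventory children, the resulting chain is sketched as:

    [iscsid:children]
    ironic

    [ironic:children]
    control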
Change-Id: Ie5393368ecbd3830f0ca01233d7b4a8ba782619a
Closes-Bug: #1716935
Roll out the redis container in a master/slave configuration. Deploy
redis-sentinel and connect it to the redis cluster.
Redis is needed for the Mistral coordination backend.
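A sketch of the resulting coordination setting in mistral.conf (the
address is a placeholder; in practice it points at the deployed
Redis):

    [coordination]
    backend_url = redis://127.0.0.1:6379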
Partial-Bug: #1700591
Change-Id: Ic0269d0db10624925e7bcdbf0e33ae87b84a9cf2
Tacker has included a new conductor service to manage Mistral
workflows for VIM monitoring. Without the conductor, Tacker cannot
create VIMs.
This change reworks the tacker role to include the tacker-conductor
service.
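A sketch of the inventory wiring, assuming both services run on the
existing tacker hosts:

    [tacker-server:children]
    tacker

    [tacker-conductor:children]
    tacker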
Depends-On: I52778e86e4f2c297ead8d4b09983e5e38ca88c70
Closes-Bug: #1710874
Change-Id: I6901e919887551bedc9dba8983ac904e8c48c9ce
In the old implementation, if there is no external NTP server, only
one local chrony server is supported. If multiple chrony servers are
configured, chrony clients cannot sync with them.
In the new implementation:
* use the VIP to connect to the chrony servers, which ensures that
  multiple local chrony servers are supported.
* chrony servers depend on the VIP, so the chrony-server group should
  be the same as the haproxy group (see the sketch below).
* prevent chrony clients from syncing with themselves.
* change the owner of the chrony log folder to chrony:kolla.
* fix the keysfile path.
* use the chrony user for the CentOS and Ubuntu images.
* fix a permission issue with the /var/lib/chrony folder.
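The inventory wiring for the second point is sketched as:

    [chrony-server:children]
    haproxy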
Closes-Bug: #1705200
Change-Id: I6e85fda9824b5ddc7a96895425c5932a3566c27e
Add the role needed to run qdrouterd as an infrastructure component
providing a messaging backend for the oslo.messaging AMQP 1.0 driver.
The qdrouterd provides direct messaging capabilities for the RPC
messaging pattern in support of hybrid messaging deployments.
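A sketch of how a service points oslo.messaging at the router; the
amqp:// scheme selects the AMQP 1.0 driver (host, port and credentials
are placeholders):

    [DEFAULT]
    transport_url = amqp://openstack:password@10.0.0.10:5672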
Implements: blueprint qdrouterd-role
Change-Id: I74c654b3c70f61f81c2c7efa87f076a62a4a2dd8
mDNS publishes DNS services to designate service consumers. Since only
the network nodes should be reachable from public networks, mDNS runs
there.
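A sketch of the resulting placement (assuming the group name used by
the designate role is designate-mdns):

    [designate-mdns:children]
    network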
Change-Id: Id2947df89d2d831d67e006a581ac88b4ecf8ce04
Closes-Bug: #1693918
Implement an ansible role that adds Hyper-V as a compute node for
OpenStack using Kolla.
This will install and configure the Nova Compute service, the
Hyper-V Neutron agent and FreeRDP-WebConnect.
https://docs.openstack.org/ocata/config-reference/compute/hypervisor-hyper-v.html
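A sketch of the inventory entries for a Hyper-V host; Ansible reaches
Windows over WinRM, and all values below are placeholders:

    [hyperv]
    #hyperv_host

    [hyperv:vars]
    #ansible_user=user
    #ansible_password=password
    #ansible_port=5986
    #ansible_connection=winrm
    #ansible_winrm_server_cert_validation=ignore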
Change-Id: I601835b0769c5ff173a980a05a752391ae8cc82f
Implements: blueprint hyperv-ansible-role
Co-Authored-By: Alessandro Pilotti <apilotti@cloudbasesolutions.com>
Certain services such as Murano and Trove require access to a rabbitmq
instance from tenant networks. [0]
Exposing the internal rabbitmq to end users is a security hole, hence
there are two options: 1) use vhosts in the existing rabbitmq, or 2) a
separate rabbitmq instance. Given the importance of rabbitmq to the
OpenStack deployment, we have decided to go with a separate instance.
Refer to [1] for more detail on the various options.
This change makes the rabbitmq role generic so that it can be reused, in
this case to start 'outward_rabbitmq'. It needs to be exposed via
haproxy both for network isolation and also because this is what Murano
configuration requires.
Follow-on patches will add a vhost in this outward instance for Murano
and other services which require access.
Based on the original work by bdaca [2].
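A sketch of the inventory group for the new instance (placement on the
controllers is an assumption):

    [outward-rabbitmq:children]
    control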
[0] http://murano.readthedocs.io/en/stable-liberty/intro/architecture.html
[1] http://lists.openstack.org/pipermail/openstack-dev/2016-December/109091.html
[2] https://review.openstack.org/#/c/374525
Change-Id: Ib2bcc7ed4bf4f883a7cd1dfad3db89201e3cfd8d
Partial-Bug: #1620374
Depends-On: I020eb6219f89a310451becde41f6f1c7f54baadd
Co-Authored-By: Bartłomiej Daca <bartek.daca@gmail.com>
Kuryr needs etcd on each compute node to store network data, but etcd
is only deployed on controller nodes at the moment.
Also, this change removes useless bootstrap tasks.
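A sketch of the resulting etcd placement, assuming the etcd group is
simply extended to the compute nodes:

    [etcd:children]
    control
    compute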
Depends-On: I9c6c876773288c2f951966498db0ff8af090ac20
Change-Id: I8a84334e831fb15f6cbdd3bc34d2159638df6b85
Closes-Bug: #1697699
This patch introduces the Ansible material to deploy the skydive
service, which can be used to monitor and troubleshoot networking in
an OpenStack deployment.
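A sketch of a possible inventory placement, following skydive's
analyzer/agent split (the exact placement is an assumption):

    [skydive-analyzer:children]
    monitoring

    [skydive-agent:children]
    compute
    network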
Implements: blueprint skydive-service
Co-Authored-By: Nicolas Bouron <nicolas.bouron@gmail.com>
Signed-off-by: Mathieu Rohon <mathieu.rohon@gmail.com>
Change-Id: I53051a1b0c85380416288e17040a398b6efb62c0
Create an Openvswitch role, splitting openvswitch out of the Neutron
role to enable third-party networking solutions that use Open vSwitch
or customize it, for example Open vSwitch with DPDK or OpenDaylight.
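A sketch of the standalone group (assuming Open vSwitch runs wherever
Neutron agents do):

    [openvswitch:children]
    network
    compute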
Change-Id: I5a41c42c5ec0a5e6999b2570ddac0f5efc3102ee
Co-Authored-By: Mauricio Lima <mauriciolimab@gmail.com>
Partially-Implements: blueprint opendaylight-support
Change-Id: I13cf03d6a97fb94dd7cb309e99a417ad101dc21a
Co-Authored-By: Mauricio Lima <mauriciolimab@gmail.com>
Partially-implements: bp add-zun-ansible-role
Given keepalived runs on the network nodes, we should have a minimum
of two network nodes by default for high availability.
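In the multinode inventory this amounts to listing at least two hosts
(names are placeholders):

    [network]
    network01
    network02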
Change-Id: Ifbd68e456dc93319df8e85017fd9f4db09f05929
It is not currently possible to deploy Bifrost on a host other than the
Ansible control host. A deployer may want to manage an Ansible control
environment remotely from the Bifrost deployment host but is currently
unable to.
This change adds a new top level 'deployment' Ansible group and a
'bifrost' Ansible group containing the 'deployment' group. The Ansible
play in ansible/bifrost.yml is now targeted at the hosts in the
'bifrost' group. For backwards compatibility, the all-in-one and
multinode inventories add localhost to the deployment group. This allows
a deployer to deploy Bifrost on a remote host by modifying the hosts in
the deployment or bifrost groups.
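The resulting groups, as described above, with localhost kept for
backwards compatibility (ansible_connection=local is the usual way to
address localhost):

    [deployment]
    localhost ansible_connection=local

    [bifrost:children]
    deployment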
Change-Id: I76808feab5dd67dff63379ed9c7d08a105636acf
Closes-bug: #1665373
Implement an Ansible role to deploy designate and its dependencies.
The backend used is bind9.
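A sketch of the top-level inventory group (placement on the
controllers is an assumption):

    [designate:children]
    control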
Co-Authored-By: zhubingbing <zhubingbing10@gmail.com>
Co-Authored-By: Eduardo Gonzalez <dabarren@gmail.com>
Depends-On: 6d0dc3e0f931c7c50b64a4659900cc50b0d860a2
Implements: blueprint ansible-designate
Change-Id: I34d8126e0cd8d71d5ced9b62f3776cc354fbb549