This allows users to supply an Elasticsearch Curator actions file
to manage log retention [1]. Curator then runs as a cron job, which
by default runs daily. A default curator actions file is provided
and can be customised by the end user if required.
[1] https://www.elastic.co/guide/en/elasticsearch/client/curator/current/actionfile.html
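As a rough illustration of the kind of actions file Curator accepts
(the index prefix and retention period below are assumptions, not the
shipped defaults):

    actions:
      1:
        action: delete_indices
        description: Delete indices older than 30 days (assumed retention)
        options:
          ignore_empty_list: true
        filters:
          - filtertype: pattern
            kind: prefix
            value: log-
          - filtertype: age
            source: name
            direction: older
            timestring: '%Y.%m.%d'
            unit: days
            unit_count: 30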
Change-Id: Ide9baea9190ae849e61b9d8b6cff3305bdcdd534
Introduce kolla_address filter.
Introduce put_address_in_context filter.
Add address family (AF) config to vars.
Address contexts:
- raw (default): <ADDR>
- memcache: inet6:[<ADDR>]
- url: [<ADDR>]
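A minimal sketch of how these filters are meant to be used in vars and
templates (the variable names here are illustrative, not the exact
definitions):

    # Resolve this host's address on the 'api' network (IPv4 or IPv6
    # depending on the configured address family).
    api_interface_address: "{{ 'api' | kolla_address }}"
    # Render the address in the 'url' context, i.e. wrap IPv6 in brackets.
    database_address: "{{ api_interface_address | put_address_in_context('url') }}"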
Other changes:
globals.yml - mention just IP in comment
prechecks/port_checks (api_intf) - kolla_address handles validation
3x interface conditional (swift configs: replication/storage)
2x interface variable definition with hostname
(haproxy listens; api intf)
1x interface variable definition with hostname with bifrost exclusion
(baremetal pre-install /etc/hosts; api intf)
neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on tunnel network
basic multinode source CI job for IPv6
prechecks for rabbitmq and qdrouterd use proper NSS database now
MariaDB Galera Cluster WSREP SST mariabackup workaround
(socat and IPv6)
Ceph naming workaround in CI
TODO: probably needs documenting
RabbitMQ IPv6-only proto_dist
Ceph ms switch to IPv6 mode
Remove neutron-server ml2_type_vxlan/vxlan_group setting
as it is not used (let's avoid any confusion),
is IPv4-only, and could break setups without proper
multicast routing if it ever took effect.
haproxy upgrade checks for slaves based on ipv6 addresses
TODO:
ovs-dpdk grabs the IPv4 network address (with prefix length / subnet mask);
not supported, and invalid by default because neutron_external has no address.
No idea whether ovs-dpdk works at all at the moment.
ml2 for xenapi
Xen is not supported very well.
This would require working with XenAPI facts.
rp_filter setting
This would require meddling with ip6tables (there is no sysctl param).
By default nothing is dropped.
Unlikely we really need it.
ironic dnsmasq is configured IPv4-only
dnsmasq needs DHCPv6 options and testing in vivo.
KNOWN ISSUES (beyond us):
One cannot use IPv6 address to reference the image for docker like we
currently do, see: https://github.com/moby/moby/issues/39033
(docker_registry; docker API 400 - invalid reference format)
workaround: use hostname/FQDN
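For example, in globals.yml the registry would be referenced by name
rather than by a literal IPv6 address (the hostname below is a
placeholder):

    docker_registry: "registry.example.org:5000"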
RabbitMQ may fail to bind to IPv6 if hostname resolves also to IPv4.
This is due to old RabbitMQ versions available in images.
IPv4 is preferred by default and may fail in the IPv6-only scenario.
This should be no problem in real life as IPv6-only is indeed IPv6-only.
Also, when new RabbitMQ (3.7.16/3.8+) makes it into images, this will
no longer be relevant as we supply all the necessary config.
See: https://github.com/rabbitmq/rabbitmq-server/pull/1982
For reliable runs, at least Ansible 2.8 is required (2.8.5 confirmed
to work well). Older Ansible versions are known to miss IPv6 addresses
in interface facts. This may affect redeploys, reconfigures and
upgrades which run after the VIP address is assigned.
See: https://github.com/ansible/ansible/issues/63227
Bifrost Train does not support IPv6 deployments.
See: https://storyboard.openstack.org/#!/story/2006689
Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
Implements: blueprint ipv6-control-plane
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Sometimes, as cloud admins, we want to update only the code that is
running in a cloud, without doing anything else. Add an action to
kolla-ansible that allows us to do that.
Change-Id: I904f595c69f7276e71692696471e32fd1f88e6e8
Implements: blueprint deploy-containers-action
This allows the install type for the project to be different from
kolla_install_type.
This can be used to avoid hitting bug 1786238, since kuryr only supports
the source type.
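A sketch of the override this enables, e.g. in globals.yml (assuming
the per-project variable follows the usual <project>_install_type
naming):

    kolla_install_type: "binary"
    kuryr_install_type: "source"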
Change-Id: I2b6fc85bac092b1614bccfd22bee48442c55dda4
Closes-Bug: #1786238
Currently, we have a lot of logic for checking if a handler should run,
depending on whether config files have changed and whether the
container configuration has changed. As rm_work pointed out during
the recent haproxy refactor, these conditionals are typically
unnecessary - we can rely on Ansible's handler notification system
to only trigger handlers when they need to run. This removes a lot
of error prone code.
This patch removes conditional handler logic for all services. It is
important to ensure that we no longer notify handlers unnecessarily,
because without these checks in place any notification will trigger a
restart of the containers.
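The pattern relied on instead is plain handler notification; a sketch
(file and handler names are illustrative), where Ansible runs the
handler only when the task reports a change:

    - name: Copying over haproxy.cfg
      become: true
      template:
        src: "haproxy.cfg.j2"
        dest: "/etc/kolla/haproxy/haproxy.cfg"
      notify:
        - Restart haproxy container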
Implements: blueprint simplify-handlers
Change-Id: I4f1aa03e9a9faaf8aecd556dfeafdb834042e4cd
Patch [1] did not add extra volumes support for all services. In order
to unify the management of volumes, we need to add extra volumes
support for these remaining services.
[1] 12ff28a693
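The intended pattern, sketched with an illustrative service (variable
names follow the existing convention, not a fixed API):

    default_extra_volumes: []
    kibana_extra_volumes: "{{ default_extra_volumes }}"
    # appended to the container's volume list, e.g.:
    # volumes: "{{ kibana_default_volumes + kibana_extra_volumes }}"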
Change-Id: Ie148accdd8e6c60df6b521d55bda12b850c0d255
Partially-Implements: blueprint support-extra-volumes
Signed-off-by: ZijianGuo <guozijn@gmail.com>
Many tasks that use Docker already have become specified, but
not all. This change ensures all tasks that use the following
modules have become:
* kolla_docker
* kolla_ceph_keyring
* kolla_toolbox
* kolla_container_facts
It also adds become for 'command' tasks that use docker CLI.
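For example, such tasks now look roughly like this (module parameters
abbreviated, service name illustrative):

    - name: Starting the service container
      become: true
      kolla_docker:
        action: "start_container"
        common_options: "{{ docker_common_options }}"
        name: "example_service"
        image: "{{ example_service_image_full }}"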
Change-Id: I4a5ebcedaccb9261dbc958ec67e8077d7980e496
The synced flush fails due to concurrent indexing operations.
The HTTP status code in that case will be 409 CONFLICT. We can
retry this task until it returns success.
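A sketch of the retry (endpoint, host variables and retry counts are
illustrative):

    - name: Perform a synced flush
      uri:
        url: "http://{{ api_interface_address }}:{{ elasticsearch_port }}/_flush/synced"
        method: POST
        status_code: 200
      register: result
      until: result.status == 200
      retries: 10
      delay: 5
      run_once: true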
Change-Id: I57f9a009b12715eed8dfcf829a71f418d2ce437b
With this change, an operator is able to stop a single
service container without stopping all services on a host.
This change is the starting point for
fast-forward upgrades support.
In subsequent changes, new flags will be introduced to disable
stopping dataplane services during upgrades.
Change-Id: Ifde7a39d7d8596ef0d7405ecf1ac1d49a459d9ef
Implements: blueprint support-stop-containers
Having all services in one giant haproxy file makes altering
configuration for a service both painful and dangerous. Each service
should be configured with a simple set of variables and rendered with a
single unified template.
Two new templates are available:
* haproxy_single_service_listen.cfg.j2: close to the original style, but
only one service per file
* haproxy_single_service_split.cfg.j2: using the newer haproxy syntax
for separated frontend and backend
For now the default will be the single listen block, for ease of
transition.
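With this, a service describes its frontends with a small dict that the
unified template renders; a rough sketch (values illustrative):

    glance_services:
      glance-api:
        haproxy:
          glance_api:
            enabled: "{{ enable_glance }}"
            mode: "http"
            external: false
            port: "{{ glance_api_port }}"
          glance_api_external:
            enabled: "{{ enable_glance }}"
            mode: "http"
            external: true
            port: "{{ glance_api_port }}"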
Change-Id: I6e237438fbc0aa3c89a3c8bd706a53b74e71904b
This commit applies resource constraints to only a few OpenStack services.
Commits applying constraints to other services will follow.
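The constraints are expressed as per-service dimensions passed through
to Docker; a hedged example (variable name and values illustrative):

    nova_compute_dimensions:
      mem_limit: "4g"
      cpu_shares: 1024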
Partially-Implements: blueprint resource-constraints
Change-Id: Icafa54baca24d2de64238222a5677b9d8b90e2aa
Add become to all tasks that use the module "kolla_docker"
Change-Id: I4309c4011687b88ec31d739fd8f834fe2326ff10
Partial-Implements: blueprint ansible-specific-task-become
The default index isn't created automatically when using
ELK > 5.x. This commit adds a new task into the post-deploy
taskset to force the creation of the default index.
This patch also enforces the Kibana config to set the index as
defaultIndex without requiring the defaultIndex property to
already exist, preventing the error exposed in the public bug.
Also added is a new option named kibana_default_index_options, with
index.mapper.dynamic set to true by default; this option
can be extended by the operator and its settings will be applied to the index.
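The new option's default is effectively the following, and operators
may extend the dict with further index settings:

    kibana_default_index_options:
      index.mapper.dynamic: true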
Change-Id: Ica63a5fb30947f7ebae6cf8c80500a7dd0907211
Closes-Bug: #1772687
Signed-off-by: Jorge Niedbalski <jorge.niedbalski@linaro.org>
After enabling the Elasticsearch Debian/Ubuntu
images, the container doesn't start with ELK 5.4.1
as ES_HEAP_SIZE has been deprecated.
We should use ES_JAVA_OPTS with the -Xms/Xmx options instead.
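That is, the container environment should carry something along these
lines (the heap size value is illustrative):

    environment:
      ES_JAVA_OPTS: "-Xms1g -Xmx1g"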
Closes-Bug: #1772482
Change-Id: I9b368468d41421d679a9c4ad6fdf595863de7a1a
Signed-off-by: Jorge Niedbalski <jorge.niedbalski@linaro.org>
- rename action and serial to kolla_ansible and kolla_serial
- use become instead of "sudo <command>" in shell
- Remove quotes for failed_when and changed_when in rabbitmq tasks
Change-Id: I78cb60168aaa40bb6439198283546b7faf33917c
Implements: blueprint migrate-to-ansible-2-2-0
When using newer Elasticsearch versions you will receive the following error
when trying to start: elasticsearch | max virtual memory areas
vm.max_map_count [65530] likely too low, increase to at least [262144]
This can be solved by increasing the vm.max_map_count value via sysctl.
Reference: https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html
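A sketch of the fix as an Ansible task (the value comes from the error
message above):

    - name: Setting vm.max_map_count for Elasticsearch
      become: true
      sysctl:
        name: "vm.max_map_count"
        value: "262144"
        sysctl_set: yes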
Change-Id: I9fa7b0222104d68bdfc838637acf5e36c479e6b2
Closes-bug: #1719528
kolla-kubernetes uses its own configuration generation [0], so it is
time for kolla-ansible to remove the related code to simplify the
logic.
[0] https://github.com/openstack/kolla-kubernetes/tree/master/ansible
Change-Id: I7bb0b7fe3b8eea906613e936d5e9d19f4f2e80bb
Implements: blueprint clean-k8s-config
The wait_for module waits 300 seconds for the port to start or stop.
This is meaningless and useless in a precheck. This patch changes the
timeout to 1 second.
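A sketch of the precheck with the shorter timeout (host and port
variables are illustrative):

    - name: Checking free port for example service
      wait_for:
        host: "{{ api_interface_address }}"
        port: "{{ example_service_port }}"
        connect_timeout: 1
        timeout: 1
        state: stopped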
Change-Id: I9b251ec4ba17ce446655917e8ef5e152ef947298
Closes-Bug: #1688152
Add a new subcommand 'check' to kolla-ansible, used to run the
smoke/sanity checks.
Add stub files to all services that don't currently have checks.
Change-Id: I9f661c5fc51fd5b9b266f23f6c524884613dee48
Partially-implements: blueprint sanity-check-container
do_reconfigure.yml was introduced to use the serial directive, but we
were using it incorrectly. Now that serial has moved to the playbook
file, it is time to remove the do_reconfigure.yml file.
Closes-Bug: #1628152
Change-Id: I8d42d27e6bc302a0e575b0353956eaef9b2ca9fd
This PS modifies Elasticsearch and Kibana configuration templates
so genconfig can generate Kubernetes-compatible config for these
services.
TrivialFix
Change-Id: I6886307a163a9ca27a29e6dbdac4f9080948e4b5
Useful for upgrade etc., which is preferably done serially.
Example usage: tools/kolla-ansible deploy OR tools/kolla-ansible upgrade
Closes-Bug: #1576708
DocImpact
Change-Id: I34b2e16f8ce53e472a4682a4738c4ac0f5abf00c
Add Ansible reconfigure playbook to Elasticsearch role.
Add run condition to start playbook in Elasticsearch role.
Change-Id: I7862089cae55d392eb2d922f89a382d392cf8b97
Closes-Bug: #1616005
This patch includes changes relative to integrating Heka with
Elasticsearch and Kibana.
The main change is the addition of a Heka ElasticSearchOutput plugin
to make Heka send the logs it collects to Elasticsearch.
Since Logstash is not used the enable_elk deploy variable is renamed
to enable_central_logging.
If enable_central_logging is false then Elasticsearch and Kibana are
not started, and Heka won't attempt to send logs to Elasticsearch.
By default enable_central_logging is set to false. If
enable_central_logging is set to true after deployment then the Heka
container needs to be recreated (for Heka to get the new
configuration).
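To enable the stack after deployment, an operator would set something
like the following in globals.yml:

    enable_central_logging: "yes"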
The Kibana configuration used property names that are deprecated in
Kibana 4.2. This is changed to use non-deprecated property names.
Previously logs read from files and from Syslog had a different Type
in Heka. This is changed to always use "log" for the Type. In this
way just one index instead of two is used in Elasticsearch, making
things easier for the user on the visualization side.
The HAProxy configuration is changed to add entries for Kibana.
Kibana server is now accessible via the internal VIP, and also via
the external VIP if there's one configured.
The HAProxy configuration is changed to add an entry for
Elasticsearch. So Elasticsearch is now accessible via the internal
VIP. Heka uses that channel for communicating with Elasticsearch.
Note that currently the Heka logs include "Plugin
elasticsearch_output" errors when Heka starts. This occurs when Heka
starts processing logs while Elasticsearch is not yet started. These
are transient errors that go away when Elasticsearch is ready. And
with buffering enabled on the ElasticSearchOutput plugin, logs will be
buffered and then retransmitted when Elasticsearch is ready.
Change-Id: I6ff7a4f0ad04c4c666e174693a35ff49914280bb
Implements: blueprint central-logging-service
Due to poor planning of our variable names we have a situation where
"internal_address" must be a VIP, but "external_address"
should be a DNS name. Now, with two VIPs, "external_vip_address"
is a new variable.
This corrects that issue by deprecating kolla_internal_address and
replacing it with 4 nicely named variables.
kolla_internal_vip_address
kolla_internal_fqdn
kolla_external_vip_address
kolla_external_fqdn
The default behaviour will remain the same, and the way the variable
inheritance is setup the kolla_internal_address variable can still be
set in globals.yml and propagate out to these 4 new variables like it
normally would, but all reference to kolla_internal_address has been
completely removed.
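A hedged sketch of how the new variables relate (the address is a
placeholder; the defaults follow the inheritance described above):

    kolla_internal_vip_address: "10.10.10.254"
    kolla_internal_fqdn: "{{ kolla_internal_vip_address }}"
    kolla_external_vip_address: "{{ kolla_internal_vip_address }}"
    kolla_external_fqdn: "{{ kolla_external_vip_address }}"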
Change-Id: I4556dcdbf4d91a8d2751981ef9c64bad44a719e5
Partially-Implements: blueprint ssl-kolla
This should be later replaced with actual upgrade logic
Change-Id: I1c386a7f3bc0d15ebe4a47d2628833172a15f89b
Partially-implements: blueprint upgrade-kolla
Partially-implements: blueprint upgrade-elasticseatch
Part of the ELK stack. Includes Dockerfiles for both CentOS and Ubuntu.
Change-Id: I9f76adf084cd4f68e29326112b76ffd02b5adada
Partially-implements: blueprint central-logging-service