Config updates for elk 7.x

Updated the ELK config files to the ELK 7.x reference samples, carrying over
the existing customisations from elk_metrics_6x.

Removed the deprecated use of --pipeline in elastic_beat_setup/tasks/main.yml;
--pipeline is no longer a valid CLI argument.

Updated logstash-pipelines and removed the dynamic insertion of the date into
index names. This is now handled by the new ILM (index lifecycle management)
feature in Elasticsearch rather than by Logstash.

Installing each beat creates an ILM policy for that beat; this patch does not
change the default policy. The default policy may exhaust the available
storage, so future work is needed to address this.
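As a rough illustration of the ILM mechanism described above (the policy name, rollover size, and retention age below are assumptions for the sketch, not values set by this patch), a beat-style ILM policy with a delete phase could look like this:

```shell
# Hypothetical ILM policy with a delete phase; the default policies created
# by `<beat> setup --index-management` roll indices over but never delete
# them, which is why the default policy can eventually exhaust storage.
cat > /tmp/metricbeat-ilm-policy.json <<'EOF'
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {"max_size": "50gb", "max_age": "30d"}
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {"delete": {}}
      }
    }
  }
}
EOF
# Against a live cluster this would be applied with (commented out here,
# since it needs a running Elasticsearch):
# curl -XPUT "http://127.0.0.1:9200/_ilm/policy/metricbeat" \
#      -H 'Content-Type: application/json' \
#      -d @/tmp/metricbeat-ilm-policy.json
echo "policy written to /tmp/metricbeat-ilm-policy.json"
```

A delete phase like this is one way the "future work" mentioned above could cap storage use.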

The non-beat elements of the logstash pipeline (syslog, collectd and others)
have not yet been updated to be compatible with ILM.

Change-Id: I735b64c2b7b93e23562f35266134a176a00af1b7
Georgina Shippey 2019-07-09 12:06:25 +01:00 committed by Jonathan Rosser
parent 4e9c5c5c39
commit 68664a9dc1
51 changed files with 2118 additions and 1589 deletions

View File

@@ -246,7 +246,7 @@ Copy the env.d file into place
.. code-block:: bash
-   cd /opt/openstack-ansible-ops/elk_metrics_6x
+   cd /opt/openstack-ansible-ops/elk_metrics_7x
    cp env.d/elk.yml /etc/openstack_deploy/env.d/
Copy the conf.d file into place
@@ -312,7 +312,7 @@ deploy logstash, deploy Kibana, and then deploy all of the service beats.
.. code-block:: bash
-   cd /opt/openstack-ansible-ops/elk_metrics_6x
+   cd /opt/openstack-ansible-ops/elk_metrics_7x
    ansible-playbook site.yml $USER_VARS
@@ -332,7 +332,7 @@ deploy logstash, deploy Kibana, and then deploy all of the service beats.
.. code-block:: bash
-   ln -s /opt/openstack-ansible/inventory/group_vars /opt/openstack-ansible-ops/elk_metrics_6x/group_vars
+   ln -s /opt/openstack-ansible/inventory/group_vars /opt/openstack-ansible-ops/elk_metrics_7x/group_vars
The individual playbooks found within this repository can be independently run
@@ -434,7 +434,7 @@ configuration file using the key/value pairs as options.
        - server1.local:9092
        - server2.local:9092
        - server3.local:9092
-      client_id: "elk_metrics_6x"
+      client_id: "elk_metrics_7x"
      compression_type: "gzip"
      security_protocol: "SSL"
      id: "UniqueOutputID"
@@ -472,7 +472,7 @@ See the grafana directory for more information on how to deploy grafana. Once
When deploying grafana, source the variable file from ELK in order to
automatically connect grafana to the Elasticsearch datastore and import
dashboards. Including the variable file is as simple as adding
-``-e @../elk_metrics_6x/vars/variables.yml`` to the grafana playbook
+``-e @../elk_metrics_7x/vars/variables.yml`` to the grafana playbook
run.
Included dashboards.
@@ -485,7 +485,7 @@ Example command using the embedded Ansible from within the grafana directory.
.. code-block:: bash
    ansible-playbook ${USER_VARS} installGrafana.yml \
-     -e @../elk_metrics_6x/vars/variables.yml \
+     -e @../elk_metrics_7x/vars/variables.yml \
      -e 'galera_root_user="root"' \
      -e 'galera_address={{ internal_lb_vip_address }}'
@@ -566,7 +566,7 @@ state variable, `elk_package_state`, to latest.
.. code-block:: bash
-   cd /opt/openstack-ansible-ops/elk_metrics_6x
+   cd /opt/openstack-ansible-ops/elk_metrics_7x
    ansible-playbook site.yml $USER_VARS -e 'elk_package_state="latest"'
@@ -582,7 +582,7 @@ execution.
.. code-block:: bash
-   cd /opt/openstack-ansible-ops/elk_metrics_6x
+   cd /opt/openstack-ansible-ops/elk_metrics_7x
    ansible-playbook site.yml $USER_VARS -e 'elastic_retention_refresh="yes"'
@@ -593,7 +593,7 @@ If everything goes bad, you can clean up with the following command
.. code-block:: bash
-   openstack-ansible /opt/openstack-ansible-ops/elk_metrics_6x/site.yml -e 'elk_package_state="absent"' --tags package_install
+   openstack-ansible /opt/openstack-ansible-ops/elk_metrics_7x/site.yml -e 'elk_package_state="absent"' --tags package_install
    openstack-ansible /opt/openstack-ansible/playbooks/lxc-containers-destroy.yml --limit elk_all
@@ -616,14 +616,14 @@ deployed to the environment as if this was a production installation.
After the test build is completed the cluster will test its layout and ensure
processes are functioning normally. Logs for the cluster can be found at
-`/tmp/elk-metrics-6x-logs`.
+`/tmp/elk-metrics-7x-logs`.
To rerun the playbooks after a test build, source the `tests/manual-test.rc`
file and follow the onscreen instructions.
To clean up a test environment and start from a bare server slate the
`run-cleanup.sh` script can be used. This script is destructive and will purge
-all `elk_metrics_6x` related services within the local test environment.
+all `elk_metrics_7x` related services within the local test environment.
.. code-block:: bash

View File

@@ -1,13 +1,13 @@
---
- name: systemd_service
  scm: git
-  src: https://git.openstack.org/openstack/ansible-role-systemd_service
+  src: https://opendev.org/openstack/ansible-role-systemd_service
  version: master
- name: systemd_mount
  scm: git
-  src: https://git.openstack.org/openstack/ansible-role-systemd_mount
+  src: https://opendev.org/openstack/ansible-role-systemd_mount
  version: master
- name: config_template
  scm: git
-  src: https://git.openstack.org/openstack/ansible-config_template
+  src: https://opendev.org/openstack/ansible-config_template
  version: master

View File

@@ -15,6 +15,9 @@
  hosts: "elastic-logstash[0]"
  become: true
+  roles:
+    - role: elastic_data_hosts
  vars:
    _elastic_refresh_interval: "{{ (elasticsearch_number_of_replicas | int) * 5 }}"
    elastic_refresh_interval: "{{ (_elastic_refresh_interval > 0) | ternary(30, _elastic_refresh_interval) }}"
@@ -24,9 +27,6 @@
  environment: "{{ deployment_environment_variables | default({}) }}"
-  roles:
-    - role: elastic_retention
  post_tasks:
    - name: Create beat indexes
      uri:
@@ -41,7 +41,7 @@
      delay: 30
      with_items: |-
        {% set beat_indexes = [] %}
-        {% for key, value in elastic_beat_retention_policy_hosts.items() %}
+        {% for key, value in elastic_beats.items() %}
        {% if ((value.hosts | length) > 0) and (value.make_index | default(false) | bool) %}
        {%
          set _index = {
@@ -124,7 +124,7 @@
        index_option:
          index_patterns: >-
            {{
-              (elastic_beat_retention_policy_hosts.keys() | list)
+              (elastic_beats.keys() | list)
              | map('regex_replace', '(.*)', '\1-' ~ '*')
              | list
            }}
@@ -152,7 +152,7 @@
        order: 1
        settings:
          number_of_replicas: "{{ elasticsearch_number_of_replicas | int }}"
-          number_of_shards: "{{ ((elasticsearch_number_of_replicas | int) * 2) + 1 }}"
+          number_of_shards: 1
    - name: Create custom skydive index template
      uri:
@@ -171,7 +171,7 @@
        order: 1
        settings:
          number_of_replicas: "{{ elasticsearch_number_of_replicas | int }}"
-          number_of_shards: "{{ ((elasticsearch_number_of_replicas | int) * 2) + 1 }}"
+          number_of_shards: 1
- name: Create/Setup known indexes in Kibana - name: Create/Setup known indexes in Kibana
@@ -183,10 +183,10 @@
  environment: "{{ deployment_environment_variables | default({}) }}"
  roles:
-    - role: elastic_retention
+    - role: elastic_data_hosts
  post_tasks:
-    - name: Create kibana indexe patterns
+    - name: Create kibana index patterns
      uri:
        url: "http://127.0.0.1:5601/api/saved_objects/index-pattern/{{ item.name }}"
        method: POST
@@ -198,7 +198,7 @@
        kbn-xsrf: "{{ inventory_hostname | to_uuid }}"
      with_items: |-
        {% set beat_indexes = [] %}
-        {% for key, value in elastic_beat_retention_policy_hosts.items() %}
+        {% for key, value in elastic_beats.items() %}
        {% if (value.hosts | length) > 0 %}
        {%
          set _index = {
@@ -219,7 +219,7 @@
        {% set _ = beat_indexes.append(_index) %}
        {% endif %}
        {% endfor %}
-        {% set _ = beat_indexes.append({'name': 'default', 'index_options': {'attributes': {'title': '*'}}}) %}
+        {% set _ = beat_indexes.append({'name': 'default', 'index_options': {'attributes': {'timeFieldName': '@timestamp', 'title': '*'}}}) %}
        {{ beat_indexes }}
      register: kibana_indexes
      until: kibana_indexes is success
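For reference, the saved-objects call the play above issues per beat can be sketched as a plain shell command (the `metricbeat` pattern name and local Kibana address are assumptions for illustration; a live Kibana is required to actually POST):

```shell
# Build the payload the task posts for one beat index pattern; note the
# timeFieldName attribute this patch adds so Kibana gets a time field.
payload='{"attributes": {"timeFieldName": "@timestamp", "title": "metricbeat-*"}}'
# With Kibana listening on 127.0.0.1:5601 this would be posted as
# (commented out here, since it needs a running Kibana):
# curl -XPOST "http://127.0.0.1:5601/api/saved_objects/index-pattern/metricbeat" \
#      -H 'Content-Type: application/json' \
#      -H 'kbn-xsrf: any-value' \
#      -d "$payload"
echo "$payload"
```

The `kbn-xsrf` header is required by the Kibana saved objects API; the play derives its value from the inventory hostname, but any non-empty value works.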

View File

@@ -1,30 +0,0 @@
---
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Install Curator
hosts: "elastic-logstash"
become: true
gather_facts: true
vars:
haproxy_ssl: false
vars_files:
- vars/variables.yml
environment: "{{ deployment_environment_variables | default({}) }}"
roles:
- role: elastic_curator
tags:
- beat-install

View File

@@ -15,7 +15,7 @@
# Each setup flag is run one at a time.
elastic_setup_flags:
-  - "--template"
+  - "--index-management"
  - "--pipelines"
#  - "--dashboards"

View File

@@ -41,7 +41,11 @@
        sed -i 's@"id": "{{ elastic_beat_name }}\-\*",@"id": "{{ elastic_beat_name }}",@g' /usr/share/{{ elastic_beat_name }}/kibana/6/index-pattern/*.json
        {% endif %}
        {{ elastic_beat_name }} setup
-        {{ item }}
+        {% if elastic_beat_name == "heartbeat" and item == "--index-management" -%}
+        --template
+        {%- else -%}
+        {{ item }}
+        {%- endif %}
        {{ elastic_beat_setup_options }}
        -e -v
      with_items: "{{ elastic_setup_flags }}"
@@ -53,10 +57,10 @@
      delay: 5
      run_once: true
      when:
-        - ((ansible_local['elastic']['setup'][elastic_beat_name + '_loaded_templates'] is undefined) or
+        - (((ansible_local['elastic']['setup'][elastic_beat_name + '_loaded_templates'] is undefined) or
          (not (ansible_local['elastic']['setup'][elastic_beat_name + '_loaded_templates'] | bool))) or
          ((elk_package_state | default('present')) == "latest") or
-          (elk_beat_setup | default(false) | bool)
+          (elk_beat_setup | default(false) | bool)) and not (elastic_beat_name == "heartbeat" and item == "--pipelines")
      tags:
        - setup
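The flag handling in the two hunks above can be sketched in plain shell (the beat names and flag list come from the task; `render_setup` and the rendered command lines are an illustration, not the role's literal output):

```shell
# Emulate the Jinja branch added by this patch: heartbeat does not support
# --index-management (fall back to --template) and has no ingest pipelines
# to load (--pipelines is skipped by the `when` clause).
render_setup() {
  beat="$1"; shift
  for flag in "$@"; do
    if [ "$beat" = "heartbeat" ] && [ "$flag" = "--index-management" ]; then
      flag="--template"   # heartbeat fallback
    fi
    if [ "$beat" = "heartbeat" ] && [ "$flag" = "--pipelines" ]; then
      continue            # heartbeat skips pipeline setup entirely
    fi
    echo "$beat setup $flag -e -v"
  done
}

render_setup metricbeat --index-management --pipelines
render_setup heartbeat  --index-management --pipelines
```

For metricbeat this prints both setup invocations unchanged; for heartbeat only a single `heartbeat setup --template -e -v` line is produced.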

View File

@@ -1,25 +0,0 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Enable and restart curator.timer
systemd:
name: "curator.timer"
enabled: true
state: restarted
when:
- (elk_package_state | default('present')) != 'absent'
- ansible_service_mgr == 'systemd'
tags:
- config

View File

@@ -1,34 +0,0 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
galaxy_info:
author: OpenStack
description: Elastic v6.x curator role
company: Rackspace
license: Apache2
min_ansible_version: 2.5
platforms:
- name: Ubuntu
versions:
- trusty
- xenial
- bionic
categories:
- cloud
- development
- elasticsearch
- elastic-stack
dependencies:
- role: elastic_retention

View File

@@ -1,46 +0,0 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Run the systemd service role
include_role:
name: systemd_service
vars:
systemd_service_enabled: "{{ ((elk_package_state | default('present')) != 'absent') | ternary(true, false) }}"
systemd_service_restart_changed: false
systemd_user_name: curator
systemd_group_name: curator
systemd_services:
- service_name: "curator"
execstarts:
- /opt/elasticsearch-curator/bin/curator
--config /var/lib/curator/curator.yml
/var/lib/curator/actions-age.yml
timer:
state: "started"
options:
OnBootSec: 30min
OnUnitActiveSec: 12h
Persistent: true
- service_name: "curator-size"
execstarts:
- /opt/elasticsearch-curator/bin/curator
--config /var/lib/curator/curator.yml
/var/lib/curator/actions-size.yml
timer:
state: "started"
options:
OnBootSec: 30min
OnUnitActiveSec: 1h
Persistent: true

View File

@@ -1,32 +0,0 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Create cron job for curator (age)
cron:
name: "Run curator"
minute: "0"
hour: "1"
user: "curator"
job: "/opt/elasticsearch-curator/bin/curator --config /var/lib/curator/curator.yml /var/lib/curator/actions-age.yml"
cron_file: "elasticsearch-curator"
- name: Create cron job for curator (size)
cron:
name: "Run curator"
minute: "0"
hour: "*/5"
user: "curator"
job: "/opt/elasticsearch-curator/bin/curator --config /var/lib/curator/curator.yml /var/lib/curator/actions-size.yml"
cron_file: "elasticsearch-curator"

View File

@@ -1,103 +0,0 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Gather variables for each operating system
include_vars: "{{ item }}"
with_first_found:
- "{{ ansible_distribution | lower }}-{{ ansible_distribution_version | lower }}.yml"
- "{{ ansible_distribution | lower }}-{{ ansible_distribution_major_version | lower }}.yml"
- "{{ ansible_os_family | lower }}-{{ ansible_distribution_major_version | lower }}.yml"
- "{{ ansible_distribution | lower }}.yml"
- "{{ ansible_os_family | lower }}-{{ ansible_distribution_version.split('.')[0] }}.yml"
- "{{ ansible_os_family | lower }}.yml"
tags:
- always
- name: Refresh local facts
setup:
filter: ansible_local
gather_subset: "!all"
tags:
- always
- name: Ensure virtualenv is installed
package:
name: "{{ curator_distro_packages }}"
state: "{{ elk_package_state | default('present') }}"
update_cache: "{{ (ansible_pkg_mgr == 'apt') | ternary('yes', omit) }}"
tags:
- package_install
- name: Create the virtualenv (if it does not exist)
command: "virtualenv --never-download --no-site-packages /opt/elasticsearch-curator"
args:
creates: "/opt/elasticsearch-curator/bin/activate"
- name: Ensure curator is installed
pip:
name: "elasticsearch-curator<6"
state: "{{ elk_package_state | default('present') }}"
extra_args: --isolated
virtualenv: /opt/elasticsearch-curator
register: _pip_task
until: _pip_task is success
retries: 3
delay: 2
tags:
- package_install
- name: create the system group
group:
name: "curator"
state: "present"
system: "yes"
- name: Create the curator system user
user:
name: "curator"
group: "curator"
comment: "curator user"
shell: "/bin/false"
createhome: "yes"
home: "/var/lib/curator"
- name: Create curator data path
file:
path: "{{ item }}"
state: directory
owner: "curator"
group: "curator"
mode: "0755"
recurse: true
with_items:
- "/var/lib/curator"
- "/var/log/curator"
- "/etc/curator"
- name: Drop curator conf file(s)
template:
src: "{{ item.src }}"
dest: "{{ item.dest }}"
with_items:
- src: "curator.yml.j2"
dest: /var/lib/curator/curator.yml
- src: "curator-actions-age.yml.j2"
dest: /var/lib/curator/actions-age.yml
- src: "curator-actions-size.yml.j2"
dest: /var/lib/curator/actions-size.yml
notify:
- Enable and restart curator.timer
- include_tasks: "curator_{{ ansible_service_mgr }}.yml"

View File

@@ -1,65 +0,0 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
{% set action_items = [] -%}
{# Delete index loop #}
{% for key in (ansible_local['elastic']['retention']['elastic_beat_retention_policy_keys'] | from_yaml) -%}
{% set delete_indices = {} -%}
{# Total retention size in days #}
{% set _index_retention = ansible_local['elastic']['retention']['elastic_' + key + '_retention'] -%}
{% set index_retention = ((_index_retention | int) > 0) | ternary(_index_retention, 1) | int %}
{% set _ = delete_indices.update(
{
'action': 'delete_indices',
'description': 'Prune indices for ' + key + ' after ' ~ index_retention ~ ' days',
'options': {
'ignore_empty_list': true,
'disable_action': false
}
}
)
-%}
{% set filters = [] -%}
{% set _ = filters.append(
{
'filtertype': 'pattern',
'kind': 'prefix',
'value': key
}
)
-%}
{% set _ = filters.append(
{
'filtertype': 'age',
'source': 'name',
'direction': 'older',
'timestring': '%Y.%m.%d',
'unit': 'days',
'unit_count': index_retention
}
)
-%}
{% set _ = delete_indices.update({'filters': filters}) -%}
{% set _ = action_items.append(delete_indices) -%}
{% endfor -%}
{% set actions = {} -%}
{% for action_item in action_items -%}
{% set _ = actions.update({loop.index: action_item}) -%}
{% endfor -%}
{# Render all actions #}
{% set curator_actions = {'actions': actions} -%}
{{ curator_actions | to_nice_yaml(indent=2) }}

View File

@@ -1,63 +0,0 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
{% set action_items = [] -%}
{# Delete index loop #}
{% for key in (ansible_local['elastic']['retention']['elastic_beat_retention_policy_keys'] | from_yaml) -%}
{% set delete_indices = {} -%}
{# Total retention size in gigabytes #}
{% set _index_size = ((ansible_local['elastic']['retention']['elastic_' + key + '_size'] | int) // 1024) -%}
{% set index_size = ((_index_size | int) > 0) | ternary(_index_size, 1) | int %}
{% set _ = delete_indices.update(
{
'action': 'delete_indices',
'description': 'Prune indices for ' + key + ' after index is > ' ~ index_size ~ 'gb',
'options': {
'ignore_empty_list': true,
'disable_action': false
}
}
)
-%}
{% set filters = [] -%}
{% set _ = filters.append(
{
'filtertype': 'pattern',
'kind': 'prefix',
'value': key
}
)
-%}
{% set _ = filters.append(
{
'filtertype': 'space',
'disk_space': index_size,
'use_age': true,
'source': 'creation_date'
}
)
-%}
{% set _ = delete_indices.update({'filters': filters}) -%}
{% set _ = action_items.append(delete_indices) -%}
{% endfor -%}
{% set actions = {} -%}
{% for action_item in action_items -%}
{% set _ = actions.update({loop.index: action_item}) -%}
{% endfor -%}
{# Render all actions #}
{% set curator_actions = {'actions': actions} -%}
{{ curator_actions | to_nice_yaml(indent=2) }}

View File

@@ -1,32 +0,0 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
client:
hosts:
- {{ ansible_host }}
port: {{ elastic_port }}
url_prefix: ""
use_ssl: false
ssl_no_validate: true
http_auth: ""
timeout: 120
master_only: true
logging:
loglevel: INFO
logfile: /var/log/curator/curator
logformat: default
blacklist:
- elasticsearch
- urllib3

View File

@@ -1,17 +0,0 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
curator_distro_packages:
- python-virtualenv

View File

@@ -1,17 +0,0 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
curator_distro_packages:
- python-virtualenv

View File

@@ -1,17 +0,0 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
curator_distro_packages:
- python-virtualenv

View File

@@ -1,18 +0,0 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
curator_distro_packages:
- python-virtualenv
- virtualenv

View File

@@ -15,6 +15,17 @@
{% set _ = icmp_hosts.extend([hostvars[host_item]['ansible_host']]) %}
{% endif %}
{% endfor %}
+# Define a directory to load monitor definitions from. Definitions take the form
+# of individual yaml files.
+heartbeat.config.monitors:
+  # Directory + glob pattern to search for configuration files
+  path: ${path.config}/monitors.d/*.yml
+  # If enabled, heartbeat will periodically check the config.monitors path for changes
+  reload.enabled: false
+  # How often to check for changes
+  reload.period: 5s
# Configure monitors
heartbeat.monitors:
- type: icmp  # monitor type `icmp` (requires root) uses ICMP Echo Request to ping
@@ -27,7 +38,7 @@ heartbeat.monitors:
  enabled: true
  # Configure task schedule using cron-like syntax
-  schedule: '@every 30s'  # every 30 seconds from start of beat
+  schedule: '*/30 * * * * * *'  # exactly every 30 seconds like 10:00:00, 10:00:30, ...
  # List of hosts to ping
  hosts: {{ (icmp_hosts | default([])) | to_json }}
@@ -37,14 +48,6 @@ heartbeat.monitors:
  ipv6: true
  mode: any
-  # Configure file json file to be watched for changes to the monitor:
-  #watch.poll_file:
-    # Path to check for updates.
-    #path:
-    # Interval between file file changed checks.
-    #interval: 5s
  # Total running time per ping test.
  timeout: {{ icmp_hosts | length }}s
@@ -100,13 +103,27 @@
  # by sending/receiving a custom payload
  # Monitor name used for job name and document type
+  name: {{ item.name }}
+  # Enable/Disable monitor
+  enabled: true
+  # Configure task schedule
+  schedule: '@every 30s'  # every 30 seconds from start of beat
+  # configure hosts to ping.
+  # Entries can be:
+  # - plain host name or IP like `localhost`:
+  #   Requires ports configs to be checked. If ssl is configured,
+  #   a SSL/TLS based connection will be established. Otherwise plain tcp connection
+  #   will be established
  name: "{{ item.name }}"
  # Enable/Disable monitor
  enabled: true
  # Configure task schedule
-  schedule: '@every 45s'  # every 30 seconds from start of beat
+  schedule: '@every 45s'  # every 5 seconds from start of beat
  # configure hosts to ping.
  # Entries can be:
@@ -132,14 +149,6 @@ heartbeat.monitors:
  ipv6: true
  mode: any
-  # Configure file json file to be watched for changes to the monitor:
-  #watch.poll_file:
-    # Path to check for updates.
-    #path:
-    # Interval between file file changed checks.
-    #interval: 5s
  # List of ports to ping if host does not contain a port number
  # ports: [80, 9200, 5044]
@@ -178,6 +187,15 @@ heartbeat.monitors:
{% endfor %}
{% endfor %}
{% if hosts | length > 0 %}
+# NOTE: THIS FEATURE IS DEPRECATED AND WILL BE REMOVED IN A FUTURE RELEASE
+# Configure file json file to be watched for changes to the monitor:
+#watch.poll_file:
+  # Path to check for updates.
+  #path:
+  # Interval between file file changed checks.
+  #interval: 5s
- type: http  # monitor type `http`. Connect via HTTP an optionally verify response
  # Monitor name used for job name and document type
@@ -187,7 +205,7 @@ heartbeat.monitors:
  enabled: true
  # Configure task schedule
-  schedule: '@every 60s'  # every 30 seconds from start of beat
+  schedule: '@every 60s'  # every 5 seconds from start of beat
  # Configure URLs to ping
  urls: {{ (hosts | default([])) | to_json }}
@@ -196,7 +214,7 @@ heartbeat.monitors:
  # Ping all resolvable IPs if `mode` is `all`, or only one IP if `mode` is `any`.
  ipv4: true
  ipv6: true
-  mode: "any"
+  mode: any
  # Configure file json file to be watched for changes to the monitor:
  #watch.poll_file:
@ -206,7 +224,7 @@ heartbeat.monitors:
# Interval between file file changed checks. # Interval between file file changed checks.
#interval: 5s #interval: 5s
# Optional HTTP proxy url. If not set HTTP_PROXY environment variable will be used. # Optional HTTP proxy url.
#proxy_url: '' #proxy_url: ''
# Total test connection and data exchange timeout # Total test connection and data exchange timeout
@ -233,7 +251,6 @@ heartbeat.monitors:
# Dictionary of additional HTTP headers to send: # Dictionary of additional HTTP headers to send:
headers: headers:
User-agent: osa-heartbeat-healthcheck User-agent: osa-heartbeat-healthcheck
# Optional request body content # Optional request body content
#body: #body:
@ -255,6 +272,24 @@ heartbeat.monitors:
{% endif %} {% endif %}
{% endfor %} {% endfor %}
# Parses the body as JSON, then checks against the given condition expression
#json:
#- description: Explanation of what the check does
# condition:
# equals:
# myField: expectedValue
# NOTE: THIS FEATURE IS DEPRECATED AND WILL BE REMOVED IN A FUTURE RELEASE
# Configure a JSON file to be watched for changes to the monitor:
#watch.poll_file:
# Path to check for updates.
#path:
# Interval between file-changed checks.
#interval: 5s
heartbeat.scheduler:
# Limit number of concurrent tasks executed by heartbeat. The task limit is
# disabled if set to 0. The default is 0.
@@ -347,7 +382,7 @@ heartbeat.scheduler:
# Sets the write buffer size.
#buffer_size: 1MiB
# Maximum duration after which events are flushed if the write buffer
# is not full yet. The default value is 1s.
#flush.timeout: 1s
@@ -361,7 +396,7 @@ heartbeat.scheduler:
#codec: cbor
#read:
# Reader flush timeout, waiting for more events to become available, so
# to fill a complete batch as required by the outputs.
# If flush_timeout is 0, all available events are forwarded to the
# outputs immediately.
# The default value is 0s.
@@ -515,12 +550,15 @@ processors:
# Set gzip compression level.
#compression_level: 0
# Configure escaping HTML symbols in strings.
#escape_html: false
# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "elastic"
#password: "changeme"
# Dictionary of HTTP parameters to pass within the URL with index operations.
#parameters:
#param1: value1
#param2: value2
@@ -531,19 +569,19 @@ processors:
# Optional index name. The default is "heartbeat" plus date
# and generates [heartbeat-]YYYY.MM.DD keys.
# In case you modify this pattern you must update setup.template.name and setup.template.pattern accordingly.
#index: "heartbeat-%{[agent.version]}-%{+yyyy.MM.dd}"
# Optional ingest node pipeline. By default no pipeline will be used.
#pipeline: ""
# Optional HTTP path
#path: "/elasticsearch"
# Custom HTTP headers to add to each request
#headers:
# X-My-Header: Contents of the header
# Proxy server URL
#proxy_url: http://proxy:3128
# The number of times a particular Elasticsearch index operation is attempted. If
@@ -555,40 +593,50 @@ processors:
# The default is 50.
#bulk_max_size: 50
# The number of seconds to wait before trying to reconnect to Elasticsearch
# after a network error. After waiting backoff.init seconds, the Beat
# tries to reconnect. If the attempt fails, the backoff timer is increased
# exponentially up to backoff.max. After a successful connection, the backoff
# timer is reset. The default is 1s.
#backoff.init: 1s
# The maximum number of seconds to wait before attempting to connect to
# Elasticsearch after a network error. The default is 60s.
#backoff.max: 60s
# Configure HTTP request timeout before failing a request to Elasticsearch.
#timeout: 90 #timeout: 90
# Use SSL settings for HTTPS.
#ssl.enabled: true
# Configure SSL verification mode. If `none` is configured, all server hosts
# and certificates will be accepted. In this mode, SSL-based connections are
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
# `full`.
#ssl.verification_mode: full
# List of supported/valid TLS versions. By default all TLS versions from 1.0 up to
# 1.2 are enabled.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
# SSL configuration. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client certificate key
#ssl.key: "/etc/pki/client/cert.key"
# Optional passphrase for decrypting the certificate key.
#ssl.key_passphrase: ''
# Configure cipher suites to be used for SSL connections
#ssl.cipher_suites: []
# Configure curve types for ECDHE-based cipher suites
#ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never.
@@ -603,7 +651,7 @@ processors:
# Boolean flag to enable or disable the output module.
#enabled: true
# The list of Kafka broker addresses from which to fetch the cluster metadata.
# The cluster metadata contain the actual Kafka brokers events are published
# to.
#hosts: ["localhost:9092"]
@@ -612,7 +660,7 @@ processors:
# using any event field. To set the topic from document type use `%{[type]}`.
#topic: beats
# The Kafka event key setting. Use format string to create a unique event key.
# By default no event key will be generated.
#key: ''
@@ -633,28 +681,38 @@ processors:
#username: ''
#password: ''
# Kafka version heartbeat is assumed to run against. Defaults to "1.0.0".
#version: '1.0.0'
# Configure JSON encoding
#codec.json:
# Pretty-print JSON event
#pretty: false
# Configure escaping HTML symbols in strings.
#escape_html: false
# Metadata update configuration. Metadata contains leader information
# used to decide which broker to use when publishing.
#metadata:
# Max metadata request retry attempts when cluster is in middle of leader
# election. Defaults to 3 retries.
#retry.max: 3
# Wait time between retries during leader elections. Default is 250ms.
#retry.backoff: 250ms
# Refresh metadata interval. Defaults to every 10 minutes.
#refresh_frequency: 10m
# Strategy for fetching the topics metadata from the broker. Default is true.
#full: true
# The number of concurrent load-balanced Kafka output workers.
#worker: 1
# The number of times to retry publishing an event after a publishing failure.
# After the specified number of retries, events are typically dropped.
# Some Beats, such as Filebeat, ignore the max_retries setting and retry until
# all events are published. Set max_retries to a value less than 0 to retry
# until all events are published. The default is 3.
@@ -683,6 +741,10 @@ processors:
# default is gzip. # default is gzip.
#compression: gzip #compression: gzip
# Set the compression level. Currently only gzip provides a compression level
# between 0 and 9. The default value is chosen by the compression algorithm.
#compression_level: 4
# The maximum permitted size of JSON-encoded messages. Bigger messages will be
# dropped. The default value is 1000000 (bytes). This value should be equal to
# or less than the broker's message.max.bytes.
@@ -698,7 +760,7 @@ processors:
# purposes. The default is "beats".
#client_id: beats
# Enable SSL support. SSL is automatically enabled if any SSL setting is set.
#ssl.enabled: true
# Optional SSL configuration options. SSL is off by default.
@@ -711,7 +773,7 @@ processors:
# `full`.
#ssl.verification_mode: full
# List of supported/valid TLS versions. By default all TLS versions from 1.0 up to
# 1.2 are enabled.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
@@ -727,7 +789,7 @@ processors:
# Configure cipher suites to be used for SSL connections
#ssl.cipher_suites: []
# Configure curve types for ECDHE-based cipher suites
#ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are
@@ -739,20 +801,24 @@ processors:
# Boolean flag to enable or disable the output module.
#enabled: true
# Configure JSON encoding
#codec.json:
# Pretty-print JSON event
#pretty: false
# Configure escaping HTML symbols in strings.
#escape_html: false
# The list of Redis servers to connect to. If load-balancing is enabled, the
# events are distributed to the servers in the list. If one server becomes
# unreachable, the events are distributed to the reachable servers only.
#hosts: ["localhost:6379"]
# The Redis port to use if hosts does not contain a port number. The default
# is 6379.
#port: 6379
# The name of the Redis list or channel the events are published to. The
# default is heartbeat.
#key: heartbeat
# The password to authenticate to Redis with. The default is no authentication.
#password:
# The Redis database number where the events are published. The default is 0.
@@ -786,6 +852,17 @@ processors:
# until all events are published. The default is 3.
#max_retries: 3
# The number of seconds to wait before trying to reconnect to Redis
# after a network error. After waiting backoff.init seconds, the Beat
# tries to reconnect. If the attempt fails, the backoff timer is increased
# exponentially up to backoff.max. After a successful connection, the backoff
# timer is reset. The default is 1s.
#backoff.init: 1s
# The maximum number of seconds to wait before attempting to connect to
# Redis after a network error. The default is 60s.
#backoff.max: 60s
# The maximum number of events to bulk in a single Redis request or pipeline.
# The default is 2048.
#bulk_max_size: 2048
@@ -842,11 +919,11 @@ processors:
# Configure JSON encoding
#codec.json:
# Pretty-print JSON event
#pretty: false
# Configure escaping HTML symbols in strings.
#escape_html: false
# Path to the directory where to save the generated files. The option is
# mandatory.
@@ -877,11 +954,11 @@ processors:
# Configure JSON encoding
#codec.json:
# Pretty-print JSON event
#pretty: false
# Configure escaping HTML symbols in strings.
#escape_html: false
#================================= Paths ======================================
@@ -916,9 +993,27 @@ processors:
#============================== Dashboards =====================================
{{ elk_macros.setup_dashboards('heartbeat') }}
#============================== Template =====================================
{{ elk_macros.setup_template('heartbeat', inventory_hostname, data_nodes, elasticsearch_number_of_replicas) }}
#============================== Setup ILM =====================================
# Configure Index Lifecycle Management. Index Lifecycle Management creates a
# write alias and adds additional settings to the template.
# The elasticsearch.output.index setting will be replaced with the write alias
# if ILM is enabled.
# Enable ILM support. Valid values are true, false, and auto. The beat will
# detect availability of Index Lifecycle Management in Elasticsearch and enable
# or disable ILM support.
#setup.ilm.enabled: auto
# Configure the ILM write alias name.
#setup.ilm.rollover_alias: "heartbeat"
# Configure rollover index pattern.
#setup.ilm.pattern: "{now/d}-000001"
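These defaults leave ILM on auto-detection. A minimal sketch of an explicit override, assuming only the stock `setup.ilm.*` option names documented above; the alias value is invented for illustration:

```yaml
# Illustrative override only; the alias value is an assumption, not a project default.
setup.ilm.enabled: true
setup.ilm.rollover_alias: "heartbeat-custom"
setup.ilm.pattern: "{now/d}-000001"
```

With ILM enabled this way, the configured Elasticsearch output index is ignored and writes go through the rollover alias instead.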
#============================== Kibana =====================================
{% if (groups['kibana'] | length) > 0 %}
{{ elk_macros.setup_kibana(hostvars[groups['kibana'][0]]['ansible_host'] ~ ':' ~ kibana_port) }}
@@ -949,3 +1044,389 @@ processors:
# Enable or disable seccomp system call filtering on Linux. Default is enabled.
#seccomp.enabled: true
#================================= Migration ==================================
# This setting enables the 6.7 migration aliases
#migration.6_to_7.enabled: false
################### Heartbeat Configuration Example #########################
# This file is a full configuration example documenting all non-deprecated
# options in comments. For a shorter configuration example, that contains
# only some common options, please see heartbeat.yml in the same directory.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/heartbeat/index.html
############################# Heartbeat ######################################
{% set icmp_hosts = [] %}
{% for host_item in groups['all'] %}
{% if hostvars[host_item]['ansible_host'] is defined %}
{% set _ = icmp_hosts.extend([hostvars[host_item]['ansible_host']]) %}
{% endif %}
{% endfor %}
# Define a directory to load monitor definitions from. Definitions take the form
# of individual yaml files.
heartbeat.config.monitors:
# Directory + glob pattern to search for configuration files
path: ${path.config}/monitors.d/*.yml
# If enabled, heartbeat will periodically check the config.monitors path for changes
reload.enabled: false
# How often to check for changes
reload.period: 5s
# Configure monitors
heartbeat.monitors:
- type: icmp # monitor type `icmp` (requires root) uses ICMP Echo Request to ping
# configured hosts
# Monitor name used for job name and document type.
name: icmp
# Enable/Disable monitor
enabled: true
# Configure task schedule using cron-like syntax
schedule: '*/5 * * * * * *' # exactly every 5 seconds like 10:00:00, 10:00:05, ...
# List of hosts to ping
hosts: {{ (icmp_hosts | default([])) | to_json }}
# Configure IP protocol types to ping on if hostnames are configured.
# Ping all resolvable IPs if `mode` is `all`, or only one IP if `mode` is `any`.
ipv4: true
ipv6: true
mode: any
# Total running time per ping test.
timeout: {{ icmp_hosts | length }}s
# Waiting duration until another ICMP Echo Request is emitted.
wait: 1s
# The tags of the monitors are included in their own field with each
# transaction published. Tags make it easy to group servers by different
# logical properties.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# monitor output. Fields can be scalar values, arrays, dictionaries, or any nested
# combination of these.
#fields:
# env: staging
# If this option is set to true, the custom fields are stored as top-level
# fields in the output document instead of being grouped under a fields
# sub-dictionary. Default is false.
#fields_under_root: false
# NOTE: THIS FEATURE IS DEPRECATED AND WILL BE REMOVED IN A FUTURE RELEASE
# Configure a JSON file to be watched for changes to the monitor:
#watch.poll_file:
# Path to check for updates.
#path:
# Interval between file-changed checks.
#interval: 5s
# Define a directory to load monitor definitions from. Definitions take the form
# of individual yaml files.
# heartbeat.config.monitors:
# Directory + glob pattern to search for configuration files
#path: /path/to/my/monitors.d/*.yml
# If enabled, heartbeat will periodically check the config.monitors path for changes
#reload.enabled: true
# How often to check for changes
#reload.period: 1s
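Because `heartbeat.config.monitors` above points at `${path.config}/monitors.d/*.yml`, an individual monitor can also be split out into its own file. A hedged sketch of such a file, using only option names that appear in this template; the filename, host and port are invented for illustration:

```yaml
# Hypothetical ${path.config}/monitors.d/galera.yml
# Picked up dynamically only while reload.enabled is true in heartbeat.config.monitors.
- type: tcp
  name: galera
  enabled: true
  schedule: '@every 30s'
  hosts: ["172.29.236.100:3306"]
  ipv4: true
  mode: any
  timeout: 16s
```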
{% for item in heartbeat_services %}
{% if item.type == 'tcp' %}
{% set hosts = [] %}
{% for port in item.ports | default([]) %}
{% for backend in item.group | default([]) %}
{% set backend_host = hostvars[backend]['ansible_host'] %}
{% set _ = hosts.extend([backend_host + ":" + (port | string)]) %}
{% endfor %}
{% endfor %}
{% if hosts | length > 0 %}
- type: tcp # monitor type `tcp`. Connect via TCP and optionally verify endpoint
# by sending/receiving a custom payload
# Monitor name used for job name and document type
name: "{{ item.name }}"
# Enable/Disable monitor
enabled: true
# Configure task schedule
schedule: '@every 5s' # every 5 seconds from start of beat
# configure hosts to ping.
# Entries can be:
# - plain host name or IP like `localhost`:
# Requires ports configs to be checked. If ssl is configured,
# a SSL/TLS based connection will be established. Otherwise plain tcp connection
# will be established
# - hostname + port like `localhost:12345`:
# Connect to port on given host. If ssl is configured,
# a SSL/TLS based connection will be established. Otherwise plain tcp connection
# will be established
# - full url syntax. `scheme://<host>:[port]`. The `<scheme>` can be one of
# `tcp`, `plain`, `ssl` and `tls`. If `tcp` or `plain` is configured, a plain
# tcp connection will be established, even if ssl is configured.
# Using `tls`/`ssl`, an SSL connection is established. If no ssl is configured,
# system defaults will be used (not supported on windows).
# If `port` is missing in url, the ports setting is required.
hosts: {{ (hosts | default([])) | to_json }}
# Configure IP protocol types to ping on if hostnames are configured.
# Ping all resolvable IPs if `mode` is `all`, or only one IP if `mode` is `any`.
ipv4: true
ipv6: true
mode: any
# List of ports to ping if host does not contain a port number
# ports: [80, 9200, 5044]
# Total test connection and data exchange timeout
#timeout: 16s
# Optional payload string to send to remote and expected answer. If none is
# configured, the endpoint is expected to be up if the connection attempt is
# successful. If only `send_string` is configured, any response will be
# accepted as ok. If only `receive_string` is configured, no payload will be
# sent, but the client expects to receive the expected payload on connect.
#check:
#send: ''
#receive: ''
# SOCKS5 proxy url
# proxy_url: ''
# Resolve hostnames locally instead of on the SOCKS5 server:
#proxy_use_local_resolver: false
# TLS/SSL connection settings:
#ssl:
# Certificate Authorities
#certificate_authorities: ['']
# Required TLS protocols
#supported_protocols: ["TLSv1.0", "TLSv1.1", "TLSv1.2"]
{% endif %}
{% elif item.type == 'http' %}
{% set hosts = [] %}
{% for port in item.ports | default([]) %}
{% for backend in item.group | default([]) %}
{% set backend_host = hostvars[backend]['ansible_host'] %}
{% set _ = hosts.extend(["http://" + backend_host + ":" + (port | string) + item.path]) %}
{% endfor %}
{% endfor %}
{% if hosts | length > 0 %}
# NOTE: THIS FEATURE IS DEPRECATED AND WILL BE REMOVED IN A FUTURE RELEASE
# Configure a JSON file to be watched for changes to the monitor:
#watch.poll_file:
# Path to check for updates.
#path:
# Interval between file-changed checks.
#interval: 5s
- type: http # monitor type `http`. Connect via HTTP and optionally verify response
# Monitor name used for job name and document type
name: "{{ item.name }}"
# Enable/Disable monitor
enabled: true
# Configure task schedule
schedule: '@every 5s' # every 5 seconds from start of beat
# Configure URLs to ping
urls: {{ (hosts | default([])) | to_json }}
# Configure IP protocol types to ping on if hostnames are configured.
# Ping all resolvable IPs if `mode` is `all`, or only one IP if `mode` is `any`.
ipv4: true
ipv6: true
mode: any
# Configure a JSON file to be watched for changes to the monitor:
#watch.poll_file:
# Path to check for updates.
#path:
# Interval between file-changed checks.
#interval: 5s
# Optional HTTP proxy url.
#proxy_url: ''
# Total test connection and data exchange timeout
#timeout: 16s
# Optional Authentication Credentials
#username: ''
#password: ''
# TLS/SSL connection settings for use with HTTPS endpoint. If not configured
# system defaults will be used.
#ssl:
# Certificate Authorities
#certificate_authorities: ['']
# Required TLS protocols
#supported_protocols: ["TLSv1.0", "TLSv1.1", "TLSv1.2"]
# Request settings:
check.request:
# Configure HTTP method to use. Only 'HEAD', 'GET' and 'POST' methods are allowed.
method: "{{ item.method }}"
# Dictionary of additional HTTP headers to send:
headers:
User-agent: osa-heartbeat-healthcheck
# Optional request body content
#body:
# Expected response settings
{% if item.check_response is defined %}
check.response: {{ item.check_response }}
#check.response:
# Expected status code. If not configured or set to 0 any status code not
# being 404 is accepted.
#status: 0
# Required response headers.
#headers:
# Required response contents.
#body:
{% endif %}
{% endif %}
{% endif %}
{% endfor %}
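The loop above consumes a `heartbeat_services` list whose entries carry the keys referenced in this template (`type`, `name`, `ports`, `group`, `path`, `method`, and optionally `check_response`). A sketch of one plausible entry; the service, port and group values are invented for illustration:

```yaml
# Hypothetical user-variable entry; only the key names are taken from this template.
heartbeat_services:
  - type: http
    name: keystone
    ports: [5000]
    group: "{{ groups['keystone_all'] | default([]) }}"
    path: "/v3"
    method: HEAD
```

Each (port, group member) pair is expanded into a URL of the form `http://<ansible_host>:<port><path>` and monitored on the schedule defined above.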
# Parses the body as JSON, then checks against the given condition expression
#json:
#- description: Explanation of what the check does
# condition:
# equals:
# myField: expectedValue
# NOTE: THIS FEATURE IS DEPRECATED AND WILL BE REMOVED IN A FUTURE RELEASE
# Configure a JSON file to be watched for changes to the monitor:
#watch.poll_file:
# Path to check for updates.
#path:
# Interval between file-changed checks.
#interval: 5s
heartbeat.scheduler:
# Limit number of concurrent tasks executed by heartbeat. The task limit is
# disabled if set to 0. The default is 0.
#username: "beats_system"
#password: "changeme"
# Dictionary of HTTP parameters to pass within the URL with index operations.
#parameters:
#param1: value1
#param2: value2
# Custom HTTP headers to add to each request
#headers:
# X-My-Header: Contents of the header
# Proxy server url
#proxy_url: http://proxy:3128
# The number of times a particular Elasticsearch index operation is attempted. If
# the indexing operation doesn't succeed after this many retries, the events are
# dropped. The default is 3.
#max_retries: 3
# The maximum number of events to bulk in a single Elasticsearch bulk API index request.
# The default is 50.
#bulk_max_size: 50
# The number of seconds to wait before trying to reconnect to Elasticsearch
# after a network error. After waiting backoff.init seconds, the Beat
# tries to reconnect. If the attempt fails, the backoff timer is increased
# exponentially up to backoff.max. After a successful connection, the backoff
# timer is reset. The default is 1s.
#backoff.init: 1s
# The maximum number of seconds to wait before attempting to connect to
# Elasticsearch after a network error. The default is 60s.
#backoff.max: 60s
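Taken together, `backoff.init` and `backoff.max` give exponential reconnect delays: 1s, 2s, 4s, and so on, capped at 60s by default. A sketch of a slower-ramping override using only the option names documented above; the values are assumptions for illustration:

```yaml
# Illustrative values only: first retry after 2s, doubling up to a 120s ceiling,
# so delays grow roughly 2s, 4s, 8s, ... 120s until the connection succeeds.
output.elasticsearch:
  hosts: ["localhost:9200"]
  backoff.init: 2s
  backoff.max: 120s
```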
# Configure HTTP request timeout before failing a request to Elasticsearch.
#timeout: 90
# Use SSL settings for HTTPS.
#ssl.enabled: true
# Configure SSL verification mode. If `none` is configured, all server hosts
# and certificates will be accepted. In this mode, SSL-based connections are
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
# `full`.
#ssl.verification_mode: full
# List of supported/valid TLS versions. By default all TLS versions from 1.0 up to
# 1.2 are enabled.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
# SSL configuration. The default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client certificate key
#ssl.key: "/etc/pki/client/cert.key"
# Optional passphrase for decrypting the certificate key.
#ssl.key_passphrase: ''
# Configure cipher suites to be used for SSL connections
#ssl.cipher_suites: []
# Configure curve types for ECDHE-based cipher suites
#ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never.
#ssl.renegotiation: never
#metrics.period: 10s
#state.period: 1m
#================================ HTTP Endpoint ======================================
# Each beat can expose internal metrics through an HTTP endpoint. For security
# reasons the endpoint is disabled by default. This feature is currently experimental.
# Stats can be accessed through http://localhost:5066/stats. For pretty JSON output
# append ?pretty to the URL.
# Defines if the HTTP endpoint is enabled.
#http.enabled: false
# The HTTP endpoint will bind to this hostname or IP address. It is recommended to use only localhost.
#http.host: localhost
# Port on which the HTTP endpoint will bind. Default is 5066.
#http.port: 5066
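If the endpoint is wanted for debugging, the three settings above are all that is needed. A minimal sketch, keeping the recommended localhost binding:

```yaml
# Debug-only sketch: the stats endpoint is unauthenticated, so keep it on localhost.
http.enabled: true
http.host: localhost
http.port: 5066
```

Once the beat restarts, `curl 'http://localhost:5066/stats?pretty'` should return its internal metrics as JSON.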
#============================= Process Security ================================
# Enable or disable seccomp system call filtering on Linux. Default is enabled.
#seccomp.enabled: true
#================================= Migration ==================================
# This setting enables the 6.7 migration aliases
#migration.6_to_7.enabled: false


@@ -20,12 +20,12 @@ journalbeat.inputs:
- paths: ["/var/log/journal"]
# The number of seconds to wait before trying to read again from journals.
backoff: 10s
# The maximum number of seconds to wait before attempting to read again from journals.
max_backoff: 60s
# Position to start reading from journal. Valid values: head, tail, cursor
seek: head
# Fallback position if no cursor data is available.
#cursor_seek_fallback: head
@@ -46,17 +46,11 @@ journalbeat:
# data path.
registry_file: registry
#==================== Elasticsearch template setting ==========================
setup.template.settings:
index.number_of_shards: 1
#index.codec: best_compression
#_source.enabled: false
# Position to start reading from all journal. Possible values: head, tail, cursor
seek: head
# Exact matching for field values of events.
# Matching for nginx entries: "systemd.unit=nginx"
#matches: []
#================================ General ====================================== #================================ General ======================================
@@ -143,7 +137,7 @@ tags:
 # Sets the write buffer size.
 #buffer_size: 1MiB
-# Maximum duration after which events are flushed, if the write buffer
+# Maximum duration after which events are flushed if the write buffer
 # is not full yet. The default value is 1s.
 #flush.timeout: 1s
@@ -157,7 +151,7 @@ tags:
 #codec: cbor
 #read:
 # Reader flush timeout, waiting for more events to become available, so
-# to fill a complete batch, as required by the outputs.
+# to fill a complete batch as required by the outputs.
 # If flush_timeout is 0, all available events are forwarded to the
 # outputs immediately.
 # The default value is 0s.
@@ -277,9 +271,9 @@ tags:
 # max_depth: 1
 # target: ""
 # overwrite_keys: false
-processors:
-- add_host_metadata: ~
+processors:
+- add_host_metadata:
 #============================= Elastic Cloud ==================================
 # These settings simplify using journalbeat with the Elastic Cloud (https://cloud.elastic.co/).
@@ -295,8 +289,7 @@ processors:
 #================================ Outputs ======================================
-# Configure what outputs to use when sending the data collected by the beat.
-# Multiple outputs may be used.
+# Configure what output to use when sending the data collected by the beat.
 #-------------------------- Elasticsearch output -------------------------------
 #output.elasticsearch:
@@ -309,23 +302,18 @@ processors:
 # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
 #hosts: ["localhost:9200"]
-# Enabled ilm (beta) to use index lifecycle management instead daily indices.
-#ilm.enabled: false
-#ilm.rollover_alias: "journalbeat"
-#ilm.pattern: "{now/d}-000001"
 # Set gzip compression level.
 #compression_level: 0
-# Configure escaping html symbols in strings.
-#escape_html: true
+# Configure escaping HTML symbols in strings.
+#escape_html: false
 # Optional protocol and basic auth credentials.
 #protocol: "https"
 #username: "elastic"
 #password: "changeme"
-# Dictionary of HTTP parameters to pass within the url with index operations.
+# Dictionary of HTTP parameters to pass within the URL with index operations.
 #parameters:
 #param1: value1
 #param2: value2
@@ -336,19 +324,19 @@ processors:
 # Optional index name. The default is "journalbeat" plus date
 # and generates [journalbeat-]YYYY.MM.DD keys.
 # In case you modify this pattern you must update setup.template.name and setup.template.pattern accordingly.
-#index: "journalbeat-%{[beat.version]}-%{+yyyy.MM.dd}"
+#index: "journalbeat-%{[agent.version]}-%{+yyyy.MM.dd}"
 # Optional ingest node pipeline. By default no pipeline will be used.
 #pipeline: ""
-# Optional HTTP Path
+# Optional HTTP path
 #path: "/elasticsearch"
 # Custom HTTP headers to add to each request
 #headers:
 # X-My-Header: Contents of the header
-# Proxy server url
+# Proxy server URL
 #proxy_url: http://proxy:3128
 # The number of times a particular Elasticsearch index operation is attempted. If
@@ -371,45 +359,45 @@ processors:
 # Elasticsearch after a network error. The default is 60s.
 #backoff.max: 60s
-# Configure http request timeout before failing a request to Elasticsearch.
+# Configure HTTP request timeout before failing a request to Elasticsearch.
 #timeout: 90
 # Use SSL settings for HTTPS.
 #ssl.enabled: true
 # Configure SSL verification mode. If `none` is configured, all server hosts
-# and certificates will be accepted. In this mode, SSL based connections are
+# and certificates will be accepted. In this mode, SSL-based connections are
 # susceptible to man-in-the-middle attacks. Use only for testing. Default is
 # `full`.
 #ssl.verification_mode: full
-# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
+# List of supported/valid TLS versions. By default all TLS versions from 1.0 up to
 # 1.2 are enabled.
 #ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
-# SSL configuration. By default is off.
 # List of root certificates for HTTPS server verifications
 #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
 # Certificate for SSL client authentication
 #ssl.certificate: "/etc/pki/client/cert.pem"
-# Client Certificate Key
+# Client certificate key
 #ssl.key: "/etc/pki/client/cert.key"
-# Optional passphrase for decrypting the Certificate Key.
+# Optional passphrase for decrypting the certificate key.
 #ssl.key_passphrase: ''
 # Configure cipher suites to be used for SSL connections
 #ssl.cipher_suites: []
-# Configure curve types for ECDHE based cipher suites
+# Configure curve types for ECDHE-based cipher suites
 #ssl.curve_types: []
 # Configure what types of renegotiation are supported. Valid options are
 # never, once, and freely. Default is never.
 #ssl.renegotiation: never
 #----------------------------- Logstash output ---------------------------------
 {{ elk_macros.output_logstash(inventory_hostname, logstash_data_hosts, ansible_processor_count, 'journalbeat') }}
@@ -418,7 +406,7 @@ processors:
 # Boolean flag to enable or disable the output module.
 #enabled: true
-# The list of Kafka broker addresses from where to fetch the cluster metadata.
+# The list of Kafka broker addresses from which to fetch the cluster metadata.
 # The cluster metadata contain the actual Kafka brokers events are published
 # to.
 #hosts: ["localhost:9092"]
@@ -427,7 +415,7 @@ processors:
 # using any event field. To set the topic from document type use `%{[type]}`.
 #topic: beats
-# The Kafka event key setting. Use format string to create unique event key.
+# The Kafka event key setting. Use format string to create a unique event key.
 # By default no event key will be generated.
 #key: ''
@@ -453,30 +441,33 @@ processors:
 # Configure JSON encoding
 #codec.json:
-# Pretty print json event
+# Pretty-print JSON event
 #pretty: false
-# Configure escaping html symbols in strings.
-#escape_html: true
+# Configure escaping HTML symbols in strings.
+#escape_html: false
-# Metadata update configuration. Metadata do contain leader information
-# deciding which broker to use when publishing.
+# Metadata update configuration. Metadata contains leader information
+# used to decide which broker to use when publishing.
 #metadata:
 # Max metadata request retry attempts when cluster is in middle of leader
 # election. Defaults to 3 retries.
 #retry.max: 3
-# Waiting time between retries during leader elections. Default is 250ms.
+# Wait time between retries during leader elections. Default is 250ms.
 #retry.backoff: 250ms
 # Refresh metadata interval. Defaults to every 10 minutes.
 #refresh_frequency: 10m
+# Strategy for fetching the topics metadata from the broker. Default is true.
+#full: true
 # The number of concurrent load-balanced Kafka output workers.
 #worker: 1
 # The number of times to retry publishing an event after a publishing failure.
-# After the specified number of retries, the events are typically dropped.
+# After the specified number of retries, events are typically dropped.
 # Some Beats, such as Filebeat, ignore the max_retries setting and retry until
 # all events are published. Set max_retries to a value less than 0 to retry
 # until all events are published. The default is 3.
@@ -524,7 +515,7 @@ processors:
 # purposes. The default is "beats".
 #client_id: beats
-# Enable SSL support. SSL is automatically enabled, if any SSL setting is set.
+# Enable SSL support. SSL is automatically enabled if any SSL setting is set.
 #ssl.enabled: true
 # Optional SSL configuration options. SSL is off by default.
@@ -537,7 +528,7 @@ processors:
 # `full`.
 #ssl.verification_mode: full
-# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
+# List of supported/valid TLS versions. By default all TLS versions from 1.0 up to
 # 1.2 are enabled.
 #ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
@@ -553,7 +544,7 @@ processors:
 # Configure cipher suites to be used for SSL connections
 #ssl.cipher_suites: []
-# Configure curve types for ECDHE based cipher suites
+# Configure curve types for ECDHE-based cipher suites
 #ssl.curve_types: []
 # Configure what types of renegotiation are supported. Valid options are
@@ -570,23 +561,19 @@ processors:
 # Pretty print json event
 #pretty: false
-# Configure escaping html symbols in strings.
-#escape_html: true
+# Configure escaping HTML symbols in strings.
+#escape_html: false
-# The list of Redis servers to connect to. If load balancing is enabled, the
+# The list of Redis servers to connect to. If load-balancing is enabled, the
 # events are distributed to the servers in the list. If one server becomes
 # unreachable, the events are distributed to the reachable servers only.
 #hosts: ["localhost:6379"]
-# The Redis port to use if hosts does not contain a port number. The default
-# is 6379.
-#port: 6379
 # The name of the Redis list or channel the events are published to. The
 # default is journalbeat.
 #key: journalbeat
-# The password to authenticate with. The default is no authentication.
+# The password to authenticate to Redis with. The default is no authentication.
 #password:
 # The Redis database number where the events are published. The default is 0.
@@ -687,11 +674,11 @@ processors:
 # Configure JSON encoding
 #codec.json:
-# Pretty print json event
+# Pretty-print JSON event
 #pretty: false
-# Configure escaping html symbols in strings.
-#escape_html: true
+# Configure escaping HTML symbols in strings.
+#escape_html: false
 # Path to the directory where to save the generated files. The option is
 # mandatory.
@@ -722,11 +709,11 @@ processors:
 # Configure JSON encoding
 #codec.json:
-# Pretty print json event
+# Pretty-print JSON event
 #pretty: false
-# Configure escaping html symbols in strings.
-#escape_html: true
+# Configure escaping HTML symbols in strings.
+#escape_html: false
 #================================= Paths ======================================
@@ -761,9 +748,28 @@ processors:
 #============================== Dashboards =====================================
 {{ elk_macros.setup_dashboards('journalbeat') }}
-#=============================== Template ======================================
+#============================== Template =====================================
 {{ elk_macros.setup_template('journalbeat', inventory_hostname, data_nodes, elasticsearch_number_of_replicas) }}
+#============================== Setup ILM =====================================
+# Configure Index Lifecycle Management. Index Lifecycle Management creates a
+# write alias and adds additional settings to the template.
+# The elasticsearch.output.index setting will be replaced with the write alias
+# if ILM is enabled.
+# Enable ILM support. Valid values are true, false, and auto. The beat will
+# detect availability of Index Lifecycle Management in Elasticsearch and enable
+# or disable ILM support.
+#setup.ilm.enabled: auto
+# Configure the ILM write alias name.
+#setup.ilm.rollover_alias: "journalbeat"
+# Configure rollover index pattern.
+#setup.ilm.pattern: "{now/d}-000001"
 #============================== Kibana =====================================
 {% if (groups['kibana'] | length) > 0 %}
 {{ elk_macros.setup_kibana(hostvars[groups['kibana'][0]]['ansible_host'] ~ ':' ~ kibana_port) }}
@@ -772,7 +778,7 @@ processors:
 #================================ Logging ======================================
 {{ elk_macros.beat_logging('journalbeat') }}
-#============================== Xpack Monitoring ===============================
+#============================== Xpack Monitoring =====================================
 {{ elk_macros.xpack_monitoring_elasticsearch(inventory_hostname, elasticsearch_data_hosts, ansible_processor_count) }}
 #================================ HTTP Endpoint ======================================
@@ -794,3 +800,8 @@ processors:
 # Enable or disable seccomp system call filtering on Linux. Default is enabled.
 #seccomp.enabled: true
+#================================= Migration ==================================
+# This allows to enable 6.7 migration aliases
+#migration.6_to_7.enabled: false
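The commit relies on each beat's default ILM behaviour (`setup.ilm.enabled: auto`), which creates a write alias and rolls indices over instead of embedding the date in the index name. As a hedged illustration only (forcing ILM on is an assumption, not something this change does; the alias and pattern values merely echo the commented-out defaults above), an explicit ILM configuration for a beat would look like:

```yaml
# Illustrative sketch only: explicit ILM settings for a beat such as
# journalbeat. This commit keeps the commented-out defaults instead.
setup.ilm.enabled: true                    # force ILM rather than auto-detect
setup.ilm.rollover_alias: "journalbeat"    # write alias used by the output
setup.ilm.pattern: "{now/d}-000001"        # name of the first backing index
```

With ILM enabled, the `output.elasticsearch.index` setting is replaced by the write alias, which is why the date-stamped index names were dropped from the logstash pipelines.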


@@ -1,92 +1,125 @@
 # Kibana is served by a back end server. This setting specifies the port to use.
 server.port: {{ kibana_port }}
-# This setting specifies the IP address of the back end server.
-server.host: {{ kibana_interface }}
+# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
+# The default is 'localhost', which usually means remote machines will not be able to connect.
+# To allow connections from remote users, set this parameter to a non-loopback address.
+server.host: {{ kibana_interface }}
-# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This setting
-# cannot end in a slash.
-# server.basePath: ""
+# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
+# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
+# from requests it receives, and to prevent a deprecation warning at startup.
+# This setting cannot end in a slash.
+#server.basePath: ""
+# Specifies whether Kibana should rewrite requests that are prefixed with
+# `server.basePath` or require that they are rewritten by your reverse proxy.
+# This setting was effectively always `false` before Kibana 6.3 and will
+# default to `true` starting in Kibana 7.0.
+#server.rewriteBasePath: false
 # The maximum payload size in bytes for incoming server requests.
-# server.maxPayloadBytes: 1048576
+#server.maxPayloadBytes: 1048576
-# The URL of the Elasticsearch instance to use for all your queries.
-elasticsearch.url: "http://127.0.0.1:{{ elastic_port }}"
+# The Kibana server's name. This is used for display purposes.
+#server.name: "your-hostname"
+# The URLs of the Elasticsearch instances to use for all your queries.
+elasticsearch.hosts: "http://127.0.0.1:{{ elastic_port }}"
 # When this setting's value is true Kibana uses the hostname specified in the server.host
 # setting. When the value of this setting is false, Kibana uses the hostname of the host
 # that connects to this Kibana instance.
-# elasticsearch.preserveHost: true
+#elasticsearch.preserveHost: true
 # Kibana uses an index in Elasticsearch to store saved searches, visualizations and
 # dashboards. Kibana creates a new index if the index doesn't already exist.
-# kibana.index: ".kibana"
+#kibana.index: ".kibana"
 # The default application to load.
-# kibana.defaultAppId: "discover"
+#kibana.defaultAppId: "discover"
 # If your Elasticsearch is protected with basic authentication, these settings provide
 # the username and password that the Kibana server uses to perform maintenance on the Kibana
 # index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
 # is proxied through the Kibana server.
-# elasticsearch.username: "user"
-# elasticsearch.password: "pass"
+#elasticsearch.username: "user"
+#elasticsearch.password: "pass"
-# Paths to the PEM-format SSL certificate and SSL key files, respectively. These
-# files enable SSL for outgoing requests from the Kibana server to the browser.
-# server.ssl.cert: /path/to/your/server.crt
-# server.ssl.key: /path/to/your/server.key
+# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
+# These settings enable SSL for outgoing requests from the Kibana server to the browser.
+#server.ssl.enabled: false
+#server.ssl.certificate: /path/to/your/server.crt
+#server.ssl.key: /path/to/your/server.key
 # Optional settings that provide the paths to the PEM-format SSL certificate and key files.
 # These files validate that your Elasticsearch backend uses the same key files.
-# elasticsearch.ssl.cert: /path/to/your/client.crt
-# elasticsearch.ssl.key: /path/to/your/client.key
+#elasticsearch.ssl.certificate: /path/to/your/client.crt
+#elasticsearch.ssl.key: /path/to/your/client.key
 # Optional setting that enables you to specify a path to the PEM file for the certificate
 # authority for your Elasticsearch instance.
-# elasticsearch.ssl.ca: /path/to/your/CA.pem
+#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]
-# To disregard the validity of SSL certificates, change this setting's value to false.
-# elasticsearch.ssl.verify: true
+# To disregard the validity of SSL certificates, change this setting's value to 'none'.
+#elasticsearch.ssl.verificationMode: full
 # Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
 # the elasticsearch.requestTimeout setting.
-# elasticsearch.pingTimeout: 1500
+#elasticsearch.pingTimeout: 1500
 # Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
 # must be a positive integer.
 elasticsearch.requestTimeout: {{ kibana_elastic_request_timeout }}
+# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
+# headers, set this value to [] (an empty list).
+#elasticsearch.requestHeadersWhitelist: [ authorization ]
+# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
+# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
+#elasticsearch.customHeaders: {}
 # Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
-# elasticsearch.shardTimeout: 0
+#elasticsearch.shardTimeout: 30000
 # Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
-# elasticsearch.startupTimeout: 5000
+#elasticsearch.startupTimeout: 5000
+# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
+#elasticsearch.logQueries: false
 # Specifies the path where Kibana creates the process ID file.
-# pid.file: /var/run/kibana.pid
+#pid.file: /var/run/kibana.pid
 # Enables you specify a file where Kibana stores log output.
 logging.dest: stdout
 # Set the value of this setting to true to suppress all logging output.
-# logging.silent: false
+#logging.silent: false
 # Set the value of this setting to true to suppress all logging output other than error messages.
-# logging.quiet: false
+#logging.quiet: false
 # Set the value of this setting to true to log all events, including system usage information
 # and all requests.
-# logging.verbose: false
+#logging.verbose: false
+# Set the interval in milliseconds to sample system and process performance
+# metrics. Minimum is 100ms. Defaults to 5000.
+#ops.interval: 5000
+# Specifies locale to be used for all localizable strings, dates and number formats.
+#i18n.locale: "en"
 # ---------------------------------- X-Pack ------------------------------------
 # X-Pack Monitoring
-# https://www.elastic.co/guide/en/kibana/6.3/monitoring-settings-kb.html
+# https://www.elastic.co/guide/en/kibana/7.0/monitoring-settings-kb.html
 xpack.monitoring.enabled: true
 xpack.xpack_main.telemetry.enabled: false
 xpack.monitoring.kibana.collection.enabled: true
 xpack.monitoring.kibana.collection.interval: 30000
 xpack.monitoring.min_interval_seconds: 30
 xpack.monitoring.ui.enabled: true
 xpack.monitoring.ui.container.elasticsearch.enabled: true
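The Kibana diff above is mostly 6.x-to-7.x setting renames. As a hedged summary sketch (the literal host and paths below are placeholders; the template actually interpolates `{{ elastic_port }}`), the key renames are:

```yaml
# Sketch of the 6.x -> 7.x Kibana setting renames applied above.
# Host and paths are illustrative placeholders only.
# 6.x: elasticsearch.url: "http://127.0.0.1:9200"
elasticsearch.hosts: "http://127.0.0.1:9200"
# 6.x: elasticsearch.ssl.ca: /path/to/your/CA.pem
elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]
# 6.x: elasticsearch.ssl.verify: true
elasticsearch.ssl.verificationMode: full
```

The rename from a single `elasticsearch.url` to `elasticsearch.hosts` reflects Kibana 7.x accepting a list of Elasticsearch instances rather than one URL.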


@@ -52,7 +52,7 @@ logstash_deploy_filters: true
 # - server1.local:9092
 # - server2.local:9092
 # - server3.local:9092
-# client_id: "elk_metrics_6x"
+# client_id: "elk_metrics_7x"
 # compression_type: "gzip"
 # security_protocol: "SSL"


@@ -38,7 +38,6 @@ path.data: /var/lib/logstash
 #
 # This defaults to the number of the host's CPU cores.
 #
 {% set _d_processors = ((ansible_processor_count | int) * 3) %}
 {% set _processors = ((_d_processors | int) > 0) | ternary(_d_processors, 2) %}
 {% set _t_processors = (_processors | int) + (ansible_processor_count | int) %}
@@ -225,14 +224,15 @@ path.logs: /var/log/logstash
 # Where to find custom plugins
 # path.plugins: []
 #
-# ---------------------------------- X-Pack ------------------------------------
+# ------------ X-Pack Settings (not applicable for OSS build)--------------
+#
 # X-Pack Monitoring
 # https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
 xpack.monitoring.enabled: true
 #xpack.monitoring.elasticsearch.username: logstash_system
 #xpack.monitoring.elasticsearch.password: password
-xpack.monitoring.elasticsearch.url: ["127.0.0.1:9200"]
-#xpack.monitoring.elasticsearch.ssl.ca: [ "/path/to/ca.crt" ]
+xpack.monitoring.elasticsearch.hosts: ["http://127.0.0.1:9200"]
+#xpack.monitoring.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
 #xpack.monitoring.elasticsearch.ssl.truststore.path: path/to/file
 #xpack.monitoring.elasticsearch.ssl.truststore.password: password
 #xpack.monitoring.elasticsearch.ssl.keystore.path: /path/to/file
@@ -242,18 +242,19 @@ xpack.monitoring.elasticsearch.sniffing: {{ elastic_sniffing_enabled | default(f
 xpack.monitoring.collection.interval: 30s
 xpack.monitoring.collection.pipeline.details.enabled: true
 #
+# ------------ X-Pack Settings (not applicable for OSS build)--------------
 # X-Pack Management
 # https://www.elastic.co/guide/en/logstash/current/logstash-centralized-pipeline-management.html
 #xpack.management.enabled: false
 #xpack.management.pipeline.id: ["main", "apache_logs"]
 #xpack.management.elasticsearch.username: logstash_admin_user
 #xpack.management.elasticsearch.password: password
-#xpack.management.elasticsearch.url: ["https://es1:9200", "https://es2:9200"]
-#xpack.management.elasticsearch.ssl.ca: [ "/path/to/ca.crt" ]
+#xpack.management.elasticsearch.hosts: ["https://es1:9200", "https://es2:9200"]
+#xpack.management.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
 #xpack.management.elasticsearch.ssl.truststore.path: /path/to/file
 #xpack.management.elasticsearch.ssl.truststore.password: password
 #xpack.management.elasticsearch.ssl.keystore.path: /path/to/file
 #xpack.management.elasticsearch.ssl.keystore.password: password
-#xpack.management.elasticsearch.sniffing: {{ elastic_sniffing_enabled | default(false) }}
+#xpack.management.elasticsearch.ssl.verification_mode: certificate
+#xpack.management.elasticsearch.sniffing: false
 #xpack.management.logstash.poll_interval: 5s
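The Jinja expressions in the logstash.yml hunk above size the worker pools from the host CPU count. As a sketch of the template's arithmetic only (the four-core host is an example, not a recommendation), the values evaluate as:

```yaml
# Worker sizing computed by the template for ansible_processor_count = 4:
#   _d_processors = 4 * 3  = 12
#   _processors   = 12     (12 > 0, so the ternary fallback of 2 is unused)
#   _t_processors = 12 + 4 = 16
```

The fallback of 2 only applies when the fact `ansible_processor_count` resolves to 0, which can happen on hosts where Ansible cannot detect the core count.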


@ -25,13 +25,13 @@ packetbeat.interfaces.type: af_packet
# large enough for almost all networks and interface types. If you sniff on a # large enough for almost all networks and interface types. If you sniff on a
# physical network interface, the optimal setting is the MTU size. On virtual # physical network interface, the optimal setting is the MTU size. On virtual
# interfaces, however, it's safer to accept the default value. # interfaces, however, it's safer to accept the default value.
packetbeat.interfaces.snaplen: 65535 #packetbeat.interfaces.snaplen: 65535
# The maximum size of the shared memory buffer to use between the kernel and # The maximum size of the shared memory buffer to use between the kernel and
# user space. A bigger buffer usually results in lower CPU usage, but consumes # user space. A bigger buffer usually results in lower CPU usage, but consumes
# more memory. This setting is only available for the af_packet sniffer type. # more memory. This setting is only available for the af_packet sniffer type.
# The default is 30 MB. # The default is 30 MB.
packetbeat.interfaces.buffer_size_mb: 30 #packetbeat.interfaces.buffer_size_mb: 30
# Packetbeat automatically generates a BPF for capturing only the traffic on # Packetbeat automatically generates a BPF for capturing only the traffic on
# ports where it expects to find known protocols. Use this setting to tell # ports where it expects to find known protocols. Use this setting to tell
@ -99,8 +99,6 @@ packetbeat.protocols:
#transaction_timeout: 10s #transaction_timeout: 10s
- type: cassandra - type: cassandra
# Enable cassandra monitoring. Default: false
enabled: false
#Cassandra port for traffic monitoring. #Cassandra port for traffic monitoring.
ports: [9042] ports: [9042]
@ -134,7 +132,7 @@ packetbeat.protocols:
- type: dns - type: dns
# Enable DNS monitoring. Default: true # Enable DNS monitoring. Default: true
enabled: true #enabled: true
# Configure the ports where to listen for DNS traffic. You can disable # Configure the ports where to listen for DNS traffic. You can disable
# the DNS protocol by commenting out the list of ports. # the DNS protocol by commenting out the list of ports.
@ -164,6 +162,7 @@ packetbeat.protocols:
- type: http - type: http
# Enable HTTP monitoring. Default: true # Enable HTTP monitoring. Default: true
#enabled: true
{% set used_ports = [53, 443, 2049, 3306, 5432, 5672, 6379, 9042, 9090, 11211, 27017] %} {% set used_ports = [53, 443, 2049, 3306, 5432, 5672, 6379, 9042, 9090, 11211, 27017] %}
{% set ports = [] %} {% set ports = [] %}
{% for item in heartbeat_services %} {% for item in heartbeat_services %}
@ -173,7 +172,6 @@ packetbeat.protocols:
{% endif %} {% endif %}
{% endfor %} {% endfor %}
{% endfor %} {% endfor %}
enabled: true
# Configure the ports where to listen for HTTP traffic. You can disable # Configure the ports where to listen for HTTP traffic. You can disable
# the HTTP protocol by commenting out the list of ports. # the HTTP protocol by commenting out the list of ports.
@ -196,9 +194,22 @@ packetbeat.protocols:
send_all_headers: true send_all_headers: true
# The list of content types for which Packetbeat includes the full HTTP # The list of content types for which Packetbeat includes the full HTTP
# payload in the response field. # payload. If the request's or response's Content-Type matches any on this
# list, the full body will be included under the request or response field.
#include_body_for: [] #include_body_for: []
# The list of content types for which Packetbeat includes the full HTTP
# request payload.
#include_request_body_for: []
# The list of content types for which Packetbeat includes the full HTTP
# response payload.
#include_response_body_for: []
# Whether the body of a request must be decoded when a content-encoding
# or transfer-encoding has been applied.
#decode_body: true
# If the Cookie or Set-Cookie headers are sent, this option controls whether # If the Cookie or Set-Cookie headers are sent, this option controls whether
# they are split into individual values. # they are split into individual values.
#split_cookie: false #split_cookie: false
@ -226,7 +237,7 @@ packetbeat.protocols:
- type: memcache - type: memcache
# Enable memcache monitoring. Default: true # Enable memcache monitoring. Default: true
enabled: true #enabled: true
# Configure the ports where to listen for memcache traffic. You can disable # Configure the ports where to listen for memcache traffic. You can disable
# the Memcache protocol by commenting out the list of ports. # the Memcache protocol by commenting out the list of ports.
@ -275,11 +286,11 @@ packetbeat.protocols:
- type: mysql - type: mysql
# Enable mysql monitoring. Default: true # Enable mysql monitoring. Default: true
enabled: true #enabled: true
# Configure the ports where to listen for MySQL traffic. You can disable # Configure the ports where to listen for MySQL traffic. You can disable
# the MySQL protocol by commenting out the list of ports. # the MySQL protocol by commenting out the list of ports.
ports: [3306] ports: [3306,3307]
# If this option is enabled, the raw message of the request (`request` field) # If this option is enabled, the raw message of the request (`request` field)
# is sent to Elasticsearch. The default is false. # is sent to Elasticsearch. The default is false.
@ -440,15 +451,26 @@ packetbeat.protocols:
- type: tls - type: tls
# Enable TLS monitoring. Default: true # Enable TLS monitoring. Default: true
enabled: true #enabled: true
# Configure the ports where to listen for TLS traffic. You can disable # Configure the ports where to listen for TLS traffic. You can disable
# the TLS protocol by commenting out the list of ports. # the TLS protocol by commenting out the list of ports.
ports: [443] ports:
- 443 # HTTPS
- 993 # IMAPS
- 995 # POP3S
- 5223 # XMPP over SSL
- 8443
- 8883 # Secure MQTT
- 9243 # Elasticsearch
# List of hash algorithms to use to calculate certificates' fingerprints.
# Valid values are `sha1`, `sha256` and `md5`.
#fingerprints: [sha1]
# If this option is enabled, the client and server certificates and # If this option is enabled, the client and server certificates and
# certificate chains are sent to Elasticsearch. The default is true. # certificate chains are sent to Elasticsearch. The default is true.
send_certificates: true #send_certificates: true
# If this option is enabled, the raw certificates will be stored # If this option is enabled, the raw certificates will be stored
# in PEM format under the `raw` key. The default is false. # in PEM format under the `raw` key. The default is false.
@ -456,33 +478,17 @@ packetbeat.protocols:
#=========================== Monitored processes ============================== #=========================== Monitored processes ==============================
# Configure the processes to be monitored and how to find them. If a process is # Packetbeat can enrich events with information about the process associated
# monitored then Packetbeat attempts to use its name to fill in the `proc` and # with the socket that sent or received the packet if Packetbeat is monitoring
# `client_proc` fields. # traffic from the host machine. By default process enrichment is disabled.
# The processes can be found by searching their command line by a given string. # This feature works on Linux and Windows.
# packetbeat.procs.enabled: false
# Process matching is optional and can be enabled by uncommenting the following
# lines.
#
#packetbeat.procs:
# enabled: false
# monitored:
# - process: mysqld
# cmdline_grep: mysqld
#
# - process: pgsql
# cmdline_grep: postgres
#
# - process: nginx
# cmdline_grep: nginx
#
# - process: app
# cmdline_grep: gunicorn
# Uncomment the following if you want to ignore transactions created # If you want to ignore transactions created by the server on which the shipper
# by the server on which the shipper is installed. This option is useful # is installed you can enable this option. This option is useful to remove
# to remove duplicates if shippers are installed on multiple servers. # duplicates if shippers are installed on multiple servers. Default value is
#packetbeat.ignore_outgoing: true # false.
packetbeat.ignore_outgoing: false
#================================ General ====================================== #================================ General ======================================
@ -568,7 +574,7 @@ packetbeat.protocols:
# Sets the write buffer size. # Sets the write buffer size.
#buffer_size: 1MiB #buffer_size: 1MiB
# Maximum duration after which events are flushed, if the write buffer # Maximum duration after which events are flushed if the write buffer
# is not full yet. The default value is 1s. # is not full yet. The default value is 1s.
#flush.timeout: 1s #flush.timeout: 1s
@ -582,7 +588,7 @@ packetbeat.protocols:
#codec: cbor #codec: cbor
#read: #read:
# Reader flush timeout, waiting for more events to become available, so # Reader flush timeout, waiting for more events to become available, so
# to fill a complete batch, as required by the outputs. # to fill a complete batch as required by the outputs.
# If flush_timeout is 0, all available events are forwarded to the # If flush_timeout is 0, all available events are forwarded to the
# outputs immediately. # outputs immediately.
# The default value is 0s. # The default value is 0s.
@ -736,12 +742,15 @@ processors:
# # Set gzip compression level. # # Set gzip compression level.
# #compression_level: 0 # #compression_level: 0
# #
# # Configure escaping HTML symbols in strings.
# #escape_html: false
#
# # Optional protocol and basic auth credentials. # # Optional protocol and basic auth credentials.
# #protocol: "https" # #protocol: "https"
# #username: "elastic" # #username: "elastic"
# #password: "changeme" # #password: "changeme"
# #
# # Dictionary of HTTP parameters to pass within the url with index operations. # # Dictionary of HTTP parameters to pass within the URL with index operations.
# #parameters: # #parameters:
# #param1: value1 # #param1: value1
# #param2: value2 # #param2: value2
@ -752,19 +761,19 @@ processors:
# # Optional index name. The default is "packetbeat" plus date # # Optional index name. The default is "packetbeat" plus date
# # and generates [packetbeat-]YYYY.MM.DD keys. # # and generates [packetbeat-]YYYY.MM.DD keys.
# # In case you modify this pattern you must update setup.template.name and setup.template.pattern accordingly. # # In case you modify this pattern you must update setup.template.name and setup.template.pattern accordingly.
# #index: "packetbeat-%{[beat.version]}-%{+yyyy.MM.dd}" # #index: "packetbeat-%{[agent.version]}-%{+yyyy.MM.dd}"
# #
# # Optional ingest node pipeline. By default no pipeline will be used. # # Optional ingest node pipeline. By default no pipeline will be used.
# #pipeline: "" # #pipeline: ""
# #
# # Optional HTTP Path # # Optional HTTP path
# #path: "/elasticsearch" # #path: "/elasticsearch"
# #
# # Custom HTTP headers to add to each request # # Custom HTTP headers to add to each request
# #headers: # #headers:
# # X-My-Header: Contents of the header # # X-My-Header: Contents of the header
# #
# # Proxy server url # # Proxy server URL
# #proxy_url: http://proxy:3128 # #proxy_url: http://proxy:3128
# #
# # The number of times a particular Elasticsearch index operation is attempted. If # # The number of times a particular Elasticsearch index operation is attempted. If
@ -776,55 +785,64 @@ processors:
# # The default is 50. # # The default is 50.
# #bulk_max_size: 50 # #bulk_max_size: 50
# #
# # Configure http request timeout before failing an request to Elasticsearch. # # The number of seconds to wait before trying to reconnect to Elasticsearch
# # after a network error. After waiting backoff.init seconds, the Beat
# # tries to reconnect. If the attempt fails, the backoff timer is increased
# # exponentially up to backoff.max. After a successful connection, the backoff
# # timer is reset. The default is 1s.
# #backoff.init: 1s
#
# # The maximum number of seconds to wait before attempting to connect to
# # Elasticsearch after a network error. The default is 60s.
# #backoff.max: 60s
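The reconnect schedule these comments describe — start at `backoff.init`, grow exponentially up to `backoff.max`, reset after a successful connection — can be sketched as follows (an illustration only; the comments do not specify the growth factor, so a doubling factor is assumed here):

```python
def backoff_delays(init=1.0, maximum=60.0):
    """Yield reconnect delays in seconds: exponential growth from `init`,
    capped at `maximum`. A successful connection would reset the schedule
    by creating a fresh generator."""
    delay = init
    while True:
        yield delay
        delay = min(delay * 2.0, maximum)

delays = backoff_delays()
schedule = [next(delays) for _ in range(8)]
# 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0, 60.0
```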
#
# # Configure HTTP request timeout before failing a request to Elasticsearch.
# #timeout: 90 # #timeout: 90
# #
# # Use SSL settings for HTTPS. # # Use SSL settings for HTTPS.
# #ssl.enabled: true # #ssl.enabled: true
# #
# # Configure SSL verification mode. If `none` is configured, all server hosts # # Configure SSL verification mode. If `none` is configured, all server hosts
# # and certificates will be accepted. In this mode, SSL based connections are # # and certificates will be accepted. In this mode, SSL-based connections are
# # susceptible to man-in-the-middle attacks. Use only for testing. Default is # # susceptible to man-in-the-middle attacks. Use only for testing. Default is
# # `full`. # # `full`.
# #ssl.verification_mode: full # #ssl.verification_mode: full
# #
# # List of supported/valid TLS versions. By default all TLS versions 1.0 up to # # List of supported/valid TLS versions. By default all TLS versions from 1.0 up to
# # 1.2 are enabled. # # 1.2 are enabled.
# #ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2] # #ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
# #
# # SSL configuration. By default is off.
# # List of root certificates for HTTPS server verifications # # List of root certificates for HTTPS server verifications
# #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"] # #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# #
# # Certificate for SSL client authentication # # Certificate for SSL client authentication
# #ssl.certificate: "/etc/pki/client/cert.pem" # #ssl.certificate: "/etc/pki/client/cert.pem"
# #
# # Client Certificate Key # # Client certificate key
# #ssl.key: "/etc/pki/client/cert.key" # #ssl.key: "/etc/pki/client/cert.key"
# #
# # Optional passphrase for decrypting the Certificate Key. # # Optional passphrase for decrypting the certificate key.
# #ssl.key_passphrase: '' # #ssl.key_passphrase: ''
# #
# # Configure cipher suites to be used for SSL connections # # Configure cipher suites to be used for SSL connections
# #ssl.cipher_suites: [] # #ssl.cipher_suites: []
# #
# # Configure curve types for ECDHE based cipher suites # # Configure curve types for ECDHE-based cipher suites
# #ssl.curve_types: [] # #ssl.curve_types: []
# #
# # Configure what types of renegotiation are supported. Valid options are # # Configure what types of renegotiation are supported. Valid options are
# # never, once, and freely. Default is never. # # never, once, and freely. Default is never.
# #ssl.renegotiation: never # #ssl.renegotiation: never
#
#----------------------------- Logstash output --------------------------------- #----------------------------- Logstash output ---------------------------------
{{ elk_macros.output_logstash(inventory_hostname, logstash_data_hosts, ansible_processor_count) }} {{ elk_macros.output_logstash(inventory_hostname, logstash_data_hosts, ansible_processor_count) }}
#------------------------------- Kafka output ---------------------------------- #------------------------------- Kafka output ----------------------------------
#output.kafka: #output.kafka:
# Boolean flag to enable or disable the output module. # Boolean flag to enable or disable the output module.
#enabled: true #enabled: true
# The list of Kafka broker addresses from where to fetch the cluster metadata. # The list of Kafka broker addresses from which to fetch the cluster metadata.
# The cluster metadata contain the actual Kafka brokers events are published # The cluster metadata contain the actual Kafka brokers events are published
# to. # to.
#hosts: ["localhost:9092"] #hosts: ["localhost:9092"]
@ -833,7 +851,7 @@ processors:
# using any event field. To set the topic from document type use `%{[type]}`. # using any event field. To set the topic from document type use `%{[type]}`.
#topic: beats #topic: beats
# The Kafka event key setting. Use format string to create unique event key. # The Kafka event key setting. Use format string to create a unique event key.
# By default no event key will be generated. # By default no event key will be generated.
#key: '' #key: ''
@ -859,30 +877,33 @@ processors:
# Configure JSON encoding # Configure JSON encoding
#codec.json: #codec.json:
# Pretty print json event # Pretty-print JSON event
#pretty: false #pretty: false
# Configure escaping html symbols in strings. # Configure escaping HTML symbols in strings.
#escape_html: true #escape_html: false
# Metadata update configuration. Metadata do contain leader information # Metadata update configuration. Metadata contains leader information
# deciding which broker to use when publishing. # used to decide which broker to use when publishing.
#metadata: #metadata:
# Max metadata request retry attempts when cluster is in middle of leader # Max metadata request retry attempts when cluster is in middle of leader
# election. Defaults to 3 retries. # election. Defaults to 3 retries.
#retry.max: 3 #retry.max: 3
# Waiting time between retries during leader elections. Default is 250ms. # Wait time between retries during leader elections. Default is 250ms.
#retry.backoff: 250ms #retry.backoff: 250ms
# Refresh metadata interval. Defaults to every 10 minutes. # Refresh metadata interval. Defaults to every 10 minutes.
#refresh_frequency: 10m #refresh_frequency: 10m
# Strategy for fetching the topics metadata from the broker. Default is true.
#full: true
# The number of concurrent load-balanced Kafka output workers. # The number of concurrent load-balanced Kafka output workers.
#worker: 1 #worker: 1
# The number of times to retry publishing an event after a publishing failure. # The number of times to retry publishing an event after a publishing failure.
# After the specified number of retries, the events are typically dropped. # After the specified number of retries, events are typically dropped.
# Some Beats, such as Filebeat, ignore the max_retries setting and retry until # Some Beats, such as Filebeat, ignore the max_retries setting and retry until
# all events are published. Set max_retries to a value less than 0 to retry # all events are published. Set max_retries to a value less than 0 to retry
# until all events are published. The default is 3. # until all events are published. The default is 3.
@ -930,7 +951,7 @@ processors:
# purposes. The default is "beats". # purposes. The default is "beats".
#client_id: beats #client_id: beats
# Enable SSL support. SSL is automatically enabled, if any SSL setting is set. # Enable SSL support. SSL is automatically enabled if any SSL setting is set.
#ssl.enabled: true #ssl.enabled: true
# Optional SSL configuration options. SSL is off by default. # Optional SSL configuration options. SSL is off by default.
@ -943,7 +964,7 @@ processors:
# `full`. # `full`.
#ssl.verification_mode: full #ssl.verification_mode: full
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to # List of supported/valid TLS versions. By default all TLS versions from 1.0 up to
# 1.2 are enabled. # 1.2 are enabled.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2] #ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
@ -959,7 +980,7 @@ processors:
# Configure cipher suites to be used for SSL connections # Configure cipher suites to be used for SSL connections
#ssl.cipher_suites: [] #ssl.cipher_suites: []
# Configure curve types for ECDHE based cipher suites # Configure curve types for ECDHE-based cipher suites
#ssl.curve_types: [] #ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are # Configure what types of renegotiation are supported. Valid options are
@ -976,23 +997,19 @@ processors:
# Pretty print json event # Pretty print json event
#pretty: false #pretty: false
# Configure escaping html symbols in strings. # Configure escaping HTML symbols in strings.
#escape_html: true #escape_html: false
# The list of Redis servers to connect to. If load balancing is enabled, the # The list of Redis servers to connect to. If load-balancing is enabled, the
# events are distributed to the servers in the list. If one server becomes # events are distributed to the servers in the list. If one server becomes
# unreachable, the events are distributed to the reachable servers only. # unreachable, the events are distributed to the reachable servers only.
#hosts: ["localhost:6379"] #hosts: ["localhost:6379"]
# The Redis port to use if hosts does not contain a port number. The default
# is 6379.
#port: 6379
# The name of the Redis list or channel the events are published to. The # The name of the Redis list or channel the events are published to. The
# default is packetbeat. # default is packetbeat.
#key: packetbeat #key: packetbeat
# The password to authenticate with. The default is no authentication. # The password to authenticate to Redis with. The default is no authentication.
#password: #password:
# The Redis database number where the events are published. The default is 0. # The Redis database number where the events are published. The default is 0.
@ -1093,11 +1110,11 @@ processors:
# Configure JSON encoding # Configure JSON encoding
#codec.json: #codec.json:
# Pretty print json event # Pretty-print JSON event
#pretty: false #pretty: false
# Configure escaping html symbols in strings. # Configure escaping HTML symbols in strings.
#escape_html: true #escape_html: false
# Path to the directory where to save the generated files. The option is # Path to the directory where to save the generated files. The option is
# mandatory. # mandatory.
@ -1128,11 +1145,11 @@ processors:
# Configure JSON encoding # Configure JSON encoding
#codec.json: #codec.json:
# Pretty print json event # Pretty-print JSON event
#pretty: false #pretty: false
# Configure escaping html symbols in strings. # Configure escaping HTML symbols in strings.
#escape_html: true #escape_html: false
#================================= Paths ====================================== #================================= Paths ======================================
@ -1167,10 +1184,29 @@ processors:
#============================== Dashboards ===================================== #============================== Dashboards =====================================
{{ elk_macros.setup_dashboards('packetbeat') }} {{ elk_macros.setup_dashboards('packetbeat') }}
#=============================== Template ====================================== #============================== Template =====================================
{{ elk_macros.setup_template('packetbeat', inventory_hostname, data_nodes, elasticsearch_number_of_replicas) }} {{ elk_macros.setup_template('packetbeat', inventory_hostname, data_nodes, elasticsearch_number_of_replicas) }}
#================================ Kibana ======================================= #============================== Setup ILM =====================================
# Configure Index Lifecycle Management. Index Lifecycle Management creates a
# write alias and adds additional settings to the template.
# The elasticsearch.output.index setting will be replaced with the write alias
# if ILM is enabled.
# Enable ILM support. Valid values are true, false, and auto. The beat will
# detect availability of Index Lifecycle Management in Elasticsearch and enable
# or disable ILM support.
#setup.ilm.enabled: auto
# Configure the ILM write alias name.
#setup.ilm.rollover_alias: "packetbeat"
# Configure rollover index pattern.
#setup.ilm.pattern: "{now/d}-000001"
#============================== Kibana =====================================
{% if (groups['kibana'] | length) > 0 %} {% if (groups['kibana'] | length) > 0 %}
{{ elk_macros.setup_kibana(hostvars[groups['kibana'][0]]['ansible_host'] ~ ':' ~ kibana_port) }} {{ elk_macros.setup_kibana(hostvars[groups['kibana'][0]]['ansible_host'] ~ ':' ~ kibana_port) }}
{% endif %} {% endif %}
@ -1178,7 +1214,7 @@ processors:
#================================ Logging ====================================== #================================ Logging ======================================
{{ elk_macros.beat_logging('packetbeat') }} {{ elk_macros.beat_logging('packetbeat') }}
#============================== Xpack Monitoring =============================== #============================== Xpack Monitoring =====================================
{{ elk_macros.xpack_monitoring_elasticsearch(inventory_hostname, elasticsearch_data_hosts, ansible_processor_count) }} {{ elk_macros.xpack_monitoring_elasticsearch(inventory_hostname, elasticsearch_data_hosts, ansible_processor_count) }}
#================================ HTTP Endpoint ====================================== #================================ HTTP Endpoint ======================================
@ -1200,3 +1236,8 @@ processors:
# Enable or disable seccomp system call filtering on Linux. Default is enabled. # Enable or disable seccomp system call filtering on Linux. Default is enabled.
#seccomp.enabled: true #seccomp.enabled: true
#================================= Migration ==================================
# This allows enabling 6.7 migration aliases # This allows enabling 6.7 migration aliases
#migration.6_to_7.enabled: false


@ -18,6 +18,6 @@ elastic_repo_distro_packages:
# elk apt repo # elk apt repo
elastic_repo: elastic_repo:
repo: 'deb https://artifacts.elastic.co/packages/6.x/apt stable main' repo: 'deb https://artifacts.elastic.co/packages/7.x/apt stable main'
state: "{{ ((elk_package_state | default('present')) == 'absent') | ternary('absent', 'present') }}" state: "{{ ((elk_package_state | default('present')) == 'absent') | ternary('absent', 'present') }}"
key_url: "https://artifacts.elastic.co/GPG-KEY-elasticsearch" key_url: "https://artifacts.elastic.co/GPG-KEY-elasticsearch"


@ -1,118 +0,0 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
elastic_index_retention_algorithm: default
### Elastic curator variables
## If any of these retention policy options are undefined a dynamic fact will be
## generated.
## These options are all in days.
# elastic_logstash_retention: 1
# elastic_apm_retention: 1
# elastic_auditbeat_retention: 1
# elastic_filebeat_retention: 1
# elastic_heartbeat_retention: 1
# elastic_journalbeat_retention: 1
# elastic_metricbeat_retention: 1
# elastic_packetbeat_retention: 1
# elastic_skydive_retention: 1
## These options are all in megabytes.
# elastic_logstash_size: 1024
# elastic_apm_size: 1024
# elastic_auditbeat_size: 1024
# elastic_filebeat_size: 1024
# elastic_heartbeat_size: 1024
# elastic_journalbeat_size: 1024
# elastic_metricbeat_size: 1024
# elastic_packetbeat_size: 1024
# elastic_skydive_size: 1024
## When a static retention policy option is not defined these options will be
## used for dynamic fact generation.
##
## Facts will be generated for the general retention using the total available
## storage from the ES data nodes, subtracting 25%. Using the weights, each
## index will be given a percentage of the total available storage. Indexes with
## higher weights are expected to use more storage. The list of hosts in a given
## index will be used to determine the number of days data can exist within an
## index before it's pruned.
## Example:
# es cluster has 4TiB of storage
# filebeat is deployed to 100 hosts
# filebeat has a weight of 10
# metricbeat is deployed to 125 hosts
# metricbeat has a weight of 2
#
# es storage in MiB: 4194304
# hosts and weighting total: (100 + 125) x (10 + 2) = 2700
# filebeat pct: (100 x 10) / 2700 = 0.37
# filebeat storage allowed: 0.37 * 4194304 = 1551892.48 MiB
# filebeat days allowed: 1551892.48 / (100 * 1024) = 15.1552 Days
# filebeat result: 15 days of retention or 1.5TiB of storage, whichever comes first
# metricbeat pct: (125 x 2) / 2700 = 0.09
# metricbeat storage allowed: 0.09 * 4194304 = 377487.36 MiB
# metricbeat days allowed: 377487.36 / (125 * 1024) = 2.94912 Days
# metricbeat result: 2 days of retention or 368GiB of storage, whichever comes first
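The worked example above can be reproduced with a short script (a sketch of the calculation only — the removed role derived these numbers as Ansible local facts, and the function and names here are illustrative):

```python
def retention(total_storage_mib, indexes):
    """Map index name -> (host_count, weight) to
    index name -> (allowed_storage_mib, allowed_days)."""
    # "hosts and weighting total": (sum of hosts) x (sum of weights)
    total = (sum(h for h, _ in indexes.values()) *
             sum(w for _, w in indexes.values()))
    allowed = {}
    for name, (hosts, weight) in indexes.items():
        # Percentage rounded to two places, matching the worked figures above.
        pct = round((hosts * weight) / total, 2)
        storage_mib = pct * total_storage_mib
        # Assumes roughly 1024 MiB of index data per host per day.
        days = int(storage_mib / (hosts * 1024))
        allowed[name] = (storage_mib, days)
    return allowed

example = retention(4194304, {"filebeat": (100, 10), "metricbeat": (125, 2)})
# filebeat: ~1551892 MiB and 15 days; metricbeat: ~377487 MiB and 2 days
```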
elastic_beat_retention_policy_hosts:
logstash:
make_index: true
weight: 1
hosts: "{{ groups['elastic-logstash'] | default([]) }}"
apm:
make_index: true
timeFieldName: '@timestamp'
weight: 1
hosts: "{{ groups['apm-server'] | default([]) }}"
auditbeat:
timeFieldName: '@timestamp'
weight: 10
hosts: "{{ groups['hosts'] | default([]) }}"
filebeat:
timeFieldName: '@timestamp'
weight: 10
hosts: "{{ groups['hosts'] | default([]) }}"
syslog:
make_index: true
weight: 1
hosts: "{{ groups['hosts'] | default([]) }}"
heartbeat:
timeFieldName: '@timestamp'
weight: 1
hosts: "{{ groups['kibana'][:3] | default([]) }}"
journalbeat:
timeFieldName: '@timestamp'
weight: 3
hosts: "{{ groups['hosts'] | default([]) }}"
metricbeat:
timeFieldName: '@timestamp'
weight: 2
hosts: "{{ groups['all'] | default([]) }}"
packetbeat:
timeFieldName: '@timestamp'
weight: 1
hosts: "{{ groups['hosts'] | default([]) }}"
monitorstack:
timeFieldName: '@timestamp'
weight: 1
hosts: "{{ (groups['nova_compute'] | default([])) | union((groups['utility_all'] | default([]))) | union((groups['memcached_all'] | default([]))) }}"
skydive:
weight: 1
hosts: "{{ (((groups['skydive_analyzers'] | default([])) | length) > 0) | ternary((groups['hosts'] | default([])), []) }}"
# Refresh the elasticsearch retention policy local facts.
elastic_retention_refresh: false


@ -1,34 +0,0 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
galaxy_info:
author: OpenStack
description: Elastic v6.x retention role
company: Rackspace
license: Apache2
min_ansible_version: 2.5
platforms:
- name: Ubuntu
versions:
- trusty
- xenial
- bionic
categories:
- cloud
- development
- elasticsearch
- elastic-stack
dependencies:
- role: elastic_data_hosts


@ -1,104 +0,0 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Ensure local facts directory exists
file:
dest: "/etc/ansible/facts.d"
state: directory
group: "root"
owner: "root"
mode: "0755"
recurse: no
- name: Initialize local facts
ini_file:
dest: "/etc/ansible/facts.d/elastic.fact"
section: "retention"
option: cacheable
value: true
- name: Refresh local facts
setup:
filter: ansible_local
gather_subset: "!all"
tags:
- always
- name: Retention storage block
block:
- name: Query es storage
uri:
url: "http://{{ coordination_nodes[0] }}/_nodes/{{ (data_nodes | map('extract', hostvars, 'ansible_host') | list) | join(',') }}/stats/fs"
method: GET
register: elk_data
environment:
no_proxy: "{{ coordination_nodes[0].split(':')[0] }}"
until:
- elk_data is success and elk_data['json'] is defined
retries: 5
delay: 30
run_once: true
- name: Set retention keys fact
set_fact:
es_storage_json: "{{ elk_data['json'] }}"
- name: Load retention algo variables
include_vars: "calculate_index_retention_{{ elastic_index_retention_algorithm }}.yml"
tags:
- always
- name: Set storage fact
ini_file:
dest: "/etc/ansible/facts.d/elastic.fact"
section: "retention"
option: "cluster_nodes"
value: "{{ groups['elastic-logstash'] | length }}"
- name: Set retention policy keys fact
ini_file:
dest: "/etc/ansible/facts.d/elastic.fact"
section: "retention"
option: "elastic_beat_retention_policy_keys"
value: "{{ elastic_beat_retention_policy_hosts.keys() | list | sort }}"
- name: Set size fact
ini_file:
dest: "/etc/ansible/facts.d/elastic.fact"
section: "retention"
option: "elastic_{{ item.key }}_size"
value: "{{ item.value }}"
with_dict: "{{ es_storage_per_index }}"
- name: Set retention fact
ini_file:
dest: "/etc/ansible/facts.d/elastic.fact"
section: "retention"
option: "elastic_{{ item.key }}_retention"
value: "{{ item.value }}"
with_dict: "{{ es_days_per_index }}"
- name: Refresh local facts
setup:
filter: ansible_local
gather_subset: "!all"
tags:
- always
when:
- (ansible_local['elastic']['retention']['cluster_nodes'] is undefined) or
((groups['elastic-logstash'] | length) != (ansible_local['elastic']['retention']['cluster_nodes'] | int)) or
((ansible_local['elastic']['retention']['elastic_beat_retention_policy_keys'] is defined) and
((ansible_local['elastic']['retention']['elastic_beat_retention_policy_keys'] | from_yaml) != (elastic_beat_retention_policy_hosts.keys() | list | sort))) or
(elastic_retention_refresh | bool)
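The `when:` condition above gates the whole retention block: facts are only recomputed when no cache exists yet, when the elastic-logstash group size changes, when the cached policy keys differ from the current ones, or when a refresh is forced. A minimal Python sketch of that decision, with hypothetical names (the real logic lives in Ansible's local-fact cache):

```python
def needs_retention_refresh(local_facts, cluster_size, policy_keys, force=False):
    """Decide whether cached retention facts must be recomputed.

    Mirrors the playbook's `when:` clause: refresh when no cache exists,
    when the cluster size changed, when the cached retention policy keys
    differ from the current ones, or when a refresh is forced.
    """
    retention = local_facts.get("retention", {})
    if "cluster_nodes" not in retention:
        return True  # never cached before
    if cluster_size != int(retention["cluster_nodes"]):
        return True  # cluster grew or shrank
    cached_keys = retention.get("elastic_beat_retention_policy_keys")
    if cached_keys is not None and cached_keys != sorted(policy_keys):
        return True  # retention policy changed
    return bool(force)
```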


@ -1,58 +0,0 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Set available storage fact. This takes the total amount of storage found
# within the data nodes of the elasticsearch cluster and converts bytes to
# megabytes.
es_total_available_storage: "{{ ((es_storage_json['nodes'].values() | list) | map(attribute='fs.total.total_in_bytes') | list | sum) // 1024 // 1024 }}"
# Set assumed buffer storage fact. This will result in 25% of the total
# available storage.
es_assumed_buffer_storage: "{{ ((es_total_available_storage | int) * 0.25) | round | int }}"
# Set usable buffer storage fact(s). This is the total storage minus the buffer.
es_usable_buffer_storage: "{{ (es_total_available_storage | int) - (es_assumed_buffer_storage | int) }}"
# This function sums the weighted values of all hosts in the retention
# policy. Once the policy is set, that total is carved up into individual
# percentages of the usable storage remaining after the buffer is
# calculated.
es_storage_per_index: |-
{%- set es_hash = {} %}
{%- set total_weight = (elastic_beat_retention_policy_hosts.values() | list | map(attribute='weight') | list | sum) %}
{%- set host_count = (elastic_beat_retention_policy_hosts.values() | list | map(attribute='hosts') | list | map('flatten') | list | length) %}
{%- set total_values = (total_weight | int) * (host_count | int) %}
{%- for key, value in elastic_beat_retention_policy_hosts.items() %}
{%- set value_pct = (((value.weight | int) * (value.hosts | length)) / (total_values | int)) %}
{%- set value_total = ((value_pct | float) * (es_usable_buffer_storage | int)) %}
{%- set _ = es_hash.__setitem__(key, value_total | int) %}
{%- endfor %}
{{ es_hash }}
# The assumed number of days an index will be retained is based on the size
# of the given index. With the sizes figured out in the function above, this
# function divides each retention size by a constant of 1024 and the number
# of hosts within a given collector segment.
es_days_per_index: |-
{%- set es_hash = {} %}
{%- for key, value in elastic_beat_retention_policy_hosts.items() %}
{%- if (es_storage_per_index[key] | int) > 0 %}
{%- set value_days = ((es_storage_per_index[key] | int) // ((value.hosts | length) * 1024)) %}
{%- set _ = es_hash.__setitem__(key, ((value_days | int) > 0) | ternary(value_days, 1) ) %}
{%- else %}
{%- set _ = es_hash.__setitem__(key, 1) %}
{%- endif %}
{%- endfor %}
{{ es_hash }}
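The two Jinja functions above can be modeled in a few lines of Python. This is a simplified sketch under the assumption that each index's share is proportional to its weight multiplied by its host count; it normalizes the shares directly rather than reproducing the template's exact intermediate variables:

```python
def storage_per_index(policy, usable_storage_mb):
    """Carve usable storage into per-index shares, weighted by
    each entry's weight multiplied by its number of hosts."""
    weights = {k: v["weight"] * len(v["hosts"]) for k, v in policy.items()}
    total = sum(weights.values())
    return {k: int(usable_storage_mb * w / total) for k, w in weights.items()}


def days_per_index(policy, sizes_mb):
    """Convert a per-index storage share into retention days by
    dividing by 1024 MB per host, with a floor of one day."""
    days = {}
    for key, value in policy.items():
        share = sizes_mb.get(key, 0)
        if share > 0:
            estimate = share // (len(value["hosts"]) * 1024)
            days[key] = estimate if estimate > 0 else 1
        else:
            days[key] = 1
    return days
```

For example, two equally weighted beats collecting from two hosts each would split 200 GiB of usable storage evenly and end up with the same retention window.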


@ -14,3 +14,4 @@
# limitations under the License. # limitations under the License.
elastic_allow_rollup_purge: false elastic_allow_rollup_purge: false
days_until_rollup: 15


@ -30,5 +30,3 @@ galaxy_info:
- development - development
- elasticsearch - elasticsearch
- elastic-stack - elastic-stack
dependencies:
- role: elastic_retention


@ -40,21 +40,6 @@
- name: Create rollup block - name: Create rollup block
block: block:
- name: Set min retention days fact
set_fact:
min_days_until_rollup: |-
{% set index_retention = [] %}
{% for item in ansible_play_hosts %}
{% set _ = index_retention.append(ansible_local['elastic']['retention']['elastic_' + index_name + '_retention'] | int) %}
{% endfor %}
{{ index_retention | min }}
run_once: true
- name: Set retention days fact
set_fact:
days_until_rollup: "{{ ((min_days_until_rollup | int) > 1) | ternary(((min_days_until_rollup | int) - 1), min_days_until_rollup) }}"
run_once: true
- name: Create rollup job - name: Create rollup job
uri: uri:
url: "{{ item.url }}" url: "{{ item.url }}"


@ -1,74 +1,102 @@
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster ----------------------------------- # ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: {{ cluster_name }} cluster.name: {{ cluster_name }}
#
# ------------------------------------ Node ------------------------------------ # ------------------------------------ Node ------------------------------------
node.name: {{ ansible_nodename }} #
# node.rack: r1 # Use a descriptive name for the node:
#
# ansible_nodename may be appropriate for your instance
# If you're having issues with bootstrap skipping, check this.
node.name: {{ inventory_hostname }}
#
# Add custom attributes to the node:
# Set to true to enable machine learning on the node. # Set to true to enable machine learning on the node.
node.ml: false node.ml: false
#
# ----------------------------------- Paths ------------------------------------ # ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma): # Path to directory where to store the data (separate multiple locations by comma):
# #
# path.data: /path/to/data
path.data: /var/lib/elasticsearch path.data: /var/lib/elasticsearch
# #
# Path to log files: # Path to log files:
# #
#
# Path to log files:
#
# path.logs: /path/to/logs
#path.logs: /var/lib/elasticsearch/logs/
path.logs: /var/log/elasticsearch/ path.logs: /var/log/elasticsearch/
# #
# Path to shared filesystem repos # Path to shared filesystem repos
# #
# path.repo: ["/mount/backups", "/mount/longterm_backups"]
#
{% if elastic_shared_fs_repos is defined and elastic_shared_fs_repos|length > 0 %} {% if elastic_shared_fs_repos is defined and elastic_shared_fs_repos|length > 0 %}
path.repo: {{ elastic_shared_fs_repos | json_query("[*].path") | to_json }} path.repo: {{ elastic_shared_fs_repos | json_query("[*].path") | to_json }}
{% endif %} {% endif %}
#
# Set the global default index store. More information on these settings can be # Set the global default index store. More information on these settings can be
# found here: # found here:
# <https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-store.html> # <https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-store.html>
#
index.store.type: niofs index.store.type: niofs
#
# ----------------------------------- Memory ----------------------------------- # ----------------------------------- Memory -----------------------------------
# #
# Lock the memory on startup: # Lock the memory on startup:
# #
bootstrap.memory_lock: {{ elastic_memory_lock }} bootstrap.memory_lock: {{ elastic_memory_lock }}
# #
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory # Make sure that the heap size is set to about half the memory available
# available on the system and that the owner of the process is allowed to use this limit. # on the system and that the owner of the process is allowed to use this
# limit.
# #
# Elasticsearch performs poorly when the system is swapping the memory. # Elasticsearch performs poorly when the system is swapping the memory.
# #
# ---------------------------------- Network ----------------------------------- # ---------------------------------- Network -----------------------------------
# #
# Set the bind address to a specific IP (IPv4 or IPv6): # Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: ["127.0.0.1", "{{ ansible_host }}", "{{ ansible_hostname }}"] network.host: ["127.0.0.1", "{{ ansible_host }}", "{{ ansible_hostname }}"]
{% if elasticsearch_publish_host is defined %} {% if elasticsearch_publish_host is defined %}
network.publish_host: "{{ elasticsearch_publish_host }}" network.publish_host: "{{ elasticsearch_publish_host }}"
{% endif %} {% endif %}
#
# Set a custom port for HTTP: # Set a custom port for HTTP:
#
http.port: {{ elastic_port }} http.port: {{ elastic_port }}
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ---------------------------------- # --------------------------------- Discovery ----------------------------------
# #
# Pass an initial list of hosts to perform discovery when new node is started: # Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"] # The default list of hosts is ["127.0.0.1", "[::1]"]
# #
# Node definitions can be seen here: discovery.seed_hosts: {{ zen_nodes | to_json }}
#<https://www.elastic.co/guide/en/elasticsearch/reference/6.2/modules-node.html> #
discovery.zen.ping.unicast.hosts: {{ zen_nodes | to_json }} # Bootstrap the cluster using an initial set of master-eligible nodes:
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1): #
discovery.zen.minimum_master_nodes: {{ elasticsearch_master_node_count | default(((master_node_count | int) // 2) + 1) }} cluster.initial_master_nodes: {{ master_nodes | to_json }}
#
# For more information, consult the discovery and cluster formation module documentation.
#
# The first set of nodes in the master_node_count are marked as such # The first set of nodes in the master_node_count are marked as such
#
node.master: {{ elasticsearch_node_master | default(master_node) }} node.master: {{ elasticsearch_node_master | default(master_node) }}
# Every node in the master list and every other node after will be a data node # Every node in the master list and every other node after will be a data node
#
node.data: {{ elasticsearch_node_data | default(data_node) }} node.data: {{ elasticsearch_node_data | default(data_node) }}
#
# Ingest nodes can execute pre-processing pipelines. To override automatic # Ingest nodes can execute pre-processing pipelines. To override automatic
# determination, the option `elasticsearch_node_ingest` can be defined as a # determination, the option `elasticsearch_node_ingest` can be defined as a
# Boolean which will enable or disable ingest nodes. When using automatic # Boolean which will enable or disable ingest nodes. When using automatic
@ -76,16 +104,14 @@ node.data: {{ elasticsearch_node_data | default(data_node) }}
# #
# NOTE(cloudnull): The use of "search remote connect" will follow the enablement # NOTE(cloudnull): The use of "search remote connect" will follow the enablement
# of ingest nodes. # of ingest nodes.
#
{% if elasticsearch_node_ingest is defined %} {% if elasticsearch_node_ingest is defined %}
node.ingest: {{ elasticsearch_node_ingest }} node.ingest: {{ elasticsearch_node_ingest }}
search.remote.connect: {{ elasticsearch_node_ingest }} cluster.remote.connect: {{ elasticsearch_node_ingest }}
{% else %} {% else %}
node.ingest: {{ data_node }} node.ingest: {{ data_node }}
search.remote.connect: {{ data_node }} cluster.remote.connect: {{ data_node }}
{% endif %} {% endif %}
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html>
# #
# ---------------------------------- Gateway ----------------------------------- # ---------------------------------- Gateway -----------------------------------
# #
@ -93,15 +119,10 @@ search.remote.connect: {{ data_node }}
# #
gateway.recover_after_nodes: {{ elasticsearch_master_node_count | default(((master_node_count | int) // 2) + 1) }} gateway.recover_after_nodes: {{ elasticsearch_master_node_count | default(((master_node_count | int) // 2) + 1) }}
# #
# For more information, see the documentation at: # For more information, consult the gateway module documentation.
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-gateway.html>
# #
# ---------------------------------- Various ----------------------------------- # ---------------------------------- Various -----------------------------------
# #
# Disable starting multiple nodes on a single system:
#
# node.max_local_storage_nodes: 1
#
# Require explicit names when deleting indices: # Require explicit names when deleting indices:
# #
action.destructive_requires_name: true action.destructive_requires_name: true
@ -111,8 +132,6 @@ action.destructive_requires_name: true
# Thread pool settings. For more on this see the documentation at: # Thread pool settings. For more on this see the documentation at:
# <https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-threadpool.html> # <https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-threadpool.html>
thread_pool: thread_pool:
index:
queue_size: {{ (processors | int) * 256 }}
get: get:
queue_size: {{ (processors | int) * 256 }} queue_size: {{ (processors | int) * 256 }}
write: write:
@ -139,8 +158,9 @@ indices.recovery.max_bytes_per_sec: {{ elasticserch_interface_speed }}mb
# ---------------------------------- X-Pack ------------------------------------ # ---------------------------------- X-Pack ------------------------------------
# X-Pack Monitoring # X-Pack Monitoring
# https://www.elastic.co/guide/en/elasticsearch/reference/6.3/monitoring-settings.html #
xpack.monitoring.collection.enabled: true xpack.monitoring.collection.enabled: true
xpack.monitoring.collection.interval: 30s xpack.monitoring.collection.interval: 30s
# Set to true to enable machine learning on the node. # Set to true to enable machine learning on the node.
xpack.ml.enabled: false xpack.ml.enabled: false


@ -13,6 +13,4 @@
- import_playbook: installElastic.yml - import_playbook: installElastic.yml
- import_playbook: installLogstash.yml - import_playbook: installLogstash.yml
- import_playbook: installCurator.yml
- import_playbook: installKibana.yml - import_playbook: installKibana.yml
- import_playbook: installAPMserver.yml


@ -107,6 +107,9 @@ output.logstash:
# Set gzip compression level. # Set gzip compression level.
compression_level: 3 compression_level: 3
# Configure escaping HTML symbols in strings.
#escape_html: false
# Optional maximum time to live for a connection to Logstash, after which the # Optional maximum time to live for a connection to Logstash, after which the
# connection will be re-established. A value of `0s` (the default) will # connection will be re-established. A value of `0s` (the default) will
# disable this feature. # disable this feature.
@ -114,10 +117,10 @@ output.logstash:
# Not yet supported for async connections (i.e. with the "pipelining" option set) # Not yet supported for async connections (i.e. with the "pipelining" option set)
#ttl: 30s #ttl: 30s
# Optional load balance the events between the Logstash hosts. Default is false. # Optionally load-balance events between Logstash hosts. Default is false.
loadbalance: true loadbalance: true
# Number of batches to be sent asynchronously to logstash while processing # Number of batches to be sent asynchronously to Logstash while processing
# new batches. # new batches.
pipelining: 2 pipelining: 2
@ -126,33 +129,30 @@ output.logstash:
# if no error is encountered. # if no error is encountered.
slow_start: true slow_start: true
# The maximum number of events to bulk in a single Logstash request. The # The number of seconds to wait before trying to reconnect to Logstash
# default is the number of cores multiplied by the number of threads, # after a network error. After waiting backoff.init seconds, the Beat
# the resultant is then multiplied again by 128 which results in a the defined # tries to reconnect. If the attempt fails, the backoff timer is increased
# bulk max size. If the Beat sends single events, the events are collected # exponentially up to backoff.max. After a successful connection, the backoff
# into batches. If the Beat publishes a large batch of events (larger than # timer is reset. The default is 1s.
# the value specified by bulk_max_size), the batch is split. Specifying a #backoff.init: 1s
# larger batch size can improve performance by lowering the overhead of
# sending events. However big batch sizes can also increase processing times,
# which might result in API errors, killed connections, timed-out publishing
# requests, and, ultimately, lower throughput. Setting bulk_max_size to values
# less than or equal to 0 disables the splitting of batches. When splitting
# is disabled, the queue decides on the number of events to be contained in a
# batch.
bulk_max_size: {{ (processors | int) * 128 }}
{% if named_index is defined %} # The maximum number of seconds to wait before attempting to connect to
# Optional index name. The default index name is set to {{ named_index }} # Logstash after a network error. The default is 60s.
#backoff.max: 60s
# Optional index name. The default index name is set to journalbeat
# in all lowercase. # in all lowercase.
{% if named_index is defined %}
index: '{{ named_index }}' index: '{{ named_index }}'
{% endif %} {% endif %}
# SOCKS5 proxy server URL # SOCKS5 proxy server URL
#proxy_url: socks5://user:password@socks5-server:2233 #proxy_url: socks5://user:password@socks5-server:2233
# Resolve names locally when using a proxy server. Defaults to false. # Resolve names locally when using a proxy server. Defaults to false.
#proxy_use_local_resolver: false #proxy_use_local_resolver: false
# Enable SSL support. SSL is automatically enabled, if any SSL setting is set. # Enable SSL support. SSL is automatically enabled if any SSL setting is set.
#ssl.enabled: true #ssl.enabled: true
# Configure SSL verification mode. If `none` is configured, all server hosts # Configure SSL verification mode. If `none` is configured, all server hosts
@ -161,7 +161,7 @@ output.logstash:
# `full`. # `full`.
#ssl.verification_mode: full #ssl.verification_mode: full
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to # List of supported/valid TLS versions. By default all TLS versions from 1.0 up to
# 1.2 are enabled. # 1.2 are enabled.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2] #ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
@ -172,7 +172,7 @@ output.logstash:
# Certificate for SSL client authentication # Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem" #ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key # Client certificate key
#ssl.key: "/etc/pki/client/cert.key" #ssl.key: "/etc/pki/client/cert.key"
# Optional passphrase for decrypting the Certificate Key. # Optional passphrase for decrypting the Certificate Key.
@ -181,12 +181,27 @@ output.logstash:
# Configure cipher suites to be used for SSL connections # Configure cipher suites to be used for SSL connections
#ssl.cipher_suites: [] #ssl.cipher_suites: []
# Configure curve types for ECDHE based cipher suites # Configure curve types for ECDHE-based cipher suites
#ssl.curve_types: [] #ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are # Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never. # never, once, and freely. Default is never.
#ssl.renegotiation: never #ssl.renegotiation: never
# The number of times to retry publishing an event after a publishing failure.
# After the specified number of retries, the events are typically dropped.
# Some Beats, such as Filebeat and Winlogbeat, ignore the max_retries setting
# and retry until all events are published. Set max_retries to a value less
# than 0 to retry until all events are published. The default is 3.
#max_retries: 3
# The maximum number of events to bulk in a single Logstash request. The
# default is 2048.
bulk_max_size: {{ (processors | int) * 128 }}
# The number of seconds to wait for responses from the Logstash server before
# timing out. The default is 30s.
#timeout: 30s
{%- endmacro %} {%- endmacro %}
{% macro setup_dashboards(beat_name) -%} {% macro setup_dashboards(beat_name) -%}
@ -254,10 +269,19 @@ setup.template.pattern: "{{ beat_name }}-%{[beat.version]}-*"
# Path to fields.yml file to generate the template # Path to fields.yml file to generate the template
setup.template.fields: "${path.config}/fields.yml" setup.template.fields: "${path.config}/fields.yml"
# Enable JSON template loading. If this is enabled, the fields.yml is ignored.
#setup.template.json.enabled: false
# Path to the JSON template file
#setup.template.json.path: "${path.config}/template.json"
# Name under which the template is stored in Elasticsearch
#setup.template.json.name: ""
# Overwrite existing template # Overwrite existing template
setup.template.overwrite: {{ host == data_nodes[0] }} setup.template.overwrite: {{ host == data_nodes[0] }}
{% set shards = ((data_nodes | length) * 3) | int %} {% set shards = 1 %}
# Elasticsearch template settings # Elasticsearch template settings
setup.template.settings: setup.template.settings:
@ -443,6 +467,17 @@ xpack.monitoring.elasticsearch:
# The default is 50. # The default is 50.
bulk_max_size: {{ (processors | int) * 64 }} bulk_max_size: {{ (processors | int) * 64 }}
# The number of seconds to wait before trying to reconnect to Elasticsearch
# after a network error. After waiting backoff.init seconds, the Beat
# tries to reconnect. If the attempt fails, the backoff timer is increased
# exponentially up to backoff.max. After a successful connection, the backoff
# timer is reset. The default is 1s.
#backoff.init: 1s
# The maximum number of seconds to wait before attempting to connect to
# Elasticsearch after a network error. The default is 60s.
#backoff.max: 60s
# Configure http request timeout before failing a request to Elasticsearch. # Configure http request timeout before failing a request to Elasticsearch.
timeout: 120 timeout: 120
@ -481,4 +516,7 @@ xpack.monitoring.elasticsearch:
# Configure what types of renegotiation are supported. Valid options are # Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never. # never, once, and freely. Default is never.
#ssl.renegotiation: never #ssl.renegotiation: never
#metrics.period: 10s
#state.period: 1m
{%- endmacro %} {%- endmacro %}


@ -26,16 +26,9 @@
################################################################ ################################################################
## GC Configuration ## GC Configuration
{% if ((heap_size | int) > 6144) and (elastic_g1gc_enabled | bool) %}
-XX:+UseG1GC -XX:+UseG1GC
-XX:MaxGCPauseMillis=400 -XX:MaxGCPauseMillis=400
-XX:InitiatingHeapOccupancyPercent=75 -XX:InitiatingHeapOccupancyPercent=75
{% else %}
-XX:+UseParNewGC
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
{% endif %}
## optimizations ## optimizations


@ -2,7 +2,7 @@
# For more information on multiple pipelines, see the documentation: # For more information on multiple pipelines, see the documentation:
# https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html # https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html
- pipeline.id: "elk_metrics_6x" - pipeline.id: "elk_metrics_7x"
queue.type: "persisted" queue.type: "persisted"
config.string: | config.string: |
input { input {
@ -498,7 +498,7 @@
hosts => ["{{ '127.0.0.1:' ~ elastic_port }}"] hosts => ["{{ '127.0.0.1:' ~ elastic_port }}"]
sniffing => {{ (elastic_sniffing_enabled | default(not data_node)) | bool | string | lower }} sniffing => {{ (elastic_sniffing_enabled | default(not data_node)) | bool | string | lower }}
manage_template => {{ (data_node | bool) | lower }} manage_template => {{ (data_node | bool) | lower }}
index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}" index => "%{[@metadata][beat]}-%{[@metadata][version]}"
} }
} else if [@metadata][beat] { } else if [@metadata][beat] {
elasticsearch { elasticsearch {
@ -544,7 +544,7 @@
hosts => ["{{ '127.0.0.1:' ~ elastic_port }}"] hosts => ["{{ '127.0.0.1:' ~ elastic_port }}"]
sniffing => {{ (elastic_sniffing_enabled | default(not data_node)) | bool | string | lower }} sniffing => {{ (elastic_sniffing_enabled | default(not data_node)) | bool | string | lower }}
manage_template => {{ (data_node | bool) | lower }} manage_template => {{ (data_node | bool) | lower }}
index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}" index => "%{[@metadata][beat]}-%{[@metadata][version]}"
} }
} else if [@metadata][beat] { } else if [@metadata][beat] {
elasticsearch { elasticsearch {


@ -1,33 +1,33 @@
--- ---
- name: apt_package_pinning - name: apt_package_pinning
scm: git scm: git
src: https://git.openstack.org/openstack/openstack-ansible-apt_package_pinning src: https://opendev.org/openstack/openstack-ansible-apt_package_pinning
version: master version: master
- name: config_template - name: config_template
scm: git scm: git
src: https://git.openstack.org/openstack/ansible-config_template src: https://opendev.org/openstack/ansible-config_template
version: master version: master
- name: nspawn_container_create - name: nspawn_container_create
scm: git scm: git
src: https://git.openstack.org/openstack/openstack-ansible-nspawn_container_create src: https://opendev.org/openstack/openstack-ansible-nspawn_container_create
version: master version: master
- name: nspawn_hosts - name: nspawn_hosts
scm: git scm: git
src: https://git.openstack.org/openstack/openstack-ansible-nspawn_hosts src: https://opendev.org/openstack/openstack-ansible-nspawn_hosts
version: master version: master
- name: plugins - name: plugins
scm: git scm: git
src: https://git.openstack.org/openstack/openstack-ansible-plugins src: https://opendev.org/openstack/openstack-ansible-plugins
version: master version: master
- name: systemd_mount - name: systemd_mount
scm: git scm: git
src: https://git.openstack.org/openstack/ansible-role-systemd_mount src: https://opendev.org/openstack/ansible-role-systemd_mount
version: master version: master
- name: systemd_networkd - name: systemd_networkd
scm: git scm: git
src: https://git.openstack.org/openstack/ansible-role-systemd_networkd src: https://opendev.org/openstack/ansible-role-systemd_networkd
version: master version: master
- name: systemd_service - name: systemd_service
scm: git scm: git
src: https://git.openstack.org/openstack/ansible-role-systemd_service src: https://opendev.org/openstack/ansible-role-systemd_service
version: master version: master


@ -25,7 +25,7 @@
ZUUL_PROJECT: "{{ zuul.project.short_name }}" ZUUL_PROJECT: "{{ zuul.project.short_name }}"
ANSIBLE_PACKAGE: "{{ ansible_package | default('') }}" ANSIBLE_PACKAGE: "{{ ansible_package | default('') }}"
ANSIBLE_HOST_KEY_CHECKING: "False" ANSIBLE_HOST_KEY_CHECKING: "False"
ANSIBLE_LOG_PATH: "/tmp/elk-metrics-6x-logs/ansible-elk-test.log" ANSIBLE_LOG_PATH: "/tmp/elk-metrics-7x-logs/ansible-elk-test.log"
ANSIBLE_ACTION_PLUGINS: "${HOME}/ansible_venv/repositories/roles/config_template/action" ANSIBLE_ACTION_PLUGINS: "${HOME}/ansible_venv/repositories/roles/config_template/action"
ANSIBLE_CONNECTION_PLUGINS: "${HOME}/ansible_venv/repositories/roles/plugins/connection" ANSIBLE_CONNECTION_PLUGINS: "${HOME}/ansible_venv/repositories/roles/plugins/connection"
ANSIBLE_ROLES_PATH: "${HOME}/ansible_venv/repositories/roles" ANSIBLE_ROLES_PATH: "${HOME}/ansible_venv/repositories/roles"
@ -63,15 +63,15 @@
reload: "yes" reload: "yes"
sysctl_file: /etc/sysctl.d/99-elasticsearch.conf sysctl_file: /etc/sysctl.d/99-elasticsearch.conf
- name: Create tmp elk_metrics_6x dir - name: Create tmp elk_metrics_7x dir
file: file:
path: "/tmp/elk-metrics-6x-logs" path: "/tmp/elk-metrics-7x-logs"
state: directory state: directory
- name: Flush iptables rules - name: Flush iptables rules
       command: "{{ item }}"
       args:
-        creates: "/tmp/elk-metrics-6x-logs/iptables.flushed"
+        creates: "/tmp/elk-metrics-7x-logs/iptables.flushed"
       with_items:
         - "iptables -F"
         - "iptables -X"
@@ -82,7 +82,7 @@
         - "iptables -P INPUT ACCEPT"
         - "iptables -P FORWARD ACCEPT"
         - "iptables -P OUTPUT ACCEPT"
-        - "touch /tmp/elk-metrics-6x-logs/iptables.flushed"
+        - "touch /tmp/elk-metrics-7x-logs/iptables.flushed"
     - name: First ensure apt cache is always refreshed
       apt:
@@ -96,30 +96,30 @@
       become_user: root
       command: "./bootstrap-embedded-ansible.sh"
       args:
-        chdir: "src/{{ current_test_repo }}/elk_metrics_6x"
+        chdir: "src/{{ current_test_repo }}/elk_metrics_7x"
     - name: Run ansible-galaxy (tests)
       become: yes
       become_user: root
       command: "${HOME}/ansible_venv/bin/ansible-galaxy install --force --ignore-errors --roles-path=${HOME}/ansible_venv/repositories/roles -r ansible-role-requirements.yml"
       args:
-        chdir: "src/{{ current_test_repo }}/elk_metrics_6x/tests"
+        chdir: "src/{{ current_test_repo }}/elk_metrics_7x/tests"
-    - name: Run ansible-galaxy (elk_metrics_6x)
+    - name: Run ansible-galaxy (elk_metrics_7x)
       become: yes
       become_user: root
       command: "${HOME}/ansible_venv/bin/ansible-galaxy install --force --ignore-errors --roles-path=${HOME}/ansible_venv/repositories/roles -r ansible-role-requirements.yml"
       args:
-        chdir: "src/{{ current_test_repo }}/elk_metrics_6x"
+        chdir: "src/{{ current_test_repo }}/elk_metrics_7x"
     - name: Run environment setup
       become: yes
       become_user: root
       command: "${HOME}/ansible_venv/bin/ansible-playbook -i {{ inventory_file }} -e @test-vars.yml _key-setup.yml"
       environment:
-        ANSIBLE_LOG_PATH: "/tmp/elk-metrics-6x-logs/ansible-elk-test-container-setup.log"
+        ANSIBLE_LOG_PATH: "/tmp/elk-metrics-7x-logs/ansible-elk-test-container-setup.log"
       args:
-        chdir: "src/{{ current_test_repo }}/elk_metrics_6x/tests"
+        chdir: "src/{{ current_test_repo }}/elk_metrics_7x/tests"
       when:
         - ansible_service_mgr != 'systemd' or
           not (container_inventory | bool)
@@ -129,9 +129,9 @@
       become_user: root
       command: "${HOME}/ansible_venv/bin/ansible-playbook -i {{ inventory_file }} -e @test-vars.yml _container-setup.yml"
       environment:
-        ANSIBLE_LOG_PATH: "/tmp/elk-metrics-6x-logs/ansible-elk-test-container-setup.log"
+        ANSIBLE_LOG_PATH: "/tmp/elk-metrics-7x-logs/ansible-elk-test-container-setup.log"
       args:
-        chdir: "src/{{ current_test_repo }}/elk_metrics_6x/tests"
+        chdir: "src/{{ current_test_repo }}/elk_metrics_7x/tests"
       when:
         - ansible_service_mgr == 'systemd'
        - container_inventory | bool
@@ -147,15 +147,15 @@
       become_user: root
       command: "${HOME}/ansible_venv/bin/ansible-playbook -i tests/{{ inventory_file }} -e @tests/test-vars.yml site.yml"
       environment:
-        ANSIBLE_LOG_PATH: "/tmp/elk-metrics-6x-logs/ansible-elk-test-deployment.log"
+        ANSIBLE_LOG_PATH: "/tmp/elk-metrics-7x-logs/ansible-elk-test-deployment.log"
       args:
-        chdir: "src/{{ current_test_repo }}/elk_metrics_6x"
+        chdir: "src/{{ current_test_repo }}/elk_metrics_7x"
     - name: Show cluster state
       become: yes
       become_user: root
       command: "${HOME}/ansible_venv/bin/ansible-playbook -i tests/{{ inventory_file }} -e @tests/test-vars.yml showElasticCluster.yml"
       environment:
-        ANSIBLE_LOG_PATH: "/tmp/elk-metrics-6x-logs/ansible-elk-test-show-cluster.log"
+        ANSIBLE_LOG_PATH: "/tmp/elk-metrics-7x-logs/ansible-elk-test-show-cluster.log"
       args:
-        chdir: "src/{{ current_test_repo }}/elk_metrics_6x"
+        chdir: "src/{{ current_test_repo }}/elk_metrics_7x"


@@ -2,11 +2,11 @@ export ANSIBLE_HOST_KEY_CHECKING="False"
 export ANSIBLE_ROLES_PATH="${HOME}/ansible_venv/repositories/roles"
 export ANSIBLE_ACTION_PLUGINS="${HOME}/ansible_venv/repositories/roles/config_template/action"
 export ANSIBLE_CONNECTION_PLUGINS="${HOME}/ansible_venv/repositories/roles/plugins/connection"
-export ANSIBLE_LOG_PATH="/tmp/elk-metrics-6x-logs/ansible-elk-test.log"
-if [[ ! -d "/tmp/elk-metrics-6x-logs" ]]; then
-  mkdir -pv "/tmp/elk-metrics-6x-logs"
-  chmod 0777 "/tmp/elk-metrics-6x-logs"
+export ANSIBLE_LOG_PATH="/tmp/elk-metrics-7x-logs/ansible-elk-test.log"
+if [[ ! -d "/tmp/elk-metrics-7x-logs" ]]; then
+  mkdir -pv "/tmp/elk-metrics-7x-logs"
+  chmod 0777 "/tmp/elk-metrics-7x-logs"
 fi
 echo "To build a test environment run the following:"


@@ -20,7 +20,7 @@
   tasks:
     - name: Copy logs back to the executor
       synchronize:
-        src: "/tmp/elk-metrics-6x-logs"
+        src: "/tmp/elk-metrics-7x-logs"
         dest: "{{ zuul.executor.log_root }}/"
         mode: pull
         rsync_opts:


@@ -18,7 +18,7 @@ set -e
 export TEST_DIR="$(readlink -f $(dirname ${0})/../../)"
 # Stop beat processes
-pushd "${TEST_DIR}/elk_metrics_6x"
+pushd "${TEST_DIR}/elk_metrics_7x"
 for i in $(ls -1 install*beat.yml); do
   LOWER_BEAT="$(echo "${i}" | tr '[:upper:]' '[:lower:]')"
   BEAT_PARTIAL="$(echo ${LOWER_BEAT} | awk -F'.' '{print $1}')"


@@ -32,7 +32,7 @@
     - name: Set current test repo (cross-repo)
       set_fact:
-        current_test_repo: "git.openstack.org/{{ osa_test_repo }}"
+        current_test_repo: "opendev.org/{{ osa_test_repo }}"
       when:
         - osa_test_repo is defined
@@ -49,5 +49,5 @@
   post_tasks:
     - name: Ensure the log directory exists
       file:
-        path: "/tmp/elk-metrics-6x-logs"
+        path: "/tmp/elk-metrics-7x-logs"
         state: directory


@@ -26,21 +26,21 @@ pushd "${HOME}"
 popd
 popd
-source "${TEST_DIR}/elk_metrics_6x/tests/manual-test.rc"
-source "${TEST_DIR}/elk_metrics_6x/bootstrap-embedded-ansible.sh"
+source "${TEST_DIR}/elk_metrics_7x/tests/manual-test.rc"
+source "${TEST_DIR}/elk_metrics_7x/bootstrap-embedded-ansible.sh"
 deactivate
 ${HOME}/ansible_venv/bin/ansible-galaxy install --force \
   --roles-path="${HOME}/ansible_venv/repositories/roles" \
-  --role-file="${TEST_DIR}/elk_metrics_6x/tests/ansible-role-requirements.yml"
+  --role-file="${TEST_DIR}/elk_metrics_7x/tests/ansible-role-requirements.yml"
-if [[ ! -e "${TEST_DIR}/elk_metrics_6x/tests/src" ]]; then
-  ln -s ${TEST_DIR}/../ ${TEST_DIR}/elk_metrics_6x/tests/src
+if [[ ! -e "${TEST_DIR}/elk_metrics_7x/tests/src" ]]; then
+  ln -s ${TEST_DIR}/../ ${TEST_DIR}/elk_metrics_7x/tests/src
 fi
 ${HOME}/ansible_venv/bin/ansible-playbook -i 'localhost,' \
   -vv \
   -e ansible_connection=local \
   -e test_clustered_elk=${CLUSTERED:-no} \
-  ${TEST_DIR}/elk_metrics_6x/tests/test.yml
+  ${TEST_DIR}/elk_metrics_7x/tests/test.yml


@@ -17,7 +17,7 @@
   become: true
   environment:
-    ANSIBLE_LOG_PATH: "/tmp/elk-metrics-6x-logs/ansible-elk-test.log"
+    ANSIBLE_LOG_PATH: "/tmp/elk-metrics-7x-logs/ansible-elk-test.log"
   tasks:
     - name: Check for open TCP
@@ -36,7 +36,7 @@
   become: true
   environment:
-    ANSIBLE_LOG_PATH: "/tmp/elk-metrics-6x-logs/ansible-elk-test.log"
+    ANSIBLE_LOG_PATH: "/tmp/elk-metrics-7x-logs/ansible-elk-test.log"
   tasks:
     - name: Check http
@@ -69,7 +69,7 @@
   become: true
   environment:
-    ANSIBLE_LOG_PATH: "/tmp/elk-metrics-6x-logs/ansible-elk-test.log"
+    ANSIBLE_LOG_PATH: "/tmp/elk-metrics-7x-logs/ansible-elk-test.log"
   tasks:
     - name: Check http
@@ -96,7 +96,7 @@
   become: true
   environment:
-    ANSIBLE_LOG_PATH: "/tmp/elk-metrics-6x-logs/ansible-elk-test.log"
+    ANSIBLE_LOG_PATH: "/tmp/elk-metrics-7x-logs/ansible-elk-test.log"
   tasks:
     - name: Check http


@@ -18,7 +18,7 @@
   become: true
   environment:
-    ANSIBLE_LOG_PATH: "/tmp/elk-metrics-6x-logs/ansible-elk-test.log"
+    ANSIBLE_LOG_PATH: "/tmp/elk-metrics-7x-logs/ansible-elk-test.log"
   vars:
     storage_node_count: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]


@@ -376,3 +376,38 @@ grafana_datasources:
       maxConcurrentShardRequests: 256
       timeField: "@timestamp"
       timeInterval: ">60s"
+elastic_beats:
+  logstash:
+    make_index: true
+    hosts: "{{ groups['elastic-logstash'] | default([]) }}"
+  apm:
+    make_index: true
+    timeFieldName: '@timestamp'
+    hosts: "{{ groups['apm-server'] | default([]) }}"
+  auditbeat:
+    timeFieldName: '@timestamp'
+    hosts: "{{ groups['hosts'] | default([]) }}"
+  filebeat:
+    timeFieldName: '@timestamp'
+    hosts: "{{ groups['hosts'] | default([]) }}"
+  syslog:
+    make_index: true
+    hosts: "{{ groups['hosts'] | default([]) }}"
+  heartbeat:
+    timeFieldName: '@timestamp'
+    hosts: "{{ groups['kibana'][:3] | default([]) }}"
+  journalbeat:
+    timeFieldName: '@timestamp'
+    hosts: "{{ groups['hosts'] | default([]) }}"
+  metricbeat:
+    timeFieldName: '@timestamp'
+    hosts: "{{ groups['all'] | default([]) }}"
+  packetbeat:
+    timeFieldName: '@timestamp'
+    hosts: "{{ groups['hosts'] | default([]) }}"
+  monitorstack:
+    timeFieldName: '@timestamp'
+    hosts: "{{ (groups['nova_compute'] | default([])) | union((groups['utility_all'] | default([]))) | union((groups['memcached_all'] | default([]))) }}"
+  skydive:
+    hosts: "{{ (((groups['skydive_analyzers'] | default([])) | length) > 0) | ternary((groups['hosts'] | default([])), []) }}"


@@ -26,6 +26,20 @@
       osa_test_repo: "openstack/openstack-ansible-ops"
       test_clustered_elk: false

+- job:
+    name: "openstack-ansible-ops:elk_metrics_7x-ubuntu-bionic"
+    parent: base
+    nodeset: ubuntu-bionic
+    description: "Runs a gate test on the elk_metrics_7x project."
+    run: "elk_metrics_7x/tests/test.yml"
+    post-run: "elk_metrics_7x/tests/post-run.yml"
+    files:
+      - ^elk_metrics_7x/.*
+      - ^bootstrap-embedded-ansible/.*
+    vars:
+      osa_test_repo: "openstack/openstack-ansible-ops"
+      test_clustered_elk: false
+
 - job:
     name: "openstack-ansible-ops:elk_metrics_6x-centos-7"
     parent: "openstack-ansible-ops:elk_metrics_6x-ubuntu-xenial"


@@ -26,6 +26,7 @@
         - openstack-ansible-ops:elk_metrics_6x-ubuntu-trusty
         - openstack-ansible-ops:elk_metrics_6x-ubuntu-xenial
         - openstack-ansible-ops:elk_metrics_6x-ubuntu-bionic
+        - openstack-ansible-ops:elk_metrics_7x-ubuntu-bionic
         # - openstack-ansible-ops:elk_metrics_6x-ubuntu-xenial-clustered
         # - openstack-ansible-ops:elk_metrics_6x-ubuntu-bionic-clustered
         - openstack-ansible-ops:osquery-ubuntu-xenial
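Since this change renames every `elk_metrics_6x` path and `elk-metrics-6x` log prefix to the `7x` equivalent across many files, a quick sanity check for stale `6x` references is useful after applying it. The snippet below is a sketch only (it builds a throwaway directory as a stand-in for the real checkout, so the paths inside it are illustrative, not part of the patch):

```shell
# Sketch: scan a tree for stale "6x" strings after the 6x -> 7x rename.
# The directory layout here is a stand-in, not the real repository.
tmp=$(mktemp -d)
mkdir -p "$tmp/elk_metrics_7x/tests"
printf '%s\n' 'ANSIBLE_LOG_PATH: "/tmp/elk-metrics-7x-logs/ansible-elk-test.log"' \
  > "$tmp/elk_metrics_7x/tests/sample.yml"
# grep -r exits non-zero when nothing matches; that is the "clean" case
if grep -rq 'elk[-_]metrics[-_]6x' "$tmp/elk_metrics_7x"; then
  result="stale 6x references found"
else
  result="clean"
fi
echo "$result"
rm -rf "$tmp"
```

Run against a real checkout, pointing the `grep` at the renamed tree would flag any path or log prefix the rename missed.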