openstack-ansible-ops/elk_metrics_6x

Install ELK with beats to gather metrics

About this repository

This set of playbooks will deploy an ELK cluster (Elasticsearch, Logstash, Kibana) along with beats to gather metrics from hosts and ship them to the ELK cluster.

These playbooks require Ansible 2.4+.
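
A quick way to confirm the Ansible version in use before running any of the playbooks:

ansible --version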

OpenStack-Ansible Integration

These playbooks can be run against a standalone inventory or as an integrated part of an OpenStack-Ansible deployment. For a simple example of a standalone inventory, see inventory.example.yml.
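
As a minimal sketch of a standalone inventory (the host names and addresses below are placeholders; inventory.example.yml in this repository is the authoritative example), a YAML inventory defining the groups these playbooks reference might look like:

all:
  children:
    elastic-logstash:
      hosts:
        logging01:
          ansible_host: 172.22.8.28
    kibana:
      hosts:
        logging01:
          ansible_host: 172.22.8.28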

Optional | Load balancer VIP address

In order to use a multi-node Elasticsearch cluster, a load balancer is required. HAProxy can provide the load balancer functionality needed. The internal_lb_vip_address option is used as the endpoint (virtual IP address) that services like Kibana will use when connecting to Elasticsearch. If this option is omitted, the first node in the Elasticsearch cluster will be used.
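
For illustration, the VIP can be supplied as an extra variable when running any of the playbooks; the address below is a placeholder:

openstack-ansible installKibana.yml -e 'internal_lb_vip_address=172.22.0.100'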

Optional | Configure HAProxy endpoints

Edit the /etc/openstack_deploy/user_variables.yml file and add the following lines:

haproxy_extra_services:
  - service:
      haproxy_service_name: kibana
      haproxy_ssl: False
      haproxy_backend_nodes: "{{ groups['kibana'] | default([]) }}"
      haproxy_port: 81  # This is set using the "kibana_nginx_port" variable
      haproxy_balance_type: tcp
  - service:
      haproxy_service_name: elastic-logstash
      haproxy_ssl: False
      haproxy_backend_nodes: "{{ groups['elastic-logstash'] | default([]) }}"
      haproxy_port: 5044  # This is set using the "logstash_beat_input_port" variable
      haproxy_balance_type: tcp
  - service:
      haproxy_service_name: elastic-logstash
      haproxy_ssl: False
      haproxy_backend_nodes: "{{ groups['elastic-logstash'] | default([]) }}"
      haproxy_port: 9201  # This is set using the "elastic_hap_port" variable
      haproxy_check_port: 9200  # This is set using the "elastic_port" variable
      haproxy_backend_port: 9200  # This is set using the "elastic_port" variable
      haproxy_balance_type: tcp

Optional | Run the haproxy-install playbook

cd /opt/openstack-ansible/playbooks/
openstack-ansible haproxy-install.yml --tags=haproxy-service-config

Deployment Process

Clone the elk-osa repo

cd /opt
git clone https://github.com/openstack/openstack-ansible-ops

Copy the env.d file into place

cd /opt/openstack-ansible-ops/elk_metrics_6x
cp env.d/elk.yml /etc/openstack_deploy/env.d/

Copy the conf.d file into place

cp conf.d/elk.yml /etc/openstack_deploy/conf.d/

In elk.yml, list your logging hosts under elastic-logstash_hosts to create the Elasticsearch cluster across multiple containers, and list one logging host under kibana_hosts to create the Kibana container.

vi /etc/openstack_deploy/conf.d/elk.yml
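
As a sketch, assuming three Elasticsearch/Logstash hosts and one Kibana host with placeholder addresses, the file might contain:

elastic-logstash_hosts:
  logging01:
    ip: 172.22.8.28
  logging02:
    ip: 172.22.8.29
  logging03:
    ip: 172.22.8.30

kibana_hosts:
  logging01:
    ip: 172.22.8.28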

Create the containers

cd /opt/openstack-ansible/playbooks
openstack-ansible lxc-containers-create.yml -e 'container_group=elastic-logstash:kibana'

Install master/data Elasticsearch nodes on the elastic-logstash containers

cd /opt/openstack-ansible-ops/elk_metrics_6x
openstack-ansible installElastic.yml

Install Logstash on all the elastic containers

cd /opt/openstack-ansible-ops/elk_metrics_6x
openstack-ansible installLogstash.yml

Install Kibana, the nginx reverse proxy, and Metricbeat on the kibana container

cd /opt/openstack-ansible-ops/elk_metrics_6x
openstack-ansible installKibana.yml

Install Metricbeat everywhere to start shipping metrics to the Logstash instances

cd /opt/openstack-ansible-ops/elk_metrics_6x
openstack-ansible installMetricbeat.yml

Adding Grafana visualizations

See the grafana directory for more information on how to deploy Grafana. When deploying Grafana, source the variable file from ELK in order to automatically connect Grafana to the Elasticsearch datastore and import dashboards. Including the variable file is as simple as adding -e @../elk_metrics_6x/vars/variables.yml to the grafana playbook run.
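
Assuming the grafana playbooks are run from their own directory in the same repository checkout (the playbook name installGrafana.yml is an assumption; use whatever the grafana directory documents), the run might look like:

cd /opt/openstack-ansible-ops/grafana
openstack-ansible installGrafana.yml -e @../elk_metrics_6x/vars/variables.yml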

Included dashboards

Troubleshooting

If everything goes bad, you can clean up with the following command

openstack-ansible lxc-containers-destroy.yml --limit=kibana:elastic-logstash_all