Install ELK with beats to gather metrics
Tags: openstack, ansible
About this repository
This set of playbooks will deploy an ELK cluster (Elasticsearch, Logstash, Kibana) with beats to gather metrics from hosts and ship them to the ELK cluster.
These playbooks require Ansible 2.4+.
OpenStack-Ansible Integration
These playbooks can be used as standalone inventory or as an
integrated part of an OpenStack-Ansible deployment. For a simple example
of standalone inventory, see inventory.example.yml.
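A standalone inventory only needs to map hosts into the groups the playbooks expect. The sketch below is illustrative, with hypothetical hostnames and addresses; inventory.example.yml in the repository remains the authoritative reference.

```yaml
# Hypothetical standalone inventory sketch; hostnames and IPs are examples only.
all:
  children:
    elastic-logstash:
      hosts:
        logging01:
          ansible_host: 172.22.8.24
    kibana:
      hosts:
        logging01:
          ansible_host: 172.22.8.24
```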
Optional | Load balancer VIP address
In order to use a multi-node Elasticsearch cluster, a load balancer is required. HAProxy can provide the load balancer functionality needed. The option internal_lb_vip_address is used as the endpoint (virtual IP address) that services like Kibana will use when connecting to Elasticsearch. If this option is omitted, the first node in the Elasticsearch cluster will be used.
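As a sketch, the VIP can be set in user_variables.yml; the address below is a placeholder, not a value from this repository:

```yaml
# /etc/openstack_deploy/user_variables.yml
# Hypothetical VIP address; substitute the internal VIP of your deployment.
internal_lb_vip_address: 172.29.236.100
```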
Optional | Configure haproxy endpoints
Edit the /etc/openstack_deploy/user_variables.yml file and add the following lines:
haproxy_extra_services:
- service:
haproxy_service_name: kibana
haproxy_ssl: False
haproxy_backend_nodes: "{{ groups['kibana'] | default([]) }}"
haproxy_port: 81 # This is set using the "kibana_nginx_port" variable
haproxy_balance_type: tcp
- service:
haproxy_service_name: elastic-logstash
haproxy_ssl: False
haproxy_backend_nodes: "{{ groups['elastic-logstash'] | default([]) }}"
haproxy_port: 5044 # This is set using the "logstash_beat_input_port" variable
haproxy_balance_type: tcp
- service:
haproxy_service_name: elastic-logstash
haproxy_ssl: False
haproxy_backend_nodes: "{{ groups['elastic-logstash'] | default([]) }}"
haproxy_port: 9201 # This is set using the "elastic_hap_port" variable
haproxy_check_port: 9200 # This is set using the "elastic_port" variable
haproxy_backend_port: 9200 # This is set using the "elastic_port" variable
haproxy_balance_type: tcp
Optional | run the haproxy-install playbook
cd /opt/openstack-ansible/playbooks/
openstack-ansible haproxy-install.yml --tags=haproxy-service-config
Deployment Process
Clone the elk-osa repo
cd /opt
git clone https://github.com/openstack/openstack-ansible-ops
Copy the env.d file into place
cd /opt/openstack-ansible-ops/elk_metrics_6x
cp env.d/elk.yml /etc/openstack_deploy/env.d/
Copy the conf.d file into place
cp conf.d/elk.yml /etc/openstack_deploy/conf.d/
In elk.yml, list your logging hosts under elastic-logstash_hosts to create the Elasticsearch cluster in multiple containers, and one logging host under kibana_hosts to create the Kibana container:
vi /etc/openstack_deploy/conf.d/elk.yml
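As an illustration, a conf.d/elk.yml with three Elasticsearch containers and one Kibana container might look like the sketch below; the host names and IP addresses are hypothetical:

```yaml
# /etc/openstack_deploy/conf.d/elk.yml
# Hypothetical logging hosts; replace the names and IPs with your own.
elastic-logstash_hosts:
  logging01:
    ip: 172.22.8.24
  logging02:
    ip: 172.22.8.25
  logging03:
    ip: 172.22.8.26
kibana_hosts:
  logging01:
    ip: 172.22.8.24
```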
Create the containers
cd /opt/openstack-ansible/playbooks/
openstack-ansible lxc-containers-create.yml -e 'container_group=elastic-logstash:kibana'
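Once the play finishes, the new containers can be listed on the physical logging host. lxc-ls with the -f (fancy) flag shows state and addresses; the grep pattern below simply matches the container group names used above:

```shell
# Verify the new containers exist (run on the physical logging host).
lxc-ls -f | grep -E 'elastic-logstash|kibana'
```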
Install master/data Elasticsearch nodes on the elastic-logstash containers
cd /opt/openstack-ansible-ops/elk_metrics_6x
openstack-ansible installElastic.yml
Install Logstash on all the elastic containers
cd /opt/openstack-ansible-ops/elk_metrics_6x
openstack-ansible installLogstash.yml
Install Kibana, the nginx reverse proxy, and Metricbeat on the kibana container
cd /opt/openstack-ansible-ops/elk_metrics_6x
openstack-ansible installKibana.yml
Install Metricbeat everywhere to start shipping metrics to our logstash instances
cd /opt/openstack-ansible-ops/elk_metrics_6x
openstack-ansible installMetricbeat.yml
Adding Grafana visualizations
See the grafana directory for more information on how to deploy grafana. When deploying grafana, source the variable file from ELK in order to automatically connect grafana to the Elasticsearch datastore and import dashboards. Including the variable file is as simple as adding -e @../elk_metrics_6x/vars/variables.yml to the grafana playbook run.
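Assuming the grafana playbooks live alongside this repository, a run that sources the ELK variables might look like the following; the playbook name here is illustrative, so check the grafana directory for the actual entry point:

```shell
# Hypothetical invocation; the playbook name may differ in the grafana directory.
cd /opt/openstack-ansible-ops/grafana
openstack-ansible installGrafana.yml -e @../elk_metrics_6x/vars/variables.yml
```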
Included dashboards
Troubleshooting
If everything goes bad, you can clean up with the following command:
openstack-ansible lxc-containers-destroy.yml --limit=kibana:elastic-logstash_all