EFK tutorial - EFK Quick Start Guide

Posted by McChicken on Sun, 27 Oct 2019 13:30:10 +0100

This guide walks through a quick EFK deployment of elasticsearch (three nodes) + filebeat + kibana, building a working demo environment to verify the result.

Author: "the wolf of hair". Reprints and contributions are welcome.

Catalog

▪ Purpose
▪ Experiment architecture
▪ EFK software installation
▪ elasticsearch configuration
▪ filebeat configuration
▪ kibana configuration
▪ Start services
▪ kibana interface configuration
▪ Test
▪ Follow-up articles

Purpose

▷ Collect nginx access logs in real time with filebeat
▷ Transfer the collected logs from filebeat to the elasticsearch cluster
▷ Display the logs through kibana

Experiment architecture

▷ Server configuration

192.168.1.11: nginx + filebeat (log source)
192.168.1.21: kibana
192.168.1.31: elasticsearch
192.168.1.32: elasticsearch
192.168.1.33: elasticsearch

▷ Architecture: filebeat on 192.168.1.11 ships nginx access logs to the elasticsearch cluster (192.168.1.31-33); kibana on 192.168.1.21 queries the cluster for display.

EFK software installation

Version specification

▷ elasticsearch 7.3.2
▷ filebeat 7.3.2
▷ kibana 7.3.2

Notes

▷ All three component versions must match
▷ The elasticsearch cluster needs at least 3 nodes, and the total node count must be odd, so master elections always have a clear majority and split-brain is avoided

Installation paths

▷ /opt/elasticsearch
▷ /opt/filebeat
▷ /opt/kibana

elasticsearch installation (perform the same steps on all 3 es nodes)

mkdir -p /opt/software && cd /opt/software
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.3.2-linux-x86_64.tar.gz
tar -zxvf elasticsearch-7.3.2-linux-x86_64.tar.gz
mv elasticsearch-7.3.2 /opt/elasticsearch
useradd elasticsearch -d /opt/elasticsearch -s /sbin/nologin
mkdir -p /opt/logs/elasticsearch
chown elasticsearch:elasticsearch /opt/elasticsearch -R
chown elasticsearch:elasticsearch /opt/logs/elasticsearch -R

# Raise the limit on the number of VMAs (virtual memory areas) a process may own to at least 262144, otherwise elasticsearch fails to start with: max virtual memory areas vm.max_map_count is too low, increase to at least [262144]
echo "vm.max_map_count = 655350" >> /etc/sysctl.conf
sysctl -p
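To confirm the kernel setting took effect, read the value back (run on each es node):

# Verify the new limit is active
sysctl vm.max_map_count
# Expected output: vm.max_map_count = 655350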

filebeat installation

mkdir -p /opt/software && cd /opt/software
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.2-linux-x86_64.tar.gz
mkdir -p /opt/logs/filebeat/
tar -zxvf filebeat-7.3.2-linux-x86_64.tar.gz
mv filebeat-7.3.2-linux-x86_64 /opt/filebeat

kibana installation

mkdir -p /opt/software && cd /opt/software
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.3.2-linux-x86_64.tar.gz
tar -zxvf kibana-7.3.2-linux-x86_64.tar.gz
mv kibana-7.3.2-linux-x86_64 /opt/kibana
useradd kibana -d /opt/kibana -s /sbin/nologin
chown kibana:kibana /opt/kibana -R
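
With elasticsearch, filebeat, and kibana all unpacked, a quick check confirms the three versions really match, per the note above (each binary reports its own version):

# All three should report 7.3.2
sudo -u elasticsearch /opt/elasticsearch/bin/elasticsearch --version
/opt/filebeat/filebeat version
sudo -u kibana /opt/kibana/bin/kibana --version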

nginx installation (used to generate logs for filebeat to collect)

# Only installed on 192.168.1.11
yum install -y nginx
/usr/sbin/nginx -c /etc/nginx/nginx.conf
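
A quick local request verifies nginx is serving before filebeat is pointed at its log (a minimal sanity check):

# Confirm nginx responds locally; expect "HTTP/1.1 200 OK"
curl -I http://127.0.0.1/
# Each request also appends a line to /var/log/nginx/access.log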

elasticsearch configuration

▷ 192.168.1.31 /opt/elasticsearch/config/elasticsearch.yml

# Cluster name
cluster.name: my-application

# Node name
node.name: 192.168.1.31

# Log location
path.logs: /opt/logs/elasticsearch

# IP address this node listens on
network.host: 192.168.1.31

# HTTP port of this node
http.port: 9200

# Node transport port
transport.port: 9300

# Hosts to contact when discovering the cluster
discovery.seed_hosts: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]

# Master-eligible nodes whose votes count in the very first election when a brand-new cluster starts
cluster.initial_master_nodes: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]

# Enable cross-origin resource sharing (CORS)
http.cors.enabled: true
http.cors.allow-origin: "*"

# Begin recovery once at least 2 (data or master-eligible) nodes have joined the cluster
gateway.recover_after_nodes: 2

▷ 192.168.1.32 /opt/elasticsearch/config/elasticsearch.yml

# Cluster name
cluster.name: my-application

# Node name
node.name: 192.168.1.32

# Log location
path.logs: /opt/logs/elasticsearch

# IP address this node listens on
network.host: 192.168.1.32

# HTTP port of this node
http.port: 9200

# Node transport port
transport.port: 9300

# Hosts to contact when discovering the cluster
discovery.seed_hosts: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]

# Master-eligible nodes whose votes count in the very first election when a brand-new cluster starts
cluster.initial_master_nodes: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]

# Enable cross-origin resource sharing (CORS)
http.cors.enabled: true
http.cors.allow-origin: "*"

# Begin recovery once at least 2 (data or master-eligible) nodes have joined the cluster
gateway.recover_after_nodes: 2

▷ 192.168.1.33 /opt/elasticsearch/config/elasticsearch.yml

# Cluster name
cluster.name: my-application

# Node name
node.name: 192.168.1.33

# Log location
path.logs: /opt/logs/elasticsearch

# IP address this node listens on
network.host: 192.168.1.33

# HTTP port of this node
http.port: 9200

# Node transport port
transport.port: 9300

# Hosts to contact when discovering the cluster
discovery.seed_hosts: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]

# Master-eligible nodes whose votes count in the very first election when a brand-new cluster starts
cluster.initial_master_nodes: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]

# Enable cross-origin resource sharing (CORS)
http.cors.enabled: true
http.cors.allow-origin: "*"

# Begin recovery once at least 2 (data or master-eligible) nodes have joined the cluster
gateway.recover_after_nodes: 2
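
The three files above differ only in the node IP. To avoid copy-paste drift, one option is to generate them from a shared template (a sketch; the template file name and placeholder are illustrative, not part of the original setup):

# Stamp out per-node configs from one template file.
# elasticsearch.yml.tpl holds the shared settings above, with __NODE_IP__
# wherever node.name / network.host should carry the node's address
for ip in 192.168.1.31 192.168.1.32 192.168.1.33; do
  sed "s/__NODE_IP__/${ip}/g" elasticsearch.yml.tpl > "elasticsearch-${ip}.yml"
done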

filebeat configuration

▷ 192.168.1.11 /opt/filebeat/filebeat.yml

# File input
filebeat.inputs:
  # File input type
  - type: log
    # Enable this input
    enabled: true
    # Log file location
    paths:
      - /var/log/nginx/access.log
    # Custom fields attached to each event
    fields:
      type: nginx_access  # Must match the fields.type condition in the output section below

# Output to elasticsearch
output.elasticsearch:
  # elasticsearch cluster
  hosts: ["http://192.168.1.31:9200",
          "http://192.168.1.32:9200",
          "http://192.168.1.33:9200"]

  # Index configuration
  indices:
    # Index name
    - index: "nginx_access_%{+yyyy.MM}"
      # Use this index when fields.type is nginx_access
      when.equals:
        fields.type: "nginx_access"

# Disable the built-in index template
setup.template.enabled: false

# Enable logging to files
logging.to_files: true
# Log level
logging.level: info
# Log file settings
logging.files:
  # Log directory
  path: /opt/logs/filebeat/
  # Log file name
  name: filebeat
  # Number of rotated files to keep (must be 2-1024)
  keepfiles: 7
  # Permissions for rotated log files
  permissions: 0600
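
Before starting filebeat, its built-in self-tests can validate the file and the connection to elasticsearch (both subcommands ship with filebeat 7.x):

# Check the configuration file for syntax errors
/opt/filebeat/filebeat test config -c /opt/filebeat/filebeat.yml
# Verify filebeat can reach the configured elasticsearch hosts
/opt/filebeat/filebeat test output -c /opt/filebeat/filebeat.yml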

kibana configuration

▷ 192.168.1.21 /opt/kibana/config/kibana.yml

# Access port of this node
server.port: 5601

# This node IP
server.host: "192.168.1.21"

# Name of this node
server.name: "192.168.1.21"

# elasticsearch cluster IP
elasticsearch.hosts: ["http://192.168.1.31:9200",
                      "http://192.168.1.32:9200",
                      "http://192.168.1.33:9200"]

Start services

# Start elasticsearch (on all 3 es nodes)
sudo -u elasticsearch /opt/elasticsearch/bin/elasticsearch

# Start filebeat
/opt/filebeat/filebeat -e -c /opt/filebeat/filebeat.yml -d "publish"

# Start kibana
sudo -u kibana /opt/kibana/bin/kibana -c /opt/kibana/config/kibana.yml

The commands above all run in the foreground. A systemd setup will be covered in a later article in the EFK tutorial series, so stay tuned!
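
Once the three services are up, two quick checks confirm the pipeline endpoints are healthy (run from any host that can reach them):

# elasticsearch cluster health: expect "status": "green" and "number_of_nodes": 3
curl "http://192.168.1.31:9200/_cluster/health?pretty"

# kibana status endpoint: expect HTTP 200 once kibana has connected to the cluster
curl -I "http://192.168.1.21:5601/api/status"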

kibana interface configuration

1. Visit 192.168.1.21:5601 in a browser. Seeing the interface below means startup succeeded.

2. Click "Try our sample data"

3. "Help us improve the Elastic Stack by providing usage statistics for basic features. We will not share this data outside of Elastic" click "no"

4. "Add Data to kibana" and "add data"

5. Enter the main view

Test

Visit nginx to generate logs

curl -I "http://192.168.1.11"
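
One request produces a single log line; a short loop makes the data easier to spot in kibana (purely illustrative):

# Generate a handful of access-log entries
for i in $(seq 1 10); do
  curl -s -o /dev/null "http://192.168.1.11/?test=${i}"
done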

View data on kibana

1. Create an index pattern

2. Enter the index pattern name (nginx_access_* matches the indices created by the filebeat config above)

3. View the data generated by the earlier curl request
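
If kibana shows nothing, confirm the index exists in elasticsearch first (querying the cluster directly):

# List the nginx access indices created by filebeat
curl "http://192.168.1.31:9200/_cat/indices/nginx_access_*?v"
# A non-zero docs.count (e.g. for nginx_access_2019.10) means ingestion is working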

Follow-up articles

This is the first article in the EFK tutorial series. Follow-up articles will be released gradually, covering role separation, performance optimization, and plenty of other practical material. Stay tuned!

Topics: ElasticSearch Nginx Linux network