Kibana and Logstash installation and configuration

Posted by shivani.shm on Tue, 25 Jan 2022 13:55:19 +0100

Elasticsearch, Kibana, and Logstash versions

  • Elasticsearch: 7.2.0
  • Kibana: 7.2.0
  • Logstash: 7.2.0

Kibana and Logstash share one server

  • Server configuration: 2 cores, 4 GB RAM, 40 GB SSD system disk

Kibana on a standalone server

Move Kibana off the Elasticsearch node and install it via RPM.

1. Install Kibana (RPM mode)

Download and install the public signing key

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Add the yum repository configuration

vim /etc/yum.repos.d/kibana.repo
[kibana-7.x]
name=Kibana repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Install Kibana

yum install kibana

2. Kibana configuration and startup

kibana.yml

vim /etc/kibana/kibana.yml
# HTTP Access Port
server.port: 5601

# HTTP bind address; 0.0.0.0 makes Kibana reachable on both intranet and external IPs
server.host: "0.0.0.0"

# Elasticsearch node addresses (multiple hosts may be listed)
elasticsearch.hosts: ["http://172.18.112.10:9200"]

# Elasticsearch account and password
elasticsearch.username: "elastic"
elasticsearch.password: "elasticpassword"

# Web UI language; zh-CN switches Kibana to Simplified Chinese
i18n.locale: "zh-CN"

Enable on boot

systemctl daemon-reload
systemctl enable kibana.service

Start, stop, status, and restart

systemctl start kibana.service
systemctl stop kibana.service
systemctl status kibana.service
systemctl restart kibana.service

Access the Kibana web UI

http://<server-ip>:5601
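
If the page does not come up, a quick sanity check from the shell is Kibana's status API (the host and credentials below are the ones assumed in kibana.yml above; adjust to your own):

# Check that Kibana is up and can reach Elasticsearch
curl -u elastic:elasticpassword "http://127.0.0.1:5601/api/status"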

3. Kibana directory structure

Type     | Description                                                                                                  | Default location
home     | Kibana home directory, also referenced as $KIBANA_HOME                                                       | /usr/share/kibana
bin      | Binary scripts, including the Kibana server launcher and the kibana-plugin installer                         | /usr/share/kibana/bin
config   | Configuration files, including kibana.yml                                                                    | /etc/kibana
data     | Data files that Kibana and its plugins write to disk                                                         | /var/lib/kibana
optimize | Transpiled source code; some administrative actions, such as plugin installation, retranspile it at runtime  | /usr/share/kibana/optimize
plugins  | Plugin files; each plugin is installed in its own subdirectory                                               | /usr/share/kibana/plugins

Elasticsearch Index Management

# Create a new index and initialize the field mappings
PUT index_t_settlement_info
{
  "settings": {
    "index": {
      "number_of_shards": 5,
      "number_of_replicas": 1
    }
  },
  "mappings": {
    "properties": {
      "id": {
        "type": "long"
      }
    }
  }
}
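
These snippets use the Elasticsearch REST API, as in the Kibana Dev Tools console. As a sketch, the same request can be issued from the shell with curl (host and credentials as configured earlier):

curl -u elastic:elasticpassword -X PUT "http://172.18.112.10:9200/index_t_settlement_info" \
  -H 'Content-Type: application/json' \
  -d '{"settings":{"index":{"number_of_shards":5,"number_of_replicas":1}},"mappings":{"properties":{"id":{"type":"long"}}}}'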

# Create index alias
POST /_aliases
{
    "actions": [
        {"add": {"index":"index_t_settlement_info","alias":"t_settlement_info"}}
    ]
}

# Switch the alias to another index (this is an atomic operation)
POST /_aliases
{
    "actions": [
        {"remove": {"index":"index_t_settlement_info","alias":"t_settlement_info"}},
        {"add": {"index":"new_index_t_settlement_info","alias":"t_settlement_info"}}
    ]
}
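
To confirm which index the alias currently points to (same host and credentials assumed):

curl -u elastic:elasticpassword "http://172.18.112.10:9200/_alias/t_settlement_info?pretty"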

# Add a field to the index (existing field mappings cannot be modified, only new fields added)
PUT index_t_settlement_info/_mapping
{
  "properties": {
    "user_id": {
      "type": "keyword"
    }
  }
}
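
A quick check that the new field landed in the mapping (same host and credentials assumed):

curl -u elastic:elasticpassword "http://172.18.112.10:9200/index_t_settlement_info/_mapping?pretty"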

Logstash installation and configuration

1. Install Logstash (RPM mode)

Download and install the public signing key

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Add the yum repository configuration

vim /etc/yum.repos.d/logstash.repo
[logstash-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Install Logstash

yum install logstash

2. Logstash configuration and startup

logstash.yml

vim /etc/logstash/logstash.yml
# Enable automatic config reload
config.reload.automatic: true
# How often Logstash checks configuration files for changes
config.reload.interval: 3s

# Persistent queue
queue.type: persisted
# Durability control: force a checkpoint after every written event
queue.checkpoint.writes: 1
# Dead letter queue
dead_letter_queue.enable: true

# Enable Logstash node monitoring
xpack.monitoring.enabled: true
# Elasticsearch account and password
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: elasticpassword
# Elasticsearch node address list
xpack.monitoring.elasticsearch.hosts: ["172.18.112.10", "172.18.112.11", "172.18.112.12"]
# Discover other nodes in the Elasticsearch cluster
xpack.monitoring.elasticsearch.sniffing: true
# How often monitoring data is sent
xpack.monitoring.collection.interval: 10s
# Collect per-pipeline details for monitoring
xpack.monitoring.collection.pipeline.details.enabled: true

Enable on boot

systemctl daemon-reload
systemctl enable logstash.service

Start, stop, status, and restart

systemctl start logstash.service
systemctl stop logstash.service
systemctl status logstash.service
systemctl restart logstash.service
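
Once started, Logstash exposes a monitoring API on port 9600 by default; a simple liveness check from the same host:

curl "http://127.0.0.1:9600/?pretty"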

3. Logstash directory structure

Type     | Description                                                                                          | Default location
home     | Logstash home directory                                                                              | /usr/share/logstash
bin      | Binary scripts, including the Logstash server launcher and the logstash-plugin installer             | /usr/share/logstash/bin
settings | Settings files, including logstash.yml, jvm.options, and startup.options                             | /etc/logstash
config   | Logstash pipeline configuration files                                                                | /etc/logstash/conf.d/*.conf
logs     | Log files                                                                                            | /var/log/logstash
plugins  | Local non-Ruby-Gem plugins, each in its own subdirectory; recommended for development only           | /usr/share/logstash/plugins
data     | Data files used by Logstash and its plugins for any persistence needs                                | /var/lib/logstash

4. Importing MySQL data into Elasticsearch

1. Download and install the Java MySQL driver package

Download the mysql-connector-java JAR driver that matches your MySQL version.

  • MySQL version: 5.7.20-log
  • Driver package version: mysql-connector-java-5.1.48.tar.gz (any other recent 5.1.x release also works)
  • Official download address: dev.mysql.com/downloads/connector/... (click "Looking for previous GA versions?" to pick an older release)
  • Platform: choose the Platform Independent version

Create a directory for the Java driver package

mkdir /usr/share/logstash/java

Move the mysql-connector-java driver JAR into place

mv mysql-connector-java-5.1.48-bin.jar /usr/share/logstash/java

Change the owner of the java directory and its contents

chown -R logstash:logstash /usr/share/logstash/java

2. Task configuration (location: /etc/logstash/conf.d/*.conf)

  • MySQL-to-Elasticsearch import configuration: one task per configuration file
  • After a conf.d/*.conf file is modified, Logstash does not need to be restarted; the configuration reloads automatically (every 3 seconds)

Create a t_settlement_info directory so each task's files are stored separately

mkdir /etc/logstash/conf.d/t_settlement_info

Create the t_settlement_info.conf configuration

vim /etc/logstash/conf.d/t_settlement_info/t_settlement_info.conf
input {
  jdbc {
    id => "t_settlement_info.input_jdbc"
    # Database driver path (the java directory created above)
    jdbc_driver_library => "/usr/share/logstash/java/mysql-connector-java-5.1.48-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    # Database connection settings
    jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/database"
    jdbc_user => "mysql_user"
    jdbc_password => "mysql_password"
    # Task schedule in cron syntax; here the task runs once a minute
    schedule => "* * * * *"
    # SQL statement to execute; :sql_last_value is replaced with the saved tracking value (0 on the first run for numeric columns)
    statement => "SELECT * FROM t_settlement_info WHERE id > :sql_last_value ORDER BY id LIMIT 10000"
    # Whether to clear the previously saved run state
    clean_run => false
    # Track a column value instead of the last run timestamp; requires tracking_column
    use_column_value => true
    # Column to track; here the id column
    tracking_column => "id"
    # Type of the tracking column, either numeric or timestamp (default numeric)
    tracking_column_type => "numeric"
    # Save the result of the last run
    record_last_run => true
    # Where to save the last-run state (the last-run-metadata directory created below)
    last_run_metadata_path => "/usr/share/logstash/last-run-metadata/.logstash_jdbc_last_run.t_settlement_info"
    # Whether to lowercase column names; not needed when the columns are already lowercase
    lowercase_column_names => false
  }
}

output {
  elasticsearch {
    id => "t_settlement_info.output_elasticsearch"
    # Elasticsearch node address list
    hosts => ["172.18.112.10","172.18.112.11","172.18.112.12"]
    # Target index; t_settlement_info is the alias created earlier
    index => "t_settlement_info"
    # Update the document if it exists, insert it otherwise (upsert)
    action => "update"
    doc_as_upsert => true
    # Use the MySQL primary key as the Elasticsearch document id
    document_id => "%{id}"
    user => "elastic"
    password => "elasticpassword"
  }
}
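
Although the configuration reloads automatically, it can be worth validating the file before it is picked up. A sketch using Logstash's built-in config test (paths as above):

/usr/share/logstash/bin/logstash --path.settings /etc/logstash \
  -f /etc/logstash/conf.d/t_settlement_info/t_settlement_info.conf \
  --config.test_and_exit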

Create a last-run-metadata directory so each task records its last-run tracking value in its own file (by default Logstash writes a single shared file)

# Create the directory
mkdir /usr/share/logstash/last-run-metadata

# Change the directory owner
chown -R logstash:logstash /usr/share/logstash/last-run-metadata
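
After the first scheduled run, the plugin stores the last tracked id in this file as a small YAML document, so progress can be inspected (the value shown is illustrative):

cat /usr/share/logstash/last-run-metadata/.logstash_jdbc_last_run.t_settlement_info
# --- 12345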

3. Pipeline configuration (location: /etc/logstash/pipelines.yml)

  • After pipelines.yml is modified, Logstash does not need to be restarted; it rereads the configuration periodically (every 3 seconds)

Custom Pipeline Configuration

vim /etc/logstash/pipelines.yml
# Default pipeline: all tasks share one queue and compete for execution. Disable it so each task below gets its own pipeline and a slow task cannot block the others
#- pipeline.id: main
#  path.config: "/etc/logstash/conf.d/*.conf"

# This task uses its own queue
- pipeline.id: t_settlement_info
  path.config: "/etc/logstash/conf.d/t_settlement_info/t_settlement_info.conf"

# This task uses its own queue
- pipeline.id: t_settlement_multi_info
  path.config: "/etc/logstash/conf.d/t_settlement_info/t_settlement_multi_info.conf"

# This task uses its own queue
- pipeline.id: t_settlement_slot_info
  path.config: "/etc/logstash/conf.d/t_settlement_info/t_settlement_slot_info.conf"
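
Once loaded, the individual pipelines can be listed through the Logstash monitoring API (default port 9600):

curl "http://127.0.0.1:9600/_node/pipelines?pretty"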

Topics: ElasticSearch