Elastic: https://www.elastic.co/
To make it easy to view the business logs of multiple hosts in one place, we use Filebeat, Redis, and Logstash to collect them:
(1) Filebeat monitors log files for changes and writes the new lines into Redis; each log line becomes one element of a Redis list stored under a specified key.
(2) Logstash monitors the list under the specified key in Redis, reads the data, and persists it to files on disk.
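Conceptually, the Redis list acts as a queue between the two: the producer appends entries at one end and the consumer pops them from the other. A minimal sketch with redis-cli, using a hypothetical key name (mylog_key is invented for illustration):
./redis-cli
RPUSH mylog_key '{"message":"a log line"}'   # producer side: append an entry to the list
LRANGE mylog_key 0 -1                        # inspect what is currently queued
LPOP mylog_key                               # consumer side: take the oldest entry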
Reference: https://jkzhao.github.io/2017/10/24/Filebeat Log Collector
Redis installation:
Reference:
(1) http://www.redis.cn/download.html
(2) https://zhuanlan.zhihu.com/p/34527270
1. Select the version you want to download: wget http://download.redis.io/releases/redis-4.0.11.tar.gz
2. Unzip installation package: tar xvf redis-4.0.11.tar.gz
3. Compile and install: run (1) make and then (2) make install.
Note: make install copies the Redis command binaries into /usr/local/bin; verify with ll /usr/local/bin.
4. In redis.conf, set daemonize to yes so Redis runs as a background daemon (see the snippet after this list);
5. Start the Redis service:
cd /usr/local/bin && ./redis-server /opt/apps/redis-4.0.11/redis.conf
6. Check the process: ps -ef | grep redis
7. Local client: cd /usr/local/bin && ./redis-cli, then log in and confirm the server is reachable. (If the port was changed, use ./redis-cli -p <port>.)
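A quick smoke test of the steps above, assuming the default port 6379 and the install path used in this guide (adjust both to your environment):
grep '^daemonize' /opt/apps/redis-4.0.11/redis.conf   # should print: daemonize yes
cd /usr/local/bin
./redis-server /opt/apps/redis-4.0.11/redis.conf      # start the daemon
./redis-cli ping                                      # expected reply: PONG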
Filebeat installation (on each host that produces business logs):
References:
(1) https://www.elastic.co/downloads/beats/filebeat
(2) https://blog.csdn.net/vip100549/article/details/79657574
1. Select the required version to download: wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.2-linux-x86_64.tar.gz
2. Unzip installation package: tar -xvf filebeat-6.4.2-linux-x86_64.tar.gz
3. Run it from the extracted directory: nohup ./filebeat -e -c filebeat.yml &
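Before detaching it with nohup, the configuration can be validated first; the test config subcommand exists in the Filebeat 6.x series:
cd filebeat-6.4.2-linux-x86_64
./filebeat test config -c filebeat.yml   # verifies that filebeat.yml parses and is valid
nohup ./filebeat -e -c filebeat.yml &    # -e logs to stderr, -c names the config file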
Logstash installation:
(1) Download the installation package and extract it: https://www.elastic.co/downloads/logstash
(2) Prepare the configuration file to load; it reads data from Redis, and we name it logstash_redis.conf.
(3) Run the startup command: nohup ./bin/logstash -f ./config/logstash_redis.conf &
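Logstash can also check a pipeline file without starting it; --config.test_and_exit is a standard Logstash flag:
./bin/logstash -f ./config/logstash_redis.conf --config.test_and_exit   # validate the pipeline only
nohup ./bin/logstash -f ./config/logstash_redis.conf &                  # start detached once it passes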
Configuration:
1. Installation is straightforward; just follow the official documentation. The configuration is the part that matters.
2. Filebeat configuration:
Function: read multiple log files from disk and push them to Redis under different keys, so that Logstash can consume each stream separately;
Example:
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  # Change to true to enable this prospector configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /opt/logs/service1/base.log
  fields:
    log_topics: M100_service1_baselog
    log_ip: 192.168.1.100
  scan_frequency: 1s
- type: log
  enabled: true
  paths:
    - /opt/logs/service2/base.log
  fields:
    log_topics: M101_service2_baselog
    log_ip: 192.168.1.101
  scan_frequency: 1s

#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#============================= File output =============================
#output.file:
#  path: "/tmp/logs"
#  filename: 'outputFile.txt'

#============================= Redis output =============================
output.redis:
  hosts: ["192.168.1.200:6379"]
  #password: ""
  key: "%{[fields.log_topics]}"
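Once Filebeat runs with this configuration, the lists should start filling in Redis. A quick check with redis-cli, using the host and key values from the example above:
./redis-cli -h 192.168.1.200 LLEN M100_service1_baselog        # number of queued entries
./redis-cli -h 192.168.1.200 LRANGE M100_service1_baselog 0 0  # peek at the oldest entry (a JSON document)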
3. Logstash configuration:
input {
  ## Read service 1 logs
  redis {
    data_type => "list"
    key => "M100_service1_baselog"
    host => "192.168.1.100"
    port => 6380
    threads => 2
    type => "M100_service1_baselog"
  }
  redis {
    data_type => "list"
    key => "M101_service1_baselog"
    host => "192.168.1.101"
    port => 6380
    threads => 2
    type => "M101_service1_baselog"
  }
  ## Read service 2 logs
  redis {
    data_type => "list"
    key => "M110_service2_baselog"
    host => "192.168.1.110"
    port => 6380
    threads => 2
    type => "M110_service2_baselog"
  }
  redis {
    data_type => "list"
    key => "M111_service2_baselog"
    host => "192.168.1.111"
    port => 6380
    threads => 2
    type => "M111_service2_baselog"
  }
}

output {
  ## Output service 1 logs
  if [type] == "M100_service1_baselog" {
    file {
      path => "/opt/logs/logstash/service1_baselog-%{+YYYY.MM.dd}.log"
      codec => line { format => "[%{[fields][log_ip]}].%{message}" }
    }
  } else if [type] == "M101_service1_baselog" {
    file {
      path => "/opt/logs/logstash/service1_baselog-%{+YYYY.MM.dd}.log"
      codec => line { format => "[%{[fields][log_ip]}].%{message}" }
    }
  }
  ## Output service 2 logs
  else if [type] == "M110_service2_baselog" {
    file {
      path => "/opt/logs/logstash/service2_baselog-%{+YYYY.MM.dd}.log"
      codec => line { format => "[%{[fields][log_ip]}].%{message}" }
    }
  } else if [type] == "M111_service2_baselog" {
    file {
      path => "/opt/logs/logstash/service2_baselog-%{+YYYY.MM.dd}.log"
      codec => line { format => "[%{[fields][log_ip]}].%{message}" }
    }
  }
}
Note: with this configuration, the logs of the several service1 hosts are merged into a single file, and likewise for service2.
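Given the line codec above, each record in the merged file carries the source host's IP as a prefix. A hypothetical sample of the result (the log content is invented for illustration):
tail -f /opt/logs/logstash/service1_baselog-2018.11.01.log
[192.168.1.100].2018-11-01 12:00:00 INFO service1 started
[192.168.1.101].2018-11-01 12:00:01 INFO request handled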