Version
Version upgrade rationale: prevent the risks posed by the log4j vulnerability.
Unified version note: keep all components on the same version to avoid unnecessary compatibility problems.
Version selection:
- elasticsearch: 7.16.2
- logstash: 7.16.2
- filebeat: 7.16.2
Download instructions
Go to the es official website
Select the required installation package and version to download
filebeat
Configuration
Enter the filebeat-7.16.2 directory and open filebeat.yml; configure as follows:
```yaml
###################### Filebeat Configuration Example #########################

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# filestream is an input for collecting log messages from files.
- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - E:\Monitor file path\monitor.data
  encoding: utf-8
  fields:
    logType: monitorApi
  fields_under_root: true

- type: log
  enabled: true
  paths:
    - Monitor file path\monitor.data
  encoding: utf-8
  fields:
    logType: monitorErr
  fields_under_root: true
  #- c:\programdata\elasticsearch\logs\*

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  reload.period: 5s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

# ================================== General ===================================

# ================================= Dashboards =================================

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

# =============================== Elastic Cloud ================================

# ================================== Outputs ===================================

# ---------------------------- Elasticsearch Output ----------------------------

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# ============================= X-Pack Monitoring ==============================

# ============================== Instrumentation ===============================

# ================================= Migration ==================================
```
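Before starting, the configuration can be sanity-checked with filebeat's built-in test subcommands (run from the filebeat-7.16.2 directory):

```shell
# Validate the syntax of filebeat.yml
./filebeat test config -c filebeat.yml

# Check that the logstash output (localhost:5044) is reachable
./filebeat test output -c filebeat.yml
```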
Startup
```shell
### Startup (foreground)
./filebeat -e -c <filebeat configuration file>

### Background startup with a log file; run from the filebeat-*.* directory
nohup ./filebeat -e -c filebeat.yml -d "Publish" > nohup.out &

### Background startup without generating a log
./filebeat -e -c filebeat.yml -d "Publish" >/dev/null 2>&1 &
```
- The key is the trailing `>/dev/null 2>&1`. `/dev/null` is a virtual null device (much like a black hole in physics): anything redirected to it is silently discarded. A short demo follows this list.
- `>/dev/null` redirects standard output to this "black hole".
- `2>&1` redirects standard error to standard output; since standard output already points at the "black hole", standard error is discarded along with it.
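A minimal demonstration of why the order of the two redirections matters (`ls /nonexistent` is just an arbitrary command that writes to standard error):

```shell
# Both streams discarded: stdout goes to /dev/null first, then stderr follows stdout
ls /nonexistent >/dev/null 2>&1   # prints nothing

# Wrong order: stderr is duplicated to the terminal before stdout is redirected
ls /nonexistent 2>&1 >/dev/null   # the error message still appears
```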
logstash
Configuration
Enter logstash-7.16.2/bin and create a configuration file ***.conf (aaa.conf is created here) with the following content:
```conf
input {
  beats {
    port => 5044
    codec => json
  }
}
filter {
  date {
    match => [ "operateTime", "yyyy-MM-dd HH:mm:ss" ]
    target => "operateTime"
    timezone => "Asia/Shanghai"
  }
  geoip {
    source => "operatorIp"
    target => "geoip"
    database => "/opt/elk-7.16.2/logstash-7.16.2/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-7.2.8-java/vendor/GeoLite2-City.mmdb"
  }
}
output {
  #stdout { codec => rubydebug }
  if [logType] == "monitorApi" {
    elasticsearch {
      # Log indexes are split by month
      index => "monitorapi-%{+YYYY.MM}"
      hosts => ["192.168.8.35:9200"]
    }
  } else if [logType] == "monitorErr" {
    elasticsearch {
      # Log indexes are split by month
      index => "monitorerr-%{+YYYY.MM}"
      hosts => ["192.168.8.35:9200"]
    }
  } else {
    elasticsearch {
      # Log indexes are split by month
      index => "arlog-%{+YYYY.MM}"
      hosts => ["192.168.8.35:9200"]
      #document_type => "%{logType}"
      #document_type => "%{[fields][logType]}"  # The type configured in filebeat
      # If es has authentication enabled, add:
      #username => "admin"
      #password => "admin"
    }
  }
}
```
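For reference, a hypothetical input line this pipeline expects from filebeat (`codec => json`): `operateTime` and `operatorIp` feed the date and geoip filters above, `logType` selects the output index, and the `uri` field is purely illustrative:

```json
{"logType": "monitorApi", "operateTime": "2022-02-11 09:09:56", "operatorIp": "8.8.8.8", "uri": "/api/monitor/list"}
```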
Startup
```shell
### Startup (foreground)
./logstash -f easyweblog.conf

### Background startup with a log file; run from the logstash-*.* directory
### (for convenience, the aaa.conf file is placed in the bin directory here)
nohup ./bin/logstash -f bin/aaa.conf > nohup.out &

### Background startup without generating a log
./bin/logstash -f bin/aaa.conf >/dev/null 2>&1 &
```
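Before a background start, the pipeline file can be validated without actually running it:

```shell
# Parse and validate aaa.conf, then exit
./bin/logstash -f bin/aaa.conf --config.test_and_exit
```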
elasticsearch
Configuration
Enter elasticsearch-7.16.2/config and open elasticsearch.yml; configure as follows:
```yaml
# ======================== Elasticsearch Configuration =========================
# ---------------------------------- Cluster -----------------------------------
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /data/es-7.16.2/data
#
# Path to log files:
#
path.logs: /data/es-7.16.2/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
bootstrap.memory_lock: true
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 0.0.0.0
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1"]
# ---------------------------------- Various -----------------------------------
# ---------------------------------- Security ----------------------------------
#------ this configuration is for es-head -----
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: true
```
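Note: with network.host set to a non-loopback address, elasticsearch enforces its bootstrap checks on Linux. Two limits that commonly need raising beforehand (run as root; the values are the usual recommendations, adjust to your environment):

```shell
# Max virtual memory map areas; elasticsearch requires at least 262144
sysctl -w vm.max_map_count=262144

# Max open file descriptors for the session that starts elasticsearch
ulimit -n 65535
```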
Startup
Start with a dedicated non-root user (elasticsearch refuses to run as root).
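A minimal sketch of creating such a user (the user name es is an assumption; the paths follow the ones used in the configs above, adjust to your layout):

```shell
# As root: create a dedicated user and hand over the install and data directories
useradd es
chown -R es:es /opt/elk-7.16.2/elasticsearch-7.16.2 /data/es-7.16.2

# Switch to the new user before starting elasticsearch
su - es
```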
```shell
### Start elasticsearch directly (foreground)
./elasticsearch

### Background startup with a log file; run from the elasticsearch-*.*/bin directory
nohup ./elasticsearch > nohup.out &

### Background startup without generating a log
./elasticsearch >/dev/null 2>&1 &
```
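After startup, a quick sanity check that the node is up and reachable:

```shell
# Basic node info; should report version 7.16.2
curl http://localhost:9200/

# Cluster health; status green or yellow means the node is serving requests
curl "http://localhost:9200/_cluster/health?pretty"
```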
Problems encountered
1. How to view es data and use it during development
To make deployment and debugging easier, download es-head (full name: elasticsearch-head-master; the version does not matter), which displays the data in es visually and helps when writing code. Instructions:
- Download es-head, unzip it, enter the elasticsearch-head-master directory, and open index.html. Configure the es address as http://localhost:9200/ and connect: the Overview tab shows the indexes, and the Data Browser tab shows the ingested data.
- During development, the query conditions configured in the Basic Query tab correspond to the query conditions set in code, so they can serve as a reference (an equivalent raw request is sketched below).
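Under the hood, that Basic Query is an ordinary _search request; an equivalent query done by hand might look like this (the index name follows the logstash config above, and the month suffix is only an example):

```shell
curl -X GET "http://localhost:9200/monitorapi-2022.02/_search?pretty" \
  -H 'Content-Type: application/json' -d'
{
  "query": { "match": { "logType": "monitorApi" } },
  "size": 5
}'
```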
2. The index is not displayed
Solution:
- Check the filebeat log for monitoring entries for the corresponding file, such as [input.harvester] lines containing the path of the monitored file. If they are present, filebeat is reading the new log entries:

  ```
  2022-02-11T09:09:56.783+0800 INFO [input.harvester] log/harvester.go:309 Harvester started for file. {"input_id": "c51d15c6-d4c0-4f36-b556-cefc1e8e8340", "source": "E:\\project\\***\\data\\monitorApi\\monitor.data", "state_id": "native::1376256-202864-1710438242", "finished": false, "os_id": "1376256-202864-1710438242", "old_source": "E:\\project\\***\\data\\monitorApi\\monitor.data", "old_finished": true, "old_os_id": "1376256-202864-1710438242", "harvester_id": "e91e944d-0039-48f4-b830-a543d8c11bb0"}
  ```
- Enable the logstash console output by uncommenting `stdout { codec => rubydebug }` in the output section, then check whether logstash produces output; the collected events should be visible in its log.
- Check the es log: the index information appears after the node name; verify it is the expected index. If not, modify the output section of the logstash configuration file. Notes: 1. the if/else syntax is the same as Java's, minus the parentheses around the condition; 2. mind the field-reference rules: square brackets reference the fields set in filebeat, with the field name placed directly inside (e.g. [logType]).
- If the index is not found, first check whether the records arrived in es at all; they may be sitting under a different index (the command below lists all indexes). If so, go back to the previous step and check the output index configuration.
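To list every index that actually received data (useful for spotting records that landed under an unexpected index):

```shell
# Lists all indexes with document counts and store sizes
curl "http://192.168.8.35:9200/_cat/indices?v"
```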