ELK-7.3 Local Deployment
I. Introduction
1. Logstash
Logstash is a data-processing tool designed to collect and parse logs. The whole stack can be thought of as an MVC model: Logstash is the controller layer, Elasticsearch the model layer, and Kibana the view layer. Data is first passed to Logstash, which filters and formats it (converting it to JSON), and then to Elasticsearch, which stores and indexes it for search. Kibana provides the front-end pages for search and chart visualization, rendering the data returned by calls to the Elasticsearch API. Logstash and Elasticsearch are written in Java, while Kibana uses the Node.js framework.
Logstash architecture
When Logstash runs, three parts of its working configuration need to be set:
input: sets the data source
filter: optionally processes and filters the data; complex processing logic is not recommended here, and this step can be omitted
output: sets the output target
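The three parts above map directly onto sections of a pipeline file placed under `path.config`. As a sketch, the file name, port, field, and index pattern below are all assumptions for illustration:

```
# Hypothetical pipeline file, e.g. /data01/elk/logstash/config/conf.d/beats.conf
input {
  beats {
    port => 5044                         # listen for Beats shippers (assumed port)
  }
}
filter {
  # keep filter logic light, as noted above
  mutate {
    add_field => { "env" => "prod" }     # example field, purely illustrative
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.11.12:9200"]
    index => "logs-%{+YYYY.MM.dd}"       # daily index, assumed naming scheme
  }
}
```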
2. Elasticsearch
Elasticsearch is a distributed, highly scalable, near-real-time search and data analysis engine. It makes it easy to search, analyze, and explore large amounts of data, and its horizontal scalability helps make data more valuable in production environments. Elasticsearch works roughly as follows: the user submits data to Elasticsearch, an analyzer (tokenizer) splits the text into terms, and the terms are stored in the index together with their weights; when the user searches, the results are scored and ranked according to those weights before being returned.
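Once the cluster from this guide is running, the submit-then-search flow can be exercised through Elasticsearch's REST API; the index name `demo` and the sample document are made up for illustration:

```shell
# Index a sample document (index name "demo" is an assumption)
curl -X PUT "http://192.168.11.12:9200/demo/_doc/1" \
  -H 'Content-Type: application/json' \
  -d '{"message": "user login failed"}'

# Full-text search; hits come back scored and ranked by relevance
curl "http://192.168.11.12:9200/demo/_search?q=message:login&pretty"
```

These commands require the deployed cluster from section II to be reachable.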
3. Kibana
Kibana is an open-source analysis and visualization platform for Elasticsearch, used to interactively search and view data stored in Elasticsearch indices. With Kibana, advanced data analysis and presentation can be done through a variety of charts.
Kibana makes large amounts of data easier to understand. It is simple to use, and its browser-based user interface lets you quickly create dashboards that display the results of Elasticsearch queries in real time.
II. Environmental Deployment
1. Deployment Planning
Application | Version | Node |
---|---|---|
logstash | 7.3 | 192.168.11.11 |
elasticsearch | 7.3 | 192.168.11.11,192.168.11.12,192.168.11.13 |
kibana | 7.3 | 192.168.11.13 |
Installation package download: https://pan.baidu.com/s/17BeXEOIAWCTcr-qCI_1cpA (extraction code: 9uk4)
2. Environmental Preparation
(1) Modify file limits
```
vi /etc/security/limits.conf
# Append the following content
root soft nofile 65535
root hard nofile 65535
* soft nofile 65535
* hard nofile 65535
* soft nproc 65535
* hard nproc 65535
* soft memlock unlimited
* hard memlock unlimited
```
(2) Adjust the process limits
```
vi /etc/security/limits.d/20-nproc.conf
# Adjust to the following configuration
* soft nproc 65535
root soft nproc unlimited
```
(3) Adjust virtual memory and the maximum number of open files
```
vi /etc/sysctl.conf
# Append the following content
vm.max_map_count=655360
fs.file-max=655360
```
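If a reboot is not convenient at this point, the new kernel parameters can also be loaded immediately (run as root):

```shell
# Load the parameters from /etc/sysctl.conf into the running kernel
sysctl -p
```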
(4) Restart the system for the changes to take effect
```
reboot
```
(5) Open ports
```
firewall-cmd --add-port=9200/tcp --permanent
firewall-cmd --add-port=9300/tcp --permanent
# Reload the firewall rules
firewall-cmd --reload
```

Or stop the firewall entirely:

```
systemctl stop firewalld
systemctl disable firewalld
```
(6) Create the storage directory
```
mkdir /data01/elk/
```
3. Java Environment Deployment
```
yum -y install jdk-8u162-linux-x64.rpm
# Check the Java version
java -version
```
4. Logstash Deployment
```
# 1. Install the rpm package
yum -y install logstash-7.3.0.rpm

# 2. Back up, then modify the configuration file
cp logstash.yml{,.bak}

# logstash.yml after the changes
path.data: /data01/elk/logstash/data
pipeline.workers: 4
pipeline.batch.size: 1024
path.config: /data01/elk/logstash/config/conf.d
path.logs: /data01/elk/logstash/logs
http.host: "0.0.0.0"
http.port: 9600

# startup.options after the changes (optional; can be left unchanged)
LS_PIDFILE=/data01/elk/logstash/logstash.pid
LS_GC_LOG_FILE=/data01/elk/logstash/logs/gc.log

# jvm.options: adjust the heap size according to the host's memory
vim jvm.options
-Xms1g
-Xmx1g
```
Create the corresponding directories
```
mkdir -pv /data01/elk/logstash/{data,logs,config}
chown -R 1000:1000 /data01/elk/logstash
```
Start Logstash
```
systemctl start logstash
systemctl enable logstash
```
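Before relying on the service, the pipeline configuration can be validated up front; this assumes the rpm's default install and settings paths:

```shell
# Parse the pipeline configuration and exit without starting Logstash
# (-t is short for --config.test_and_exit)
/usr/share/logstash/bin/logstash --path.settings /etc/logstash -t
```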
5. Elasticsearch Deployment
This deploys a three-node cluster in which every node is both master-eligible and a data node.
Install elasticsearch
```
yum -y install elasticsearch
```
Configuring elasticsearch
```
# elasticsearch.yml
cluster.name: elk
node.name: elk02
# Add custom attributes to the node
node.attr.rack: r1
node.master: true
#node.voting_only: true
node.data: true
#node.ingest: true
#node.ml: false
# Enable xpack monitoring
xpack.monitoring.enabled: true
# Disable machine learning
xpack.ml.enabled: false
# Whether to provide external services over HTTP; defaults to true (enabled)
#http.enabled: true
cluster.remote.connect: false
path.data: /data01/elasticsearch/lib,/data02/elasticsearch/lib
path.logs: /data01/elasticsearch/logs
# Lock the physical memory to prevent es memory from being swapped out, i.e.
# keep es off the swap partition; frequent swapping leads to high IOPS
bootstrap.memory_lock: true
network.host: 192.168.11.12
http.port: 9200
# discovery.zen.ping.unicast.hosts is the old (pre-7.x) name for this setting
discovery.seed_hosts: ["192.168.11.11","192.168.11.12","192.168.11.13"]
# New parameter in Elasticsearch 7: lists the master-eligible candidate nodes
# so that a master can be elected when the cluster first starts
cluster.initial_master_nodes: ["192.168.11.11","192.168.11.12","192.168.11.13"]
# Start recovering data once N nodes are up; defaults to 1
gateway.recover_after_nodes: 2
# Require explicit index names (no wildcards or _all) for delete/close actions
action.destructive_requires_name: true
network.tcp.no_delay: true
network.tcp.keep_alive: true
network.tcp.reuse_address: true
network.tcp.send_buffer_size: 128mb
network.tcp.receive_buffer_size: 128mb
#transport.tcp.port: 9301
transport.tcp.compress: true
http.max_content_length: 200mb
# Enable cross-origin (CORS) access
http.cors.enabled: true
http.cors.allow-origin: "*"
cluster.fault_detection.leader_check.interval: 15s
discovery.cluster_formation_warning_timeout: 30s
cluster.join.timeout: 120s
cluster.publish.timeout: 90s
cluster.routing.allocation.cluster_concurrent_rebalance: 16
cluster.routing.allocation.node_concurrent_recoveries: 16
cluster.routing.allocation.node_initial_primaries_recoveries: 16
```
On the other two nodes, only node.name and network.host need to be changed to the values for the corresponding node.
Lift the memory-lock limit for the systemd-started service so it matches the system limits configured above
```
sed -i '39c LimitMEMLOCK=infinity' /usr/lib/systemd/system/elasticsearch.service
```
Create corresponding directory
```
mkdir -pv /data01/elk/elasticsearch/{lib,logs}
chown -R elasticsearch:elasticsearch /data01/elk/elasticsearch
```
Start the elasticsearch service
```
# Reload systemd unit files
systemctl daemon-reload
systemctl start elasticsearch
systemctl enable elasticsearch
```
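After the service is up on all three hosts, the cluster state can be verified through the standard REST endpoints (run from any machine that can reach the cluster):

```shell
# Cluster health; with all three nodes joined, status should be "green"
curl "http://192.168.11.12:9200/_cluster/health?pretty"

# List the nodes; the elected master is marked with *
curl "http://192.168.11.12:9200/_cat/nodes?v"
```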
6. Kibana Deployment
Install kibana
```
yum -y install kibana-7.3.0-x86_64.rpm
```
Configure kibana
Note: it is best to back up the configuration file before modifying it.
```
# vim kibana.yml
server.port: 5601
server.host: "192.168.11.13"
elasticsearch.hosts: ["http://192.168.11.12:9200"]
i18n.locale: "zh-CN"
```
Start kibana
```
systemctl start kibana
systemctl enable kibana
```
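A quick check that Kibana is actually serving (it may take a minute to finish starting; this requires the running deployment):

```shell
# Kibana should respond on port 5601 once startup completes
curl -I "http://192.168.11.13:5601"
```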
Errors can be troubleshot with:
```
journalctl -u elasticsearch -f
tail -500f /data01/elk/elasticsearch/logs/elk.log
```