How to view Flink job execution plan
When an application's requirements are relatively simple, the data transformations may involve only a handful of operators; but as the requirements grow more and more complex, the number of operators in a job can reach dozens or even hundreds. With so many operators, the whole application becomes very complex, so it w ...
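For reference, a minimal sketch of printing a job's execution plan (the pipeline itself is illustrative): StreamExecutionEnvironment.getExecutionPlan() returns the plan as a JSON string, which can be pasted into the Flink plan visualizer at https://flink.apache.org/visualizer/.

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ExecutionPlanDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // a tiny illustrative pipeline
        env.fromElements(1, 2, 3)
           .map(i -> i * 2)
           .returns(Types.INT) // type hint needed because lambdas erase generic types
           .print();

        // prints the plan as JSON without executing the job
        System.out.println(env.getExecutionPlan());
    }
}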
Posted by navtheace on Sun, 05 Sep 2021 02:17:18 +0200
Real-time log analysis
Germeng Society
AI: Keras PyTorch MXNet TensorFlow PaddlePaddle deep learning practice (updated from time to time)
4.4 Real-time log analysis
Learning objectives
Target
Master the connection between Flume and Kafka
We have collected the log data into Hadoop, but when doing real-time ana ...
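As a reference point, a minimal sketch of a Flume agent whose sink writes events into Kafka (the agent name a1, the tailed file, broker address, and topic name are all assumptions):

# tail a log file and forward each line to a Kafka topic
a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /var/log/app/access.log
a1.sources.r1.channels = c1

a1.channels.c1.type = memory

a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.bootstrap.servers = localhost:9092
a1.sinks.k1.kafka.topic = access-log
a1.sinks.k1.channel = c1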
Posted by pontiac007 on Thu, 18 Jun 2020 06:33:47 +0200
Spring Boot Principles in Depth: Dependency Management
2.2 Principles in Depth
To implement a web service with the traditional Spring framework, you need to import various dependency JAR packages and then write the corresponding XML configuration files, and so on. Spring Boot, by comparison, is more convenient, fast, and efficient. So how does Spring Boot actually achieve that?
2.2.1 Dependency Management
Question: Why doesn't ...
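For illustration, a minimal pom.xml fragment showing the mechanism behind Spring Boot's dependency management: inheriting spring-boot-starter-parent pins the versions of common dependencies, so starters can be declared without a <version> tag (the parent version below is only an example):

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.3.1.RELEASE</version>
</parent>

<dependencies>
    <!-- no <version> needed: it is managed by the parent -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
</dependencies>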
Posted by joshi_v on Mon, 15 Jun 2020 21:30:25 +0200
ELK log system: theory and several schemes
Log system
Scenario
In simple log-analysis scenarios, grep and awk directly on the log files are enough to extract the information you want. In large-scale scenarios, however, this approach is inefficient and raises problems such as how to archive large volumes of logs, how to speed up slow text searches, and how t ...
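As one concrete scheme, a minimal Logstash pipeline sketch that ships log files into Elasticsearch (the file path, host, and index name are assumptions):

input {
  file {
    path => "/var/log/app/*.log"      # assumed log location
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]       # assumed Elasticsearch address
    index => "app-logs-%{+YYYY.MM.dd}"
  }
}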
Posted by mschrank on Sat, 13 Jun 2020 13:10:05 +0200
Zabbix alarm information pushed to Kafka
Application scenario
The company where this friend works has high security requirements: Zabbix's network environment cannot connect to the Internet, so alerts cannot be sent from Zabbix directly to instant-messaging tools. This requires sending the alarm message to some middleware and forwarding it t ...
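As one way to implement the forwarding, a minimal sketch of a Kafka producer that a Zabbix alert script could invoke (the broker address, topic name, and class name are assumptions):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AlertForwarder {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed internal broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // the alert text would be passed in by the Zabbix alert script
        String alert = args.length > 0 ? args[0] : "test alert";

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("zabbix-alerts", alert)); // assumed topic
        }
    }
}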
Posted by lanas on Tue, 26 May 2020 19:11:42 +0200
Kafka Core API - Consumer Consumer
Automatic offset commits in the Consumer
The previous article described how to use the Producer API. Now that we know how to send messages to Kafka through the API, the producer/consumer model is still missing its consumer. This article therefore introduces the Consumer API, which consumes messages from Kafka and makes an application play the consumer role.
As always ...
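For reference, a minimal sketch of a consumer with automatic offset commits enabled (the broker address, group id, and topic are assumptions):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AutoCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed broker address
        props.put("group.id", "demo-group");               // assumed consumer group
        props.put("enable.auto.commit", "true");           // commit offsets automatically
        props.put("auto.commit.interval.ms", "1000");      // once per second
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic")); // assumed topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d, value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}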
Posted by itisprasad on Sun, 24 May 2020 20:58:35 +0200
Summary of various Flink errors and their solutions
Table is not an append-only table. Use the toRetractStream() in order to handle add and retract messages.
This happens because the dynamic table is not in append-only mode; it needs to be converted with toRetractStream:
tableEnv.toRetractStream[Person](table).print()
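For context, a self-contained sketch in the Java Table API showing why the conversion is needed: a GROUP BY produces an updating table, so it must be converted with toRetractStream rather than toAppendStream (the datagen table and query are illustrative):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class RetractStreamDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // an unbounded demo source; any registered table would do
        tableEnv.executeSql(
            "CREATE TEMPORARY TABLE Person (name STRING) WITH ('connector' = 'datagen')");

        // the aggregation keeps updating earlier results, so the table is not append-only
        Table result = tableEnv.sqlQuery(
            "SELECT name, COUNT(*) AS cnt FROM Person GROUP BY name");

        // each element is (true, row) for an insert/update and (false, row) for a retraction
        tableEnv.toRetractStream(result, Row.class).print();
        env.execute("retract-stream-demo");
    }
}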
Today, when starting a Flink task, an error was reported: "Caused by: jav ...
Posted by CaseyC1 on Thu, 07 May 2020 10:54:41 +0200
Interpretation of FlinkKafkaConsumer011 method
FlinkKafkaConsumer011.setStartFromEarliest()
/**
 * Specifies that the consumer reads from the earliest offset of all partitions.
 * This method causes the consumer to ignore any offsets committed in ZooKeeper / Kafka brokers.
 *
 * When the consumer is restored from a checkpoint or savepoint, this method has no effect. In that case, the offsets in the r ...
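For reference, a minimal usage sketch of setStartFromEarliest() (the broker address, group id, and topic are assumptions):

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;

public class EarliestOffsetDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.setProperty("group.id", "demo-group");              // assumed consumer group

        FlinkKafkaConsumer011<String> consumer =
                new FlinkKafkaConsumer011<>("demo-topic", new SimpleStringSchema(), props);
        consumer.setStartFromEarliest(); // ignore committed offsets and read from the beginning

        env.addSource(consumer).print();
        env.execute("earliest-offset-demo");
    }
}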
Posted by malec on Fri, 10 Apr 2020 13:33:32 +0200
Spring Boot Kafka advanced development
Introducing the dependencies
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <versi ...
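With the dependencies in place, a minimal sketch of the spring-kafka programming model: sending through the auto-configured KafkaTemplate and receiving with @KafkaListener (the topic and group names are assumptions):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class DemoMessaging {

    // auto-configured by Spring Boot from the spring.kafka.* properties
    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void send(String message) {
        kafkaTemplate.send("demo-topic", message); // assumed topic name
    }

    @KafkaListener(topics = "demo-topic", groupId = "demo-group") // assumed names
    public void listen(String message) {
        System.out.println("Received: " + message);
    }
}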
Posted by gnetcon on Mon, 30 Mar 2020 21:51:49 +0200
Download, installation, and configuration guide for Kafka's Confluent Platform in a multi-node cluster environment
1. Introduction
This topic provides instructions for installing a production-ready Confluent Platform configuration in a multi-node environment using a replicated ZooKeeper ensemble.
With this installation method, you can manually connect to each node, download the archive file, and ...
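For orientation, a minimal sketch of the settings a replicated ZooKeeper ensemble involves (three assumed hosts zk1/zk2/zk3; each node additionally needs a myid file in dataDir matching its server number):

# zookeeper.properties (same on every node)
dataDir=/var/lib/zookeeper
clientPort=2181
tickTime=2000
initLimit=5
syncLimit=2
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=zk3:2888:3888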
Posted by lomokev on Thu, 27 Feb 2020 08:46:28 +0100