Installing Kerberos authentication on CDH5
BUG
A warning up front: the Kerberos build 1.15.1-18.el7.x86_64 has a BUG; do not install this version!!!! If you have already installed it, don't be afraid: the fix is to upgrade Kerberos.
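Before upgrading, it helps to confirm which build is actually installed. A hedged sketch (the package names below are the stock CentOS 7 krb5 packages; the upgrade line is left commented out so nothing changes without review):

```shell
# record the installed krb5 build; look for 1.15.1-18.el7 in the output
{ rpm -q krb5-libs krb5-workstation 2>/dev/null \
    || echo "rpm query failed - run this check on the CentOS 7 host"; } > krb5_check.txt
cat krb5_check.txt
# once the buggy build is confirmed, upgrade on the host:
# sudo yum update -y krb5-libs krb5-server krb5-workstation
```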
1. System environment
1. Operating system: CentOS Linux release 7.5.1804 (Core)
2. CDH: 5.16.2-1.cdh5.16.2.p0.8
3. Kerberos: 1 ...
Posted by elum.chaitu on Sat, 01 Jan 2022 04:23:06 +0100
2021-12-30: the 58th step on the road to becoming a programmer
catalogue
1. Introduction to Azkaban
2. System architecture of Azkaban
3. Installation modes of Azkaban
3.1 Solo Server installation
3.1.1 Introduction to Solo Server
3.1.2 Installation steps
3.2 Multi Exec Server installation
3.2.1 Node layout
3.2.2 Configure MySQL
3.2.3 Configure the web server
3.2.4 Configure the exec se ...
Posted by evolve4 on Sat, 01 Jan 2022 04:07:23 +0100
Calling MapReduce to count the occurrences of each word in a file
Note: the required installation and configuration steps are covered in the reference materials at the end
1. Upload the files to be analyzed (no fewer than 100,000 English words) to HDFS
demo.txt is the file to be analyzed
Start Hadoop
Upload the file to the input folder on HDFS
Ensure successful upload
2. Call MapReduce to count the n ...
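As a quick sanity check of what the MapReduce job should produce, the same word count can be sketched locally with shell tools (the demo.txt built here is a tiny stand-in for the real 100,000-word file, not the author's data):

```shell
# create a tiny stand-in for demo.txt
printf 'hello world hello hadoop\nworld of hadoop\n' > demo.txt
# split into one word per line, then count occurrences, highest first;
# this mirrors the map (tokenize) and reduce (count per key) phases
tr -s '[:space:]' '\n' < demo.txt | sort | uniq -c | sort -rn > counts.txt
cat counts.txt
```

Each output line is a count followed by the word, which is the same (word, count) result the MapReduce job writes to its output directory.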
Posted by crazytoon on Fri, 31 Dec 2021 05:40:31 +0100
Hive compression and storage
1. Hadoop compression configuration
1.1 Compression codecs supported by MR
To support a variety of compression/decompression algorithms, Hadoop provides codec (encoder/decoder) classes, as shown in the following table:
Comparison of compression performance
performance comparison ...
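To make the configuration this section builds toward concrete, here is a minimal session-level sketch of enabling compressed job output in Hive (the property names are the stock Hive/Hadoop ones; Snappy is just one common codec choice, and availability should be verified against your cluster):

```sql
-- enable compression of Hive's final MR job output for this session
SET hive.exec.compress.output=true;
SET mapreduce.output.fileoutputformat.compress=true;
SET mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
```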
Posted by sparrrow on Thu, 30 Dec 2021 11:08:41 +0100
hadoop installation and deployment
Download and install
Official website: https://hadoop.apache.org/
System: CentOS 7
Download hadoop-3.1.3.tar.gz
Extract it to /opt/module
Then configure the environment variables:
#HADOOP_HOME
export HADOOP_HOME=/opt/module/hadoop-3.1.3
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
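The extract step above boils down to one tar call. A runnable sketch, using a stub archive so the command shape can be verified anywhere (on the real host, use the tarball downloaded from the official site and point -C at /opt/module):

```shell
# build a stub archive standing in for the real hadoop-3.1.3.tar.gz download
mkdir -p hadoop-3.1.3/bin && touch hadoop-3.1.3/bin/hadoop
tar -zcf hadoop-3.1.3.tar.gz hadoop-3.1.3
rm -rf hadoop-3.1.3
# the actual install step: extract into the module directory
mkdir -p module                           # real host: /opt/module
tar -zxf hadoop-3.1.3.tar.gz -C module    # real host: -C /opt/module
ls module/hadoop-3.1.3/bin
```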
Three server ...
Posted by ma5ect on Wed, 29 Dec 2021 15:04:18 +0100
Big data - how to use Hadoop on Docker
Introduction: since Hadoop is software designed for clusters, configuring it across multiple machines is unavoidable when learning and using it, which creates many obstacles for beginners. There are two main ones:
Expensive computer clusters. A cluster environment composed of multiple computers requires ex ...
Posted by titoni on Tue, 28 Dec 2021 23:21:47 +0100
Big data learning tutorial SD version Chapter 9 [Flume]
Flume is a log collection tool, and the focus here is on using it; it is, after all, a tool!
A distributed framework for collecting, processing, and aggregating streaming data.
Data is collected by writing a collection scheme, i.e. a configuration file; the available configuration options are in the official documentation.
1. Flume architecture
Agent: a JVM process
Source: receive data ...
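The Agent/Source/Channel/Sink wiring above can be made concrete with the minimal single-agent configuration from the official Flume getting-started guide (the agent name a1 and port 44444 are the documented example values, not requirements):

```
# example.conf: one agent with a netcat source, memory channel, logger sink
a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

a1.channels.c1.type = memory

a1.sinks.k1.type = logger

# wire the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
```

Per the official docs, such an agent is started with `bin/flume-ng agent --conf conf --conf-file example.conf --name a1`.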
Posted by mikeylikesyou on Tue, 28 Dec 2021 16:52:10 +0100
Flume cluster installation and deployment, plus introductory Flume cases: the official case of monitoring port data, and real-time monitoring of multiple appended files in a specified directory
Introduction: this is a learning-note blog about installing and deploying Flume. It covers Flume installation and deployment and two introductory cases: the official case of monitoring port data, and real-time tracking of changes to multiple files in a specified directory. If there are mistakes, p ...
Posted by 3.grosz on Tue, 28 Dec 2021 09:57:51 +0100
Hadoop environment installation
Hadoop distributed environment
0. Preliminary preparation
Create normal user
# Create fzk user
useradd fzk
# Modify fzk user's password
passwd fzk
# Give the fzk user root privileges, so that later commands can be run as root via sudo (edit the /etc/sudoers file; add the line below the %wheel entry)
fzk ALL=(ALL) ...
Posted by phelpsa on Sun, 26 Dec 2021 10:23:42 +0100
Detailed use of HDFS
HDFS
1. Shell operation
upload
-moveFromLocal: cut and paste from local to HDFS
hadoop fs -moveFromLocal <local file> <HDFS directory>
-copyFromLocal: copy files from the local file system to an HDFS path
hadoop fs -copyFromLocal <local file> <HDFS directory>
-put: equivalent to copyFromLocal; the production environment is more used to ...
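A hypothetical upload session tying the three commands together (the paths and file names are illustrative only; the commands need the hadoop CLI and a running HDFS, so the sketch falls back to a note when neither is present):

```shell
if command -v hadoop >/dev/null 2>&1; then
  hadoop fs -mkdir -p /user/demo/input
  hadoop fs -moveFromLocal ./a.txt /user/demo/input   # a.txt is removed locally
  hadoop fs -copyFromLocal ./b.txt /user/demo/input   # b.txt stays locally
  hadoop fs -put ./c.txt /user/demo/input             # same effect as copyFromLocal
  hadoop fs -ls /user/demo/input
else
  echo "hadoop CLI not available - run against a live cluster"
fi > hdfs_session.txt 2>&1
cat hdfs_session.txt
```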
Posted by alcedema on Sat, 25 Dec 2021 17:56:36 +0100