In the previous article, we discussed how to set up Hadoop in a virtual machine. Building on that, this article covers the extra steps that the peculiarities of cloud servers require.
Cluster SSH password-free login settings
- hosts file settings
vi /etc/hosts and add (on a cloud server, use the node's own local intranet IP for itself and the other node's IP for the peer):
<local intranet IP>  master
<slave IP>           slave
- hostname modification (e.g. `hostnamectl set-hostname master` on systemd systems)
- ssh settings
https://blog.csdn.net/No_Game_No_Life_/article/details/87969819#Hadoop_75
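The linked post covers the details; as a minimal sketch (assuming root login and the hosts entries above are in place), password-free SSH from the master can be set up roughly like this:

```shell
# On the master: generate a key pair (skip if ~/.ssh/id_rsa already exists)
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa

# Copy the public key to the slave, and to the master itself,
# since start-all.sh also SSHes into localhost
ssh-copy-id root@slave
ssh-copy-id root@master

# Verify: this should print the slave's hostname without a password prompt
ssh root@slave hostname
```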
Hadoop installation configuration
Open ports
In the cloud security group, open: 50070 (NameNode web UI), 9000 (HDFS NameNode RPC), 9001, 8088 (YARN ResourceManager web UI), 10020 (MapReduce JobHistory server)
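These ports are usually opened in the cloud console's security group; if the node also runs a local firewall (firewalld here is an assumption about your distribution), something like the following works as well:

```shell
# Open the Hadoop ports in firewalld (run on each node, as root)
for port in 50070 9000 9001 8088 10020; do
    firewall-cmd --permanent --add-port=${port}/tcp
done
firewall-cmd --reload
```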
Configuration
- Configuration on Master
vi etc/hadoop/slaves
slave
localhost   # optional: keep this line only if you also want a DataNode on the master
vi etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
vi etc/hadoop/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///root/hadoop/tmp/dfs/name</value>
  </property>
</configuration>
vi etc/hadoop/mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
vi etc/hadoop/yarn-site.xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
vi etc/hadoop/hadoop-env.sh
export JAVA_HOME=/opt/java/jdk1.8.0_162
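Before starting any daemons, it is worth confirming that the JAVA_HOME path set above actually points at a working JDK:

```shell
# Should print the Java version; if this fails, fix JAVA_HOME in hadoop-env.sh
/opt/java/jdk1.8.0_162/bin/java -version
```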
- Configuration on Slave
vi etc/hadoop/slaves
slave
localhost   # the slaves file is only read by the node that runs the start scripts, so on the slave it can be left blank
vi etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
vi etc/hadoop/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///root/hadoop/tmp/dfs/data</value>
  </property>
</configuration>
vi etc/hadoop/mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
</configuration>
vi etc/hadoop/yarn-site.xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
vi etc/hadoop/hadoop-env.sh
export JAVA_HOME=/opt/java/jdk1.8.0_162
Format HDFS (run on the Master only; the NameNode needs formatting just once)
hadoop namenode -format    # deprecated alias of: hdfs namenode -format
Start Hadoop (on the Master)
sbin/start-all.sh
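start-all.sh is deprecated in Hadoop 2.x; an equivalent is to start HDFS and YARN separately, which makes failures easier to pinpoint:

```shell
sbin/start-dfs.sh     # NameNode on master, DataNode on the slave
sbin/start-yarn.sh    # ResourceManager on master, NodeManager on the slave
```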
Start job history server
sbin/mr-jobhistory-daemon.sh start historyserver
Verify the installation
See https://blog.csdn.net/No_Game_No_Life_/article/details/87969819 for details.
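As a quick check, jps on each node should list the expected daemons (process IDs will differ; if you kept localhost in the master's slaves file, the master will also show DataNode and NodeManager):

```shell
# On the master
jps
# Expect: NameNode, SecondaryNameNode, ResourceManager, JobHistoryServer

# On the slave
jps
# Expect: DataNode, NodeManager
```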