Zeppelin installation and configuration, and how to recover when every command suddenly fails

Posted by ChaosKnight on Fri, 28 Jan 2022 05:00:43 +0100

Zeppelin installation and configuration

Prerequisite: before installing Zeppelin, make sure Hadoop and Hive are already running. This is very important!
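A quick way to check this, assuming a single-node setup like the one in this article (process names and ports may differ in your configuration), is to look at the running Java processes and the listening ports:

jps                      # should list NameNode, DataNode and a RunJar process for HiveServer2
ss -lnt | grep 10000     # HiveServer2 listens on port 10000 by default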

Step 1

Put the downloaded archive under the /opt/download/hadoop directory (that is where I put it; you can choose your own location).

cd /opt/download/hadoop
ls

Then just drag the archive into this directory (for example via your SSH client's file-transfer function).
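If drag-and-drop is not available (for example over a plain SSH session), copying the archive with scp from your local machine works just as well; the IP below is the example host used later in this article:

scp zeppelin-0.8.2-bin-all.tgz root@192.168.145.180:/opt/download/hadoop/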

Step 2

1. Decompress

Extract the archive (zeppelin-0.8.2-bin-all.tgz) into the /opt/software directory. Note the -C flag, which tells tar where to put the extracted files:

tar -zxvf /opt/download/hadoop/zeppelin-0.8.2-bin-all.tgz -C /opt/software

2. Rename

Rename the extracted directory:

cd /opt/software
mv zeppelin-0.8.2-bin-all zeppelin082
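A quick check that the extraction and rename worked (zeppelin-daemon.sh should be in the bin directory):

ls /opt/software                     # should now contain zeppelin082
ls /opt/software/zeppelin082/bin     # should contain zeppelin-daemon.sh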

Step 3

1. Rename zeppelin-env.sh.template and zeppelin-site.xml.template

cd /opt/software/zeppelin082/conf
mv zeppelin-env.sh.template zeppelin-env.sh
mv zeppelin-site.xml.template zeppelin-site.xml

2. Configure zeppelin-env.sh

vim zeppelin-env.sh

Add the following lines (it helps to open a second terminal window to look up the paths):

export JAVA_HOME=/opt/software/java/jdk              # path to your JDK
export HADOOP_CONF_DIR=/opt/software/hadoop2101      # path to your Hadoop installation

3. Configure zeppelin-site.xml

vim zeppelin-site.xml

Make the following two changes:

<property>
  <name>zeppelin.server.addr</name>
  <value>192.168.145.180</value>                 <!-- first change: use your own IP -->
  <description>Server binding address</description>
</property>

<property>
  <name>zeppelin.server.port</name>
  <value>8000</value>                            <!-- second change: set the port to 8000 -->
  <description>Server port.</description>
</property>
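To confirm both properties were changed, a simple grep over the file is enough:

grep -A 1 -E 'zeppelin.server.(addr|port)' /opt/software/zeppelin082/conf/zeppelin-site.xml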

Step 4

Modify the permissions on the HDFS directory that Hive uses:

hdfs dfs -chmod -R 777 /tmp
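You can verify the result afterwards; the permission column should read drwxrwxrwx:

hdfs dfs -ls -d /tmp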

Step 5

1. Configuration

Add Zeppelin's environment variables to the file where you already configure your environment variables. In my case that is /etc/profile.d/myenv.sh:

# jdk
export JAVA_HOME=/opt/software/java/jdk
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

#hive238
export HIVE_HOME=/opt/software/hive238
export PATH=$HIVE_HOME/bin:$PATH

# hadoop
export HADOOP_HOME=/opt/software/hadoop2101
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_CONF_DIR=$HADOOP_HOME
export HADOOP_LIBEXEC_DIR=$HADOOP_HOME/libexec
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native:$JAVA_LIBRARY_PATH
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

#sqoop
export SQOOP_HOME=/opt/software/sqoop146
export PATH=$PATH:$SQOOP_HOME/bin
export LOGDIR=$SQOOP_HOME/logs/

# zeppelin -------------------------------------------- environment variables
export ZEPPELIN_HOME=/opt/software/zeppelin082
export PATH=$ZEPPELIN_HOME/bin:$PATH                   # keep the :$PATH here, otherwise every existing command stops working (see the note below)

# ------------------------------------------------------ keep the standard command directories on PATH as well (note the leading $PATH:)
export PATH=$PATH:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin

Note: at this point some of you may suddenly find that no command can be executed any more; no matter what you type, the shell cannot find it. This happens when PATH gets overwritten (for example, writing export PATH=$ZEPPELIN_HOME/bin without the trailing :$PATH). Don't panic; here are two ways to recover.

The first solution

Type the following directly at the Linux command line and press Enter; it restores the search path for the standard shell commands in the current session:

export PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin

Once your commands work again, go back and correct the PATH line in the profile file.

The second way

The second method is more of a brute-force approach: when none of the system commands can be found, call vi by its absolute path and edit the profile file directly.

/bin/vi /etc/profile.d/myenv.sh

2. Don't forget to reload the profile after you finish configuring:

source /etc/profile
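After sourcing, a quick sanity check that PATH is intact and Zeppelin's scripts are found:

echo $PATH                   # should contain /opt/software/zeppelin082/bin plus the system directories
which zeppelin-daemon.sh     # should resolve to /opt/software/zeppelin082/bin/zeppelin-daemon.sh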

Step 6

Enter Zeppelin's bin directory and start Zeppelin:

zeppelin-daemon.sh start

At this point you can open your IP plus port 8000 in a browser to reach the Zeppelin page.

For example: 192.168.145.180:8000
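If the page does not come up, check the daemon status, whether the port is listening, and the log (the log path below follows Zeppelin's default layout under ZEPPELIN_HOME and may differ on your host):

zeppelin-daemon.sh status                                  # is the Zeppelin process running?
ss -lnt | grep 8000                                        # the configured port should be in LISTEN state
tail -n 50 /opt/software/zeppelin082/logs/zeppelin-*.log   # startup errors end up here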

Step 7

Configure the Hive interpreter

1. Copy hive-site.xml into Zeppelin's conf directory

cp /opt/software/hive238/conf/hive-site.xml /opt/software/zeppelin082/conf

2. Copy the required jar packages (the paths are long, so tab completion is recommended)

cd /opt/software/zeppelin082/interpreter/jdbc
cp /opt/software/hadoop2101/share/hadoop/common/hadoop-common-3.1.3.jar  /opt/software/zeppelin082/interpreter/jdbc
cp /opt/software/hadoop2101/lib/hive-jdbc-3.1.2.jar ./
cp /opt/software/hadoop2101/lib/hive-common-3.1.2.jar ./
cp /opt/software/hadoop2101/lib/hive-serde-3.1.2.jar ./
cp /opt/software/hadoop2101/lib/hive-service-rpc-3.1.2.jar ./
cp /opt/software/hadoop2101/lib/hive-service-3.1.2.jar ./
cp /opt/software/hadoop2101/lib/curator-client-2.12.0.jar ./
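Since the Hive and curator jars all follow the same naming pattern, a small loop saves typing; this is only a convenience sketch that assumes the same /opt/software/hadoop2101/lib location used above:

cd /opt/software/zeppelin082/interpreter/jdbc
for j in hive-jdbc hive-common hive-serde hive-service-rpc hive-service curator-client; do
    cp /opt/software/hadoop2101/lib/${j}-*.jar ./    # copies e.g. hive-jdbc-3.1.2.jar
done
ls *.jar                                             # confirm the copied jars are present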

Step 8

The following operations are done in the Zeppelin web UI.

1. Open the web page -> anonymous (upper right corner) -> Interpreter -> Create

2. Configure content

Interpreter Name  => hive238                          (use your own Hive interpreter name)
Interpreter group => jdbc
default.driver    => org.apache.hive.jdbc.HiveDriver
default.url       => jdbc:hive2://192.168.145.180:10000
default.user      => root
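Before restarting and using the interpreter, it can save time to test the same JDBC URL with beeline (shipped with Hive); if this works, the settings above should too:

beeline -u jdbc:hive2://192.168.145.180:10000 -n root -e "show databases;"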

3. Restart

In the Interpreter list, find hive238 and click restart in the upper right corner.

4. Create NoteBook

Find Notebook next to the Zeppelin logo in the upper left corner of the page and click the drop-down arrow -> Create new note

Note Name => whatever you like

Default Interpreter => hive238

Create

-----------------------
%hive238
select * from <your table>;
---------------------

Topics: Big Data Hadoop