Hive installation, deployment and management

Posted by Homer30 on Sun, 20 Feb 2022 04:37:39 +0100

Experimental environment

Linux Ubuntu 16.04
Prerequisites:
1) Java runtime environment deployed
2) Hadoop 3.0.0 single-node deployment completed
3) MySQL database installed
These prerequisites have already been set up for you.

Experimental content

With the prerequisites above in place, complete the installation, deployment, and management of Hive.

Experimental steps

1. Click "command line terminal" to open a new window
2. Unzip the installation package

We downloaded Hive's installation package for you in advance; run the following command to extract it.

sudo tar -zxvf /data/hadoop/apache-hive-2.3.2-bin.tar.gz -C /opt/

After extraction, the apache-hive-2.3.2-bin folder is generated in the /opt directory

3. Change folder name and user

Change folder name

sudo mv /opt/apache-hive-2.3.2-bin/ /opt/hive

Change the folder's owner and group

sudo chown -R dolphin:dolphin /opt/hive/

4. Set HIVE_HOME environment variable

Set /opt/hive, Hive's working directory, as the HIVE_HOME environment variable

sudo vim ~/.bashrc

Add the following at the bottom of the file in the editor that opens:

export HIVE_HOME=/opt/hive
export PATH=$PATH:$HIVE_HOME/bin

Run the following command to make the environment variables take effect

source ~/.bashrc
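
The two export lines can also be appended non-interactively instead of editing the file in vim. The sketch below is idempotent (re-running it will not duplicate the lines) and assumes Hive was unpacked to /opt/hive as in the earlier steps.

```shell
# Append the Hive environment variables to ~/.bashrc, but only if
# HIVE_HOME is not already configured there (safe to re-run).
BASHRC="$HOME/.bashrc"
touch "$BASHRC"   # make sure the file exists
if ! grep -q 'HIVE_HOME=/opt/hive' "$BASHRC"; then
  {
    echo 'export HIVE_HOME=/opt/hive'
    echo 'export PATH=$PATH:$HIVE_HOME/bin'
  } >> "$BASHRC"
fi
```

After running source ~/.bashrc, echo $HIVE_HOME should print /opt/hive.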

5. Import the MySQL JDBC jar package into the hive/lib directory

Copy the jar package to the /opt/hive/lib directory

sudo cp /data/hadoop/mysql-connector-java-5.1.7-bin.jar /opt/hive/lib/

Change the user and user group to which the jar package belongs

sudo chown dolphin:dolphin /opt/hive/lib/mysql-connector-java-5.1.7-bin.jar

6. Modify hive configuration file

Enter the /opt/hive/conf directory

cd /opt/hive/conf

Rename the template file hive-default.xml.template to hive-default.xml

sudo mv hive-default.xml.template hive-default.xml

Create the hive-site.xml file

sudo touch hive-site.xml

After execution, hive-site.xml is generated under /opt/hive/conf

Edit the hive-site.xml file

sudo vim hive-site.xml

Add the following to the pop-up editor:
(tip: you can copy the contents of hive-site.txt on the desktop to hive-site.xml file)

<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive_metadata?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
  </property>
</configuration>
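
If you prefer not to type the XML in vim, the file can be generated from a heredoc and then checked for well-formedness. This is a sketch: CONF_DIR defaults to a scratch path (/tmp/hive-conf) so it can be rehearsed safely; on the lab machine you would point it at /opt/hive/conf (with sudo as needed).

```shell
# Generate hive-site.xml from a heredoc and verify it parses as XML.
# CONF_DIR is a placeholder; use /opt/hive/conf on the lab machine.
CONF_DIR="${CONF_DIR:-/tmp/hive-conf}"
mkdir -p "$CONF_DIR"
cat > "$CONF_DIR/hive-site.xml" <<'EOF'
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive_metadata?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
  </property>
</configuration>
EOF
# Well-formedness check using Python's standard-library XML parser
python3 -c "import xml.dom.minidom; xml.dom.minidom.parse('$CONF_DIR/hive-site.xml'); print('well-formed')"
```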

At this point, Hive is configured

7. Start MySQL

Hive's metadata needs to be stored in a relational database; here we use MySQL.
MySQL (user root, password 123456) was installed in advance on this experimental platform, so you only need to start the MySQL service

sudo /etc/init.d/mysql start

Successful startup is shown as follows

dolphin@tools:~$ sudo /etc/init.d/mysql start
* Starting MySQL database server mysqld
No directory, logging in with HOME=/
[ OK ]

8. Specify the metadata database type and initialize the schema

schematool -initSchema -dbType mysql

After successful initialization, the output is as follows:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:    jdbc:mysql://localhost:3306/hive_metadata?createDatabaseIfNotExist=true
Metastore Connection Driver :    com.mysql.jdbc.Driver
Metastore connection User:   root
Starting metastore schema initialization to 2.3.0
Initialization script hive-schema-2.3.0.mysql.sql
Initialization script completed
schemaTool completed

9. Start Hadoop

Enter the /opt/hadoop/sbin directory

cd /opt/hadoop/sbin

Execute startup script

./start-all.sh

Verify that Hadoop started successfully

jps

dolphin@tools:/opt/hadoop/sbin$ jps
2258 ResourceManager
2020 SecondaryNameNode
1669 NameNode
1787 DataNode
2731 Jps
2556 NodeManager

If the five Hadoop daemons above are all listed (the sixth entry, Jps, is the listing tool itself), Hadoop has started successfully
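
Rather than counting processes by eye, a small helper can check the jps listing for each expected daemon and name any that are missing. This is an illustrative sketch; the sample fed to it below is the transcript above.

```shell
# check_daemons: report which of the expected Hadoop daemons are absent
# from a jps listing; print a confirmation when all five are present.
check_daemons() {
  jps_out="$1"
  ok=1
  for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
    if ! printf '%s\n' "$jps_out" | grep -qw "$d"; then
      echo "missing: $d"
      ok=0
    fi
  done
  [ "$ok" -eq 1 ] && echo "all Hadoop daemons running"
}

# On the lab machine you would call: check_daemons "$(jps)"
check_daemons "2258 ResourceManager
2020 SecondaryNameNode
1669 NameNode
1787 DataNode
2556 NodeManager"
# prints: all Hadoop daemons running
```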

10. Start hive

hive

After a successful start, the output is as follows

dolphin@tools:/opt/hadoop/sbin$ hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
 
Logging initialized using configuration in jar:file:/opt/hive/lib/hive-common-2.3.3.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive>

11. Check whether Hive can be used

Run the show databases; command at the hive prompt to list the available databases. The output is as follows

hive> show databases;
OK
default
Time taken: 3.06 seconds, Fetched: 1 row(s)
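
A slightly stronger smoke test than show databases; is to create and drop a scratch table at the hive> prompt (the table name smoke_test below is illustrative):

```sql
-- Typed at the hive> prompt; smoke_test is a throwaway name
CREATE TABLE smoke_test (id INT, name STRING);
SHOW TABLES;
DROP TABLE smoke_test;
```

If both the CREATE and the DROP return OK, the MySQL metastore and the HDFS warehouse directory are wired up correctly.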

As shown above, Hive has been installed and deployed successfully. This concludes the experiment.

Topics: Database Big Data Hadoop