HBase stand-alone and pseudo-distributed configuration, and solutions to common problems

Posted by Dynamis on Tue, 16 Nov 2021 17:13:12 +0100


Download HBase

The HBase version used by the author is 2.3.7, which is compatible with hadoop-2.10.x and hadoop-3.x. The mirror download site is linked below:

Click here to jump to the image download website of Hbase-2.3.7

Download the **-bin** compressed package (hbase-2.3.7-bin.tar.gz).
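
If you prefer the command line, the package can also be fetched directly. The URL below assumes the Apache archive layout for HBase 2.3.7 and may differ from the mirror the download page redirects you to:

    # Download the HBase 2.3.7 binary package (URL is an assumption; use the mirror link above if it differs)
    wget https://archive.apache.org/dist/hbase/2.3.7/hbase-2.3.7-bin.tar.gz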

Configuring and testing HBase

  1. First, use the tar command to extract HBase to any directory (the home directory is recommended because it is easy to find; putting it under **/usr/local** causes permission problems, as described below). When extracting to a directory other than the current one, remember to add the -C option, for example:

    tar -zxf hbase-2.3.7-bin.tar.gz -C ~
    
  2. After extracting, configure the environment variables in either /etc/profile or ~/.bashrc. For example:

    # In /etc/profile or ~/.bashrc
    export HBASE_HOME=~/hbase-2.3.7
    export PATH=$PATH:${HBASE_HOME}/bin
    
  3. Use the source command to make the configuration you just added take effect.

    source /etc/profile
    source ~/.bashrc
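    # (Optional) confirm the variables are visible in the current shell
    echo $HBASE_HOME
    which hbase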
    
  4. At this point, if you only want stand-alone HBase, the configuration is complete. You can check the HBase version with the following commands:

    cd ~/hbase-2.3.7
    ./bin/hbase version
    

    If the first two lines of the output contain errors (or warnings) like the following:

    /usr/local/hadoop/libexec/hadoop-functions.sh: line 2366: HADOOP_ORG.APACHE.HADOOP.HBASE.UTIL.GETJAVAPROPERTY_USER: invalid variable name
    /usr/local/hadoop/libexec/hadoop-functions.sh: line 2461: HADOOP_ORG.APACHE.HADOOP.HBASE.UTIL.GETJAVAPROPERTY_OPTS: invalid variable name

    !!!! Don't panic!!!!

    Solution:

    Open the hbase-env.sh file under ~/hbase-2.3.7/conf (adjust the path to your own installation) and edit it:

    Find the following line near the end of the file:

    export HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP="true"

    Uncomment it.
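
    If you prefer a one-liner, the commented line can also be enabled with sed (a sketch, assuming the line is commented out as "# export ..." and that HBase lives in ~/hbase-2.3.7):

    # Uncomment the HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP line in hbase-env.sh
    sed -i 's/^# *export HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP/export HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP/' ~/hbase-2.3.7/conf/hbase-env.sh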

  5. At this point, you can execute the following code to start HMaster:

    cd ~/hbase-2.3.7
    ./bin/start-hbase.sh
    # Then use the jps command to check whether HMaster started successfully
    jps
    # If HMaster is running, stand-alone HBase is ready and you can enter the shell
    ./bin/hbase shell
    # Inside the hbase shell, run the following statements to verify that it works
    >create 'tablename','info'
    >list
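    # (Optional) a few more sanity checks; 'tablename', 'row1' and the values are just example names
    >put 'tablename','row1','info:name','test'
    >scan 'tablename'
    >disable 'tablename'
    >drop 'tablename'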
    
  6. The following is the configuration for pseudo-distributed HBase:

    # Enter the conf folder of your HBase installation
    cd ~/hbase-2.3.7/conf
    # Edit the hbase-env.sh file (sudo is not needed when HBase lives in your home directory)
    vim hbase-env.sh
    # Add the following three lines near the top; set JAVA_HOME and the Hadoop path according to your own installation
    export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
    export HBASE_CLASSPATH=~/hadoop-2.10.1/etc/hadoop
    export HBASE_MANAGES_ZK=true
    # It is also recommended to uncomment the following line at the end of the file to avoid the 'invalid variable name' error
    export HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP="true"
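    # HBASE_MANAGES_ZK=true lets HBase start and stop its own built-in ZooKeeper
    # (Optional) confirm the three lines were added correctly
    grep -E '^export (JAVA_HOME|HBASE_CLASSPATH|HBASE_MANAGES_ZK)' hbase-env.sh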
    
  7. Press Esc, then type :wq to save and exit.

  8. Then open the hbase-site.xml file for editing:

    <!-- Add the following properties inside the <configuration> element -->
    <property>
    	<name>hbase.rootdir</name>
    	<!-- Pseudo-distributed HBase stores its data in the pseudo-distributed HDFS cluster;
    	     the host and port must match fs.defaultFS in Hadoop's core-site.xml (localhost:9000 here) -->
    	<value>hdfs://localhost:9000/hbase</value>
    </property>
    <property>
    	<name>hbase.cluster.distributed</name>
    	<value>true</value>
    </property>
    
  9. After editing hbase-site.xml, save and exit. The HDFS address in hbase.rootdir must match your Hadoop configuration; a quick check is shown below.
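
    The following assumes Hadoop is installed at ~/hadoop-2.10.1 as above; adjust the path if yours differs:

    # Show the fs.defaultFS value that hbase.rootdir must point at
    grep -A 1 'fs.defaultFS' ~/hadoop-2.10.1/etc/hadoop/core-site.xml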

  10. Then switch to the corresponding hadoop folder and start the hdfs cluster:

    cd ~/hadoop-2.10.1
    ./sbin/start-dfs.sh
    # Use jps to check whether the NameNode and DataNode started successfully
    jps
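    # (Optional) confirm HDFS is reachable before starting HBase
    ./bin/hdfs dfs -ls /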
    
  11. After the HDFS pseudo-distributed cluster has started successfully, switch to the HBase folder:

    cd ~/hbase-2.3.7
    ./bin/start-hbase.sh
    ./bin/hbase shell
    # Inside the hbase shell, test that basic operations work
    >create 'tablename','info'
    >list
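    # Type exit to leave the hbase shell, then confirm HBase created its root directory in HDFS
    >exit
    ~/hadoop-2.10.1/bin/hdfs dfs -ls /hbase
    # The HMaster web UI should also be reachable at http://localhost:16010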
    

Summary of common errors

  1. Invalid variable name error: HADOOP_ORG.APACHE.HADOOP.HBASE.UTIL.GETJAVAPROPERTY_USER: invalid variable name

    Solution:

    Open the hbase-env.sh file under ~/hbase-2.3.7/conf (adjust the path to your own installation) and edit it:

    Find the following line near the end of the file:

    export HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP="true"

    Uncomment it.

  2. port 22: Connection refused error:

    Solution: install openssh-server and set up passwordless SSH login:

    sudo apt-get install openssh-server
    # Run the following command, type yes when prompted, then enter your password to log in
    ssh localhost
    # Generate the key pair (press Enter to accept all defaults); id_rsa (private key) and id_rsa.pub (public key) will be created in the ~/.ssh folder
    ssh-keygen -t rsa
    # Append the public key to the authorized_keys file so the local machine can log in to itself without a password
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys 
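    # If ssh localhost still asks for a password, tighten the permissions on ~/.ssh
    chmod 700 ~/.ssh
    chmod 600 ~/.ssh/authorized_keys
    # Then verify that passwordless login works
    ssh localhost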
    
  3. NoNode /hbase/master error:

    Solution:

    # Run jps to check whether the HDFS pseudo-distributed cluster and the HMaster node are running
    jps
    # Start the HDFS pseudo-distributed cluster first
    cd ~/hadoop-2.10.1
    ./sbin/start-dfs.sh
    jps # Check whether HDFS started successfully
    # Then start HMaster
    cd ~/hbase-2.3.7
    ./bin/start-hbase.sh
    jps # Check whether HMaster started successfully
    # Finally, enter the hbase shell and confirm that it works
    ./bin/hbase shell
    >create 'tablename','info'
    >list
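    # If HMaster still exits shortly after starting, check the master log for the real cause
    # (the log file name contains your user name and host name)
    tail -n 50 ~/hbase-2.3.7/logs/hbase-*-master-*.log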
    

If you have other questions, feel free to ask and I will do my best to help.

Topics: Java Hadoop HBase