atlas stand-alone installation

Posted by frao_0 on Wed, 23 Feb 2022 05:20:54 +0100

1, Virtual machine preparation

  1. Update the virtual machine, command: yum -y update
  2. Modify the hostname, command: hostnamectl set-hostname atlas
  3. Disable the firewall, commands: systemctl stop firewalld and systemctl disable firewalld
  4. Reboot, command: reboot

2, Install jdk

  1. Uninstall openjdk, command:
rpm -e --nodeps java-1.7.0-openjdk
rpm -e --nodeps java-1.7.0-openjdk-headless
rpm -e --nodeps java-1.8.0-openjdk
rpm -e --nodeps java-1.8.0-openjdk-headless
  2. Unzip jdk, command:
tar -xzvf jdk-8u161-linux-x64.tar.gz -C /home/atlas/
mv /home/atlas/jdk1.8.0_161/ /home/atlas/jdk1.8
  3. Configure environment variables, command:
vim /etc/profile.d/my_env.sh

export JAVA_HOME=/home/atlas/jdk1.8
export PATH=$PATH:$JAVA_HOME/bin

source /etc/profile
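
A quick check that the new JDK is the one picked up from PATH (paths as configured above):

java -version
# expected to report java version "1.8.0_161"
which java
# expected: /home/atlas/jdk1.8/bin/java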

3, Configure passwordless login

  1. Generate a key pair, command: ssh-keygen -t rsa
  2. Enter the .ssh directory, command: cd /root/.ssh
  3. Configure passwordless login, command:
cat id_rsa.pub >> authorized_keys
chmod 600 ./authorized_keys
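
Passwordless login can then be verified with a single round trip, which should complete without a password prompt:

ssh localhost hostname
# the first connection may ask to confirm the host key; it must not ask for a password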

4, Configure hadoop 2.7.2

  1. Unzip the installation package, command:
tar -xzvf hadoop-2.7.2.tar.gz -C /home/atlas/
mv /home/atlas/hadoop-2.7.2/ /home/atlas/hadoop
  2. Configure environment variables, command:
vim /etc/profile.d/my_env.sh

export HADOOP_HOME=/home/atlas/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin

source /etc/profile
  3. Modify core-site.xml, command:
vim /home/atlas/hadoop/etc/hadoop/core-site.xml


<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/atlas/hadoop/tmp</value>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>atguigu</value>
    </property>
    <property>
        <name>hadoop.proxyuser.atguigu.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.atguigu.groups</name>
        <value>*</value>
    </property>
</configuration>
  4. Modify hdfs-site.xml, command:
vim /home/atlas/hadoop/etc/hadoop/hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/atlas/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/atlas/hadoop/tmp/dfs/data</value>
    </property>
</configuration>
  5. Modify yarn-site.xml, command:
vim /home/atlas/hadoop/etc/hadoop/yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
  6. Modify mapred-site.xml, command:
cp /home/atlas/hadoop/etc/hadoop/mapred-site.xml.template /home/atlas/hadoop/etc/hadoop/mapred-site.xml

vim /home/atlas/hadoop/etc/hadoop/mapred-site.xml

<configuration>
    <!-- run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
  7. Start hadoop, command (a quick sanity check follows the commands):
hdfs namenode -format
start-dfs.sh
start-yarn.sh
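
If the daemons came up cleanly, jps should list all of them:

jps
# expected to include: NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager
hdfs dfs -ls /
# should return without errors once the NameNode is out of safe mode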

5, Install mysql

  1. Remove the conflicting MariaDB packages from the system, command:
rpm -qa|grep mariadb
rpm -e --nodeps mariadb-libs
  2. Unpack the RPM bundle, command: tar -xvf mysql-5.7.28-1.el7.x86_64.rpm-bundle.tar
  3. Install mysql, command:
rpm -ivh mysql-community-common-5.7.28-1.el7.x86_64.rpm
rpm -ivh mysql-community-libs-5.7.28-1.el7.x86_64.rpm
rpm -ivh mysql-community-libs-compat-5.7.28-1.el7.x86_64.rpm
rpm -ivh mysql-community-client-5.7.28-1.el7.x86_64.rpm
rpm -ivh mysql-community-server-5.7.28-1.el7.x86_64.rpm
  4. Initialize the database, command: mysqld --initialize --user=mysql
  5. View the temporary password generated for the root user, command: cat /var/log/mysqld.log
  6. Start the mysql service, command: systemctl start mysqld
  7. Log in to the MySQL database, command: mysql -uroot -p, then enter the temporary password from the previous step
  8. Change the password: set password = password("new password");
  9. Modify the root user in the user table of the mysql database to allow connections from any IP. Command 1: update mysql.user set host='%' where user='root';, Command 2: flush privileges;
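
The host change can be confirmed before moving on; a minimal check:

mysql -uroot -p -e "select user, host from mysql.user where user='root';"
# the host column should now show %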

6, Install hive

  1. Unzip the installation package, command:
tar -xzvf apache-hive-3.1.2-bin.tar.gz -C /home/atlas/
mv /home/atlas/apache-hive-3.1.2-bin/ /home/atlas/hive
  2. Configure environment variables, command:
vim /etc/profile.d/my_env.sh

export HIVE_HOME=/home/atlas/hive
export PATH=$PATH:$HIVE_HOME/bin

source /etc/profile
  3. Copy the MySQL JDBC driver, command: cp /home/atlas/rar/3_mysql/mysql-connector-java-5.1.37.jar /home/atlas/hive/lib/
  4. Edit hive-site.xml, command:
vim /home/atlas/hive/conf/hive-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <!-- JDBC connection URL -->
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://localhost:3306/metastore?useSSL=false&amp;useUnicode=true&amp;characterEncoding=UTF-8</value>
    </property>

    <!-- JDBC connection driver -->
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>

    <!-- JDBC connection username -->
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
    </property>

    <!-- JDBC connection password -->
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>970725</value>
    </property>

    <!-- disable metastore schema version verification -->
    <property>
        <name>hive.metastore.schema.verification</name>
        <value>false</value>
    </property>

    <!-- disable metastore event DB notification API authorization -->
    <property>
        <name>hive.metastore.event.db.notification.api.auth</name>
        <value>false</value>
    </property>
</configuration>
  5. Modify hive-env.sh, command:
mv /home/atlas/hive/conf/hive-env.sh.template /home/atlas/hive/conf/hive-env.sh
vim /home/atlas/hive/conf/hive-env.sh
# uncomment the following line
export HADOOP_HEAPSIZE=1024
  6. Modify hive-log4j2.properties, command:
mv /home/atlas/hive/conf/hive-log4j2.properties.template /home/atlas/hive/conf/hive-log4j2.properties
vim /home/atlas/hive/conf/hive-log4j2.properties
# set the log directory
property.hive.log.dir=/home/atlas/hive/logs
  7. Log in to mysql, command: mysql -uroot -p
  8. Create the Hive metastore database and exit, command: create database metastore;
  9. Initialize the Hive metastore schema, command: schematool -initSchema -dbType mysql -verbose
  10. Configure the metastore database encoding, command:
mysql -uroot -p
use metastore;
alter table COLUMNS_V2 modify column COMMENT varchar(256) character set utf8;
alter table TABLE_PARAMS modify column PARAM_VALUE varchar(4000) character set utf8;
alter table PARTITION_PARAMS modify column PARAM_VALUE varchar(4000) character set utf8;
alter table PARTITION_KEYS modify column PKEY_COMMENT varchar(4000) character set utf8;
alter table INDEX_PARAMS modify column PARAM_VALUE varchar(4000) character set utf8;
alter table TBLS modify column view_expanded_text mediumtext character set utf8;
alter table TBLS modify column view_original_text mediumtext character set utf8;
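
With the schema initialized, a short smoke test confirms Hive can reach the MySQL metastore (HDFS and YARN from section 4 must be running):

hive -e "show databases;"
# expected to print at least the default database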

7, Install zookeeper

  1. Unzip the installation package, command:
tar -xzvf apache-zookeeper-3.5.7-bin.tar.gz -C /home/atlas/
mv /home/atlas/apache-zookeeper-3.5.7-bin/ /home/atlas/zookeeper
  2. Create the zkData folder, command: mkdir -p /home/atlas/zookeeper/zkData
  3. Create the myid file under zkData and write this server's id into it, command: echo 1 > /home/atlas/zookeeper/zkData/myid
  4. Rename zoo_sample.cfg to zoo.cfg, command: mv /home/atlas/zookeeper/conf/zoo_sample.cfg /home/atlas/zookeeper/conf/zoo.cfg
  5. Modify the zoo.cfg file:
# modify
dataDir=/home/atlas/zookeeper/zkData
# append at the end of the file (cluster mode only)
#######################cluster########################## 
server.1=hadoop01:2888:3888
server.2=hadoop02:2888:3888
server.3=hadoop03:2888:3888
  6. Start zookeeper, command: /home/atlas/zookeeper/bin/zkServer.sh start
  7. Stop, command: /home/atlas/zookeeper/bin/zkServer.sh stop
  8. View status, command: /home/atlas/zookeeper/bin/zkServer.sh status
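
A running instance also answers the srvr four-letter command, which ZooKeeper 3.5 whitelists by default; a quick probe (requires nc):

echo srvr | nc localhost 2181
# expected to print the version and Mode: standalone (leader/follower in a cluster)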

8, Install kafka

  1. Unzip the installation package, command:
tar -xzvf kafka_2.11-2.4.1.tgz -C /home/atlas/
mv /home/atlas/kafka_2.11-2.4.1/ /home/atlas/kafka
  2. Create the logs folder, command: mkdir -p /home/atlas/kafka/logs
  3. Modify the server.properties file, command: vim /home/atlas/kafka/config/server.properties
# enable topic deletion; add after broker.id=0
delete.topic.enable=true
# modify the storage path of the kafka data logs
log.dirs=/home/atlas/kafka/data
# modify the ZooKeeper connection address
zookeeper.connect=localhost:2181/kafka
  4. Configure the kafka environment variables, command: vim /etc/profile.d/my_env.sh
export KAFKA_HOME=/home/atlas/kafka
export PATH=$PATH:$KAFKA_HOME/bin
  5. Start, command: /home/atlas/kafka/bin/kafka-server-start.sh -daemon /home/atlas/kafka/config/server.properties
  6. Stop, command: /home/atlas/kafka/bin/kafka-server-stop.sh
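
A short round trip through the broker verifies both Kafka and its ZooKeeper chroot; the topic name below is arbitrary:

/home/atlas/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic smoke-test --partitions 1 --replication-factor 1
/home/atlas/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
# expected to list smoke-test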

9, Install hbase

  1. Unzip the installation package, command:
tar -xzvf hbase-2.0.5-bin.tar.gz -C /home/atlas/
mv /home/atlas/hbase-2.0.5/ /home/atlas/hbase
  2. Configure environment variables, command:
vim /etc/profile.d/my_env.sh

export HBASE_HOME=/home/atlas/hbase
export PATH=$PATH:$HBASE_HOME/bin
  3. Modify the hbase-env.sh file, command: vim /home/atlas/hbase/conf/hbase-env.sh
# modify (originally true)
export HBASE_MANAGES_ZK=false
  4. Modify the hbase-site.xml file, command: vim /home/atlas/hbase/conf/hbase-site.xml
<property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/HBase</value>
</property>
<property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
</property>
<property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
</property>
  5. Start, command: /home/atlas/hbase/bin/start-hbase.sh
  6. Stop, command: /home/atlas/hbase/bin/stop-hbase.sh
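
Whether HBase registered against the external ZooKeeper can be checked from its shell; a minimal non-interactive probe:

echo "status" | /home/atlas/hbase/bin/hbase shell -n
# expected to report 1 active master and 1 server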

10, Install solr

  1. Unzip the installation package, command:
tar -xzvf /home/atlas/rar/solr-7.7.3.tgz -C /home/atlas/
mv /home/atlas/solr-7.7.3/ /home/atlas/solr
  2. Create the solr user, command: useradd solr
  3. Set its password, command: echo solr | passwd --stdin solr
  4. Change the owner of the solr directory to the solr user, command: chown -R solr:solr /home/atlas/solr
  5. Modify the /home/atlas/solr/bin/solr.in.sh file, command:
vim /home/atlas/solr/bin/solr.in.sh
ZK_HOST="localhost:2181"
  6. Start, command: sudo -i -u solr /home/atlas/solr/bin/solr start
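
Solr's admin API confirms the node is up and running in cloud mode; a quick check (requires curl):

curl -s "http://localhost:8983/solr/admin/info/system?wt=json"
# the response should include "solr-spec-version":"7.7.3" and "mode":"solrcloud"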

11, Install atlas

1. Upload and unzip the compressed package

  1. Unzip the apache-atlas-2.1.0-server.tar.gz file and rename the directory to atlas, command:
tar -xzvf /home/atlas/rar/9_atlas/apache-atlas-2.1.0-server.tar.gz -C /home/atlas/
mv /home/atlas/apache-atlas-2.1.0/ /home/atlas/atlas

2. Atlas integration Hbase

  1. Modify the atlas/conf/atlas-application.properties configuration file, command: vim /home/atlas/atlas/conf/atlas-application.properties
atlas.graph.storage.hostname=localhost:2181
  2. Modify the atlas/conf/atlas-env.sh configuration file, command: vim /home/atlas/atlas/conf/atlas-env.sh
# append at the end of the file
export HBASE_CONF_DIR=/home/atlas/hbase/conf

3. Atlas integration Solr

  1. Modify the atlas/conf/atlas-application.properties configuration file, command: vim /home/atlas/atlas/conf/atlas-application.properties
# comment out the Solr cloud mode properties and enable the http mode ones
#Solr cloud mode properties
#atlas.graph.index.search.solr.mode=cloud
#atlas.graph.index.search.solr.zookeeper-url=
#atlas.graph.index.search.solr.zookeeper-connect-timeout=60000
#atlas.graph.index.search.solr.zookeeper-session-timeout=60000
#atlas.graph.index.search.solr.wait-searcher=true

#Solr http mode properties
atlas.graph.index.search.solr.mode=http
atlas.graph.index.search.solr.http-urls=http://localhost:8983/solr
  2. Copy the Solr config set, command: cp -rf /home/atlas/atlas/conf/solr /home/atlas/solr/atlas_conf
  3. Create the index collection, command:
sudo -i -u solr /home/atlas/solr/bin/solr create -c vertex_index -d /home/atlas/solr/atlas_conf
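
Note: the Apache Atlas installation guide creates two more collections from the same config set, edge_index and fulltext_index; if index initialization fails at startup, they can be created the same way:

sudo -i -u solr /home/atlas/solr/bin/solr create -c edge_index -d /home/atlas/solr/atlas_conf
sudo -i -u solr /home/atlas/solr/bin/solr create -c fulltext_index -d /home/atlas/solr/atlas_conf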

4. Atlas integration Kafka

  1. Modify the atlas/conf/atlas-application.properties configuration file, command: vim /home/atlas/atlas/conf/atlas-application.properties
atlas.notification.embedded=false 
atlas.kafka.data=/home/atlas/kafka/data 
atlas.kafka.zookeeper.connect=localhost:2181/kafka 
atlas.kafka.bootstrap.servers=localhost:9092

5. Atlas Server configuration

  1. Modify the atlas/conf/atlas-application.properties configuration file, command: vim /home/atlas/atlas/conf/atlas-application.properties
atlas.server.run.setup.on.start=false
  2. Modify the atlas-log4j.xml file, command: vim /home/atlas/atlas/conf/atlas-log4j.xml
# uncomment the following block
<appender name="perf_appender" class="org.apache.log4j.DailyRollingFileAppender">
    <param name="file" value="${atlas.log.dir}/atlas_perf.log" />
    <param name="datePattern" value="'.'yyyy-MM-dd" />
    <param name="append" value="true" />
    <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern" value="%d|%t|%m%n" />
    </layout>
</appender>

<logger name="org.apache.atlas.perf" additivity="false">
    <level value="debug" />
    <appender-ref ref="perf_appender" />
</logger>

6. Atlas integration Hive

  1. Modify the atlas/conf/atlas-application.properties configuration file, command: vim /home/atlas/atlas/conf/atlas-application.properties
# append at the end of the file
######### Hive Hook Configs ####### 
atlas.hook.hive.synchronous=false 
atlas.hook.hive.numRetries=3 
atlas.hook.hive.queueSize=10000 
atlas.cluster.name=primary
  2. Modify the hive-site.xml file, command: vim /home/atlas/hive/conf/hive-site.xml
# append inside the <configuration> tag
<property> 
      <name>hive.exec.post.hooks</name> 
      <value>org.apache.atlas.hive.hook.HiveHook</value> 
</property> 

7. Install Hive Hook

  1. Unzip the Hive Hook, command: tar -zxvf apache-atlas-2.1.0-hive-hook.tar.gz
  2. Copy the hook dependencies into the atlas installation path, command: cp -r apache-atlas-hive-hook-2.1.0/* /home/atlas/atlas/
  3. Modify the hive/conf/hive-env.sh configuration file, command: vim /home/atlas/hive/conf/hive-env.sh
export HIVE_AUX_JARS_PATH=/home/atlas/atlas/hook/hive
  4. Copy the Atlas configuration file /home/atlas/atlas/conf/atlas-application.properties into the /home/atlas/hive/conf directory, command: cp /home/atlas/atlas/conf/atlas-application.properties /home/atlas/hive/conf/
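
The wiring can be spot-checked before anything is started (paths as above):

ls /home/atlas/atlas/hook/hive
# expected to contain the hive bridge shim and plugin classloader jars plus an atlas-hive-plugin-impl directory
grep HIVE_AUX_JARS_PATH /home/atlas/hive/conf/hive-env.sh
# should print the export added above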

12, Atlas launch

1. Start the prerequisite services

  1. Start Hadoop, command: start-all.sh
  2. Start Zookeeper, command: /home/atlas/zookeeper/bin/zkServer.sh start
  3. Start Kafka, command: /home/atlas/kafka/bin/kafka-server-start.sh -daemon /home/atlas/kafka/config/server.properties
  4. Start Hbase, command: /home/atlas/hbase/bin/start-hbase.sh
  5. Start Solr, command: sudo -i -u solr /home/atlas/solr/bin/solr start

2. Start Atlas service

  1. Enter the bin directory of atlas, command: cd /home/atlas/atlas/bin
  2. Execute the startup script, command: ./atlas_start.py, then wait about two minutes
  3. Access port 21000 in a browser (http://atlas:21000)
  4. Log in with the default account, user name: admin, password: admin
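
Once the UI answers, the REST API gives a scriptable health check; a minimal example with the default credentials:

curl -s -u admin:admin http://localhost:21000/api/atlas/admin/version
# expected to return a JSON body containing "Version":"2.1.0"

As an end-to-end test of the Hive hook, create a table in Hive (for example create table test_atlas(id int);): it should appear shortly afterwards as a hive_table entity in the Atlas UI search.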

Topics: Linux Hadoop ssh