
Setting Up Hive in Remote Mode


I. Test Environment

1. Software versions: apache-hive-2.3.0-bin.tar.gz, mysql-community-server-5.7.19

2. MySQL JDBC driver: mysql-connector-java-5.1.44.tar.gz

3. MySQL is already installed on hadoop5

4. Host plan

hadoop3    Remote: client
hadoop5    Remote: server; MySQL

II. Basic Configuration

1. Unpack Hive and move it into place

[root@hadoop5 ~]# tar -zxf apache-hive-2.3.0-bin.tar.gz
[root@hadoop5 ~]# cp -r apache-hive-2.3.0-bin /usr/local/hive

2. Update the environment variables

[root@hadoop5 ~]# vim /etc/profile
export HIVE_HOME=/usr/local/hive
export PATH=$HIVE_HOME/bin:$PATH
[root@hadoop5 ~]# source /etc/profile
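A quick sanity check (expected output assumes the paths above) confirms the variables took effect:

[root@hadoop5 ~]# echo $HIVE_HOME
/usr/local/hive
[root@hadoop5 ~]# which hive
/usr/local/hive/bin/hive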

3. Create the initial config files from the templates

[root@hadoop5 ~]# cd /usr/local/hive/conf/
[root@hadoop5 conf]# cp hive-env.sh.template hive-env.sh
[root@hadoop5 conf]# cp hive-default.xml.template hive-site.xml
[root@hadoop5 conf]# cp hive-log4j2.properties.template hive-log4j2.properties
[root@hadoop5 conf]# cp hive-exec-log4j2.properties.template hive-exec-log4j2.properties

4. Edit the hive-env.sh file

[root@hadoop5 conf]# vim hive-env.sh    # append at the end of the file
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export HIVE_HOME=/usr/local/hive
export HIVE_CONF_DIR=/usr/local/hive/conf

5. Copy the MySQL JDBC driver

[root@hadoop5 ~]# tar -zxf mysql-connector-java-5.1.44.tar.gz
[root@hadoop5 ~]# cp mysql-connector-java-5.1.44/mysql-connector-java-5.1.44-bin.jar /usr/local/hive/lib/

6. Create the following directories in HDFS and open up their permissions; Hive stores its files there

hdfs dfs -mkdir -p /user/hive/warehouse
hdfs dfs -mkdir -p /user/hive/tmp
hdfs dfs -mkdir -p /user/hive/log
hdfs dfs -chmod -R 777 /user/hive/warehouse
hdfs dfs -chmod -R 777 /user/hive/tmp
hdfs dfs -chmod -R 777 /user/hive/log
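A quick listing verifies the directories and their permissions; each of the three entries should show drwxrwxrwx:

hdfs dfs -ls /user/hive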

7. Create the user and database in MySQL

mysql> create database metastore;
Query OK, 1 row affected (0.03 sec)
mysql> set global validate_password_policy=0;
Query OK, 0 rows affected (0.26 sec)
mysql> grant all on metastore.* to hive@'%' identified by 'hive123456';
Query OK, 0 rows affected, 1 warning (0.03 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
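Before going further it is worth confirming the grant works from a remote host; a minimal check, assuming the mysql client is installed on hadoop3:

[root@hadoop3 ~]# mysql -h hadoop5 -uhive -phive123456 -e "show databases;"

The output should include the metastore database.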

8. Copy the Hive directory to hadoop3 with scp

[root@hadoop5 ~]# scp -r /usr/local/hive root@hadoop3:/usr/local/

III. Edit the Configuration Files

1. Server-side hive-site.xml configuration (set the following properties inside the configuration element)

<property>
    <name>hive.exec.scratchdir</name>
    <value>/user/hive/tmp</value>
</property>
<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
</property>
<property>
    <name>hive.querylog.location</name>
    <value>/user/hive/log</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://hadoop5:3306/metastore?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8&amp;useSSL=false</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive123456</value>
</property>
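Note: because hive-site.xml was copied from hive-default.xml.template, it still contains ${system:java.io.tmpdir} and ${system:user.name} placeholders, which Hive 2.x typically fails to resolve at runtime ("Relative path in absolute URI" errors). A common workaround is to substitute a real local directory; the /tmp/hive path here is an assumption, adjust as needed:

[root@hadoop5 conf]# mkdir -p /tmp/hive    # local scratch dir to stand in for the placeholder
[root@hadoop5 conf]# sed -i 's#${system:java.io.tmpdir}#/tmp/hive#g; s#${system:user.name}#root#g' hive-site.xml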

2. Client-side hive-site.xml configuration

<property>
    <name>hive.metastore.uris</name>
    <value>thrift://hadoop5:9083</value>
</property>
<property>
    <name>hive.exec.scratchdir</name>
    <value>/user/hive/tmp</value>
</property>
<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
</property>
<property>
    <name>hive.querylog.location</name>
    <value>/user/hive/log</value>
</property>
<property>
    <name>hive.metastore.local</name>
    <value>false</value>
</property>

IV. Start Hive (two methods)

First, initialize the metastore schema:

schematool -dbType mysql -initSchema
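If initialization succeeds, schematool can also report the schema version it wrote, which should match Hive 2.3.0:

[root@hadoop5 ~]# schematool -dbType mysql -info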

1. Direct start

service:

[root@hadoop5 ~]# hive --service metastore
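This runs in the foreground and occupies the terminal; to keep the metastore running in the background and confirm it is listening on its default port 9083, a sketch along the same lines as the hiveserver2 example below:

[root@hadoop5 ~]# nohup hive --service metastore &
[root@hadoop5 ~]# netstat -nptl | grep 9083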

client:

[root@hadoop3 ~]# hive
hive> show databases;
OK
default
Time taken: 1.599 seconds, Fetched: 1 row(s)
hive> quit;
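To confirm that the client really writes its metadata to the remote MySQL metastore, one can create a throwaway table (smoke_test is just an example name) and then look it up in the metastore's TBLS table on hadoop5; output illustrative:

hive> create table smoke_test(id int);
OK

mysql> select TBL_NAME from metastore.TBLS;
+------------+
| TBL_NAME   |
+------------+
| smoke_test |
+------------+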

2. Via beeline

This requires first adding the following to Hadoop's core-site.xml:

<property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
</property>
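After editing core-site.xml, the proxy-user settings must be reloaded; either restart Hadoop or refresh them at runtime (on the NameNode and ResourceManager hosts respectively):

hdfs dfsadmin -refreshSuperUserGroupsConfiguration
yarn rmadmin -refreshSuperUserGroupsConfiguration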

service:

[root@hadoop5 ~]# nohup hiveserver2 &
[root@hadoop5 ~]# netstat -nptl | grep 10000
tcp        0      0 0.0.0.0:10000           0.0.0.0:*               LISTEN      3464/java

client:

[root@hadoop3 ~]# beeline
Beeline version 1.2.1.spark2 by Apache Hive
beeline> !connect jdbc:hive2://hadoop5:10000 hive hive123456
Connecting to jdbc:hive2://hadoop5:10000
17/09/21 09:47:31 INFO jdbc.Utils: Supplied authorities: hadoop5:10000
17/09/21 09:47:31 INFO jdbc.Utils: Resolved authority: hadoop5:10000
17/09/21 09:47:31 INFO jdbc.HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://hadoop5:10000
Connected to: Apache Hive (version 2.3.0)
Driver: Hive JDBC (version 1.2.1.spark2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://hadoop5:10000> show databases;
+----------------+--+
| database_name  |
+----------------+--+
| default        |
+----------------+--+
1 row selected (2.258 seconds)
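beeline can also run non-interactively, which is convenient for scripts; a minimal sketch reusing the same connection details:

[root@hadoop3 ~]# beeline -u jdbc:hive2://hadoop5:10000 -n hive -p hive123456 -e "show databases;"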

