1. Since I use MySQL as Hive's metastore database, install MySQL first.
Reference: http://www.cnblogs.com/hunttown/p/5452205.html
Login command: mysql -h<host> -u<username> -p<password>
mysql -u root    # the first login has no password
Change the password
Format: mysqladmin -u<username> -p<old password> password <new password>
mysqladmin -uroot password 123456
Note: root has no password initially, so the -p<old password> part can be omitted. Also note that mysqladmin is a shell command, not something run at the mysql> prompt.
Create the user hadoopuser for Hive
Create-user command: CREATE USER username@"host" IDENTIFIED BY 'password';
mysql> CREATE USER hadoopuser@"192.168.254.151" IDENTIFIED BY '123456';
Grant command: GRANT privileges ON databasename.tablename TO 'username'@'host';
mysql> GRANT ALL PRIVILEGES ON *.* TO hadoopuser@"192.168.254.151" WITH GRANT OPTION;
Creating the user and granting privileges can also be done in one step:
mysql> GRANT ALL PRIVILEGES ON *.* TO hadoopuser@"192.168.254.151" IDENTIFIED BY '123456' WITH GRANT OPTION;
Create the database hive for Hive's data storage:
mysql> create database hive;
2. Extract Hive into /home/hadoopuser/
3. Switch to the root user and add the environment variables:
export HIVE_HOME=/home/hadoopuser/hive
export PATH=$PATH:$HIVE_HOME/bin
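The two export lines above are typically appended to /etc/profile as root and then sourced. A minimal sketch, using a local stand-in file instead of the real /etc/profile:

```shell
# Sketch: persist the Hive variables. On a real node you would append
# these lines to /etc/profile as root; a stand-in file is used here.
PROFILE=./hive-env-profile.sh    # hypothetical stand-in for /etc/profile
cat >> "$PROFILE" <<'EOF'
export HIVE_HOME=/home/hadoopuser/hive
export PATH=$PATH:$HIVE_HOME/bin
EOF
. "$PROFILE"                     # reload, as `source /etc/profile` would
echo "$HIVE_HOME"
```

After sourcing the real /etc/profile, `which hive` should resolve to $HIVE_HOME/bin/hive.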
4. As root, make the scripts in hive/bin executable:
chmod 777 $HIVE_HOME/bin/*
5. Configuration files
Switch to $HIVE_HOME/conf:
cp hive-default.xml.template hive-site.xml
cp hive-log4j.properties.template hive-log4j.properties    # Hive 2.1.0 no longer ships this template
(1) Configure hive-site.xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://192.168.254.156:3306/hive?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hadoopuser</value>
  <description>username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>123456</value>
  <description>password to use against metastore database</description>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>
If the Derby metastore is used instead, configure the JDBC URL as:
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:/opt/hive/metastore_db;create=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
Note 1: create the warehouse directory if it does not exist yet:
hdfs dfs -mkdir /user/hive
hdfs dfs -mkdir -p /user/hive/warehouse
Note 2: the MySQL driver jar must be uploaded to hive/lib (see step 7).
(2) Configure hive-log4j.properties
#log4j.appender.EventCounter=org.apache.hadoop.metrics.jvm.EventCounter
log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
6. Create /tmp and /user/hive/warehouse in HDFS and set their permissions
hadoop fs -mkdir /tmp
hadoop fs -mkdir -p /user/hive/warehouse
hadoop fs -chmod g+w /tmp
hadoop fs -chmod g+w /user/hive/warehouse
Note: the hadoop fs command has been superseded by hdfs dfs; the commands above are equivalent to:
hdfs dfs -mkdir /tmp
hdfs dfs -mkdir -p /user/hive/warehouse
hdfs dfs -chmod g+w /tmp
hdfs dfs -chmod g+w /user/hive/warehouse
7. Manually upload the MySQL JDBC driver to the hive/lib directory.
http://mirror.bit.edu.cn/mysql/Downloads/Connector-J/
mysql-connector-java-5.1.22-bin.jar
8. Initialize the metastore. If the Derby metastore is used, run the initialization below (for the MySQL metastore configured above, the same command takes -dbType mysql):
[hadoopuser@Hadoop-NN-01 ~]$ schematool -initSchema -dbType derby
# On success it prints:
Starting metastore schema initialization to 2.0.0
Initialization script hive-schema-2.0.0.derby.sql
Initialization script completed
schemaTool completed
If the following error appears at runtime, the step above was not executed; run it first:
Exception in thread "main" java.lang.RuntimeException: Hive metastore database is not initialized. Please use schematool (e.g. ./schematool -initSchema -dbType ...) to create the schema. If needed, don't forget to include the option to auto-create the underlying database in your JDBC connection string (e.g. ?createDatabaseIfNotExist=true for mysql)
If schematool fails during initialization with the following error:
Initialization script hive-schema-2.1.0.derby.sql
Error: FUNCTION 'NUCLEUS_ASCII' already exists. (state=X0Y68, code=30000)
org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
*** schemaTool failed ***
it means the metastore directory already contains files; the fix is to empty that directory (the /opt/hive/metastore_db folder configured earlier) and re-run the initialization.
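The fix just described can be sketched as follows; a local stand-in directory is used here for illustration, while on a real node the path is the /opt/hive/metastore_db configured earlier:

```shell
# Sketch of the fix: remove the stale Derby metastore directory,
# then re-run schematool.
METASTORE_DB=./metastore_db          # stand-in for /opt/hive/metastore_db
mkdir -p "$METASTORE_DB"             # pretend stale files exist
rm -rf "$METASTORE_DB"               # empty the metastore location
# schematool -initSchema -dbType derby   # then re-initialize (needs Hive)
```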
9. Start Hive
hive --service metastore &      # start the metastore service
hive --service hiveserver2 &    # start the hiveserver2 service
hive                            # start the Hive CLI
Using Hive
1. Create a database
CREATE DATABASE myhive;
2. Create a table
CREATE TABLE doc_hive (id int, username string, sex int, age int, email string, createtime string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
3. Load data
LOAD DATA LOCAL INPATH '/home/hadoopuser/doc/t-1.txt' OVERWRITE INTO TABLE doc_hive;
Verify: select * from myhive.doc_hive;
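The LOAD DATA statement above expects a tab-separated file matching the six columns of doc_hive. A minimal sketch of building such a file (the sample rows are made up):

```shell
# Build a hypothetical t-1.txt with tab-separated fields, matching the
# schema (id, username, sex, age, email, createtime) and the '\t' delimiter
# declared in the CREATE TABLE statement.
printf '1\ttom\t1\t25\ttom@example.com\t2016-08-01 10:00:00\n'   >  t-1.txt
printf '2\tlucy\t0\t23\tlucy@example.com\t2016-08-02 11:30:00\n' >> t-1.txt
awk -F'\t' '{print NF}' t-1.txt    # every row should have 6 fields
```

The file would then be placed at the path given to LOAD DATA LOCAL INPATH.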
Specific Hive usage will be covered in upcoming posts.