Hive startup and common commands
# Startup order matters: MySQL (the metastore backend) first, then Hadoop, then Hive
service mysql start    # start MySQL, which backs the Hive metastore
start-all.sh           # start the HDFS and YARN daemons
jps                    # confirm the daemons are running
hive                   # enter the Hive CLI
[hadoop@hadoop000 ~]$ mysql -uroot -p
Enter password:
mysql> show databases;
Common Hive commands
hive> create database hive1;
hive> create database if not exists hive2
> comment 'this is test database'
> with dbproperties('creator'='test','date'='2020-08-27');
use hive1;
hive> create table helloworld(id int,name string);
hive> desc helloworld;
hive> show tables;
The drop-table below failed at first; the cause was an outdated MySQL JDBC connector.
# Downloaded mysql-connector-java-5.1.40-bin.jar from the official site as the fix
drop table helloworld;
After the failed drop, the other commands in the session also started erroring; exiting the Hive CLI and re-entering made everything work again.
hive> exit;
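A common fix for this kind of metastore error is to replace the old MySQL connector jar under Hive's lib directory (e.g. ~/app/hive-1.1.0-cdh5.7.0/lib in this walkthrough) with the newer one, then restart Hive. A sketch of that step, using a simulated directory layout so the commands are safe to run anywhere:

```shell
# Simulated layout for illustration; in a real install HIVE_LIB would be
# something like ~/app/hive-1.1.0-cdh5.7.0/lib.
DEMO=$(mktemp -d)
HIVE_LIB="$DEMO/hive/lib"                            # stand-in for $HIVE_HOME/lib
mkdir -p "$HIVE_LIB"
touch "$DEMO/mysql-connector-java-5.1.40-bin.jar"    # the newer jar you downloaded

# The actual fix: copy the newer connector into Hive's lib dir, then restart Hive.
cp "$DEMO/mysql-connector-java-5.1.40-bin.jar" "$HIVE_LIB/"
ls "$HIVE_LIB"
```

If an older mysql-connector jar is already present in the lib directory, remove it so Hive does not pick it up first on the classpath.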
hive> use hive2;
hive> create table hello2(id int,name string)
> row format delimited fields terminated by '\t';
# '\t' means fields are separated by tab characters
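Because the table's delimiter is a literal tab, the data file must contain real tab characters between fields; spaces will not split into columns. A quick way to create and check such a file from the shell (using /tmp/hello.txt as an illustrative path):

```shell
# Create a tab-delimited file exactly as vim would store it ('\t' between fields).
printf '1\tzhangsan\n2\tlisi\n3\twangwu\n' > /tmp/hello.txt

# Verify each line splits into exactly two tab-separated fields.
awk -F'\t' '{ print NF, $1, $2 }' /tmp/hello.txt
# → 2 1 zhangsan
#   2 2 lisi
#   2 3 wangwu
```

If any line prints a field count other than 2, that row will load into the table with NULL columns.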
[hadoop@hadoop000 ~]$ cd ~/app/tmp/
[hadoop@hadoop000 tmp]$ mkdir data
[hadoop@hadoop000 tmp]$ cd ./data/
[hadoop@hadoop000 data]$ vim hello.txt
1 zhangsan
2 lisi
3 wangwu
[hadoop@hadoop000 data]$ cat hello.txt
1 zhangsan
2 lisi
3 wangwu
[hadoop@hadoop000 data]$ pwd
/home/hadoop/app/tmp/data
# Note: without overwrite, repeated loads keep appending data to the table.
# overwrite replaces the table's existing data; without it, each re-run of the
# load below adds a new file: hello_copy_1.txt, hello_copy_2.txt, and so on.
hive> load data local inpath '/home/hadoop/app/tmp/data/hello.txt'
> overwrite into table hello2;
hive> select * from hello2;
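The append behavior described above can be pictured as a rename-on-collision scheme: without overwrite, the table's directory keeps its existing files and each newly loaded file gets the next free _copy_N name. A plain-shell sketch of that naming pattern (a simulation with stand-in paths, not Hive itself):

```shell
# Stand-ins: WAREHOUSE mimics the table's HDFS dir, /tmp/hello_src.txt the local file.
WAREHOUSE=$(mktemp -d)
echo 'sample' > /tmp/hello_src.txt

# Mimic "load data ... into table" WITHOUT overwrite: never clobber, pick next name.
load() {
  target="$WAREHOUSE/hello.txt"; n=0
  while [ -e "$target" ]; do
    n=$((n+1))
    target="$WAREHOUSE/hello_copy_$n.txt"
  done
  cp /tmp/hello_src.txt "$target"
}

load; load; load
# The dir now holds hello.txt, hello_copy_1.txt, hello_copy_2.txt
ls "$WAREHOUSE"
```

With overwrite, by contrast, the directory's contents are cleared first, so only a single hello.txt remains no matter how many times the load runs.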
[hadoop@hadoop000 data]$ hadoop dfs -ls /user/hive/warehouse/hive2.db/hello2
[hadoop@hadoop000 data]$ hadoop dfs -text /user/hive/warehouse/hive2.db/hello2/hello.txt
(1) Show databases
hive> show databases;
OK
default
Time taken: 0.038 seconds, Fetched: 3 row(s)
(2) Show tables
hive> show tables;
OK
helloworld
(3) Create a Hive database
hive> create database hive1;
OK
Time taken: 0.082 seconds
hive> create database if not exists hive2
> comment 'this is test database'
> with dbproperties('creator'='test','date'='2020-08-27');
OK
Time taken: 0.375 seconds
(4) Switch database
hive> use hive2;
OK
Time taken: 0.022 seconds
(5) Create a table
hive> create table helloworld(id int,name string);
OK
Time taken: 0.731 seconds
(6) Show the table schema
hive> desc helloworld;
OK
id                      int
name                    string
Time taken: 0.231 seconds, Fetched: 2 row(s)
(7) Load local data
[hadoop@hadoop000 data]$ cat hello.txt
1 zhangsan
2 lisi
3 wangwu
hive> load data local inpath '/home/hadoop/app/tmp/data/hello.txt'
> overwrite into table hello;
Loading data to table hive2.hello
[Warning] could not update stats.
OK
Time taken: 25.634 seconds
hive> select * from hello;
OK
1 zhangsan
2 lisi
3 wangwu
Time taken: 4.396 seconds, Fetched: 3 row(s)
(8) Where Hive stores database files on HDFS
[hadoop@hadoop000 data]$ hadoop dfs -ls /user/hive/warehouse/hive2.db/hello
-rwxr-xr-x 1 hadoop supergroup 27 2020-08-27 14:50 /user/hive/warehouse/hive2.db/hello/hello.txt
-rwxr-xr-x 1 hadoop supergroup 27 2020-08-27 15:04 /user/hive/warehouse/hive2.db/hello/hello_copy_1.txt
[hadoop@hadoop000 data]$ hadoop dfs -text /user/hive/warehouse/hive2.db/hello/hello.txt
1 zhangsan
2 lisi
3 wangwu
[hadoop@hadoop000 data]$
(9) Show the current database in the CLI prompt (effective for the current session only)
hive> set hive.cli.print.current.db=true;
hive (hive2)> exit;
To make it effective globally:
add the following configuration to hive-site.xml
[hadoop@hadoop000 data]$ cd ~/app/hive-1.1.0-cdh5.7.0/conf/
[hadoop@hadoop000 conf]$ vim hive-site.xml
[hadoop@hadoop000 conf]$
<property>
<name>hive.cli.print.current.db</name>
<value>true</value>
</property>
Restart Hive; the prompt now shows the current database:
[hadoop@hadoop000 conf]$ hive
hive (default)> use hive2;
OK
Time taken: 1.144 seconds
hive (hive2)>