Summary of Basic Hive Statements

I. Overview

Hive's table-creation statements are DDL; see the official documentation: LanguageManual DDL.


II. Creating Tables

Syntax:

The first form of CREATE TABLE:

Notes:
TEMPORARY — the table is visible only within the current session; when the session ends (think of it as the program terminating), the table is gone.
EXTERNAL — an external table's files live at an HDFS path you specify rather than under Hive's default warehouse path.
    Dropping an EXTERNAL table differs from dropping a managed table: DROP only removes the metastore entry, leaving the data files on HDFS.
    External tables make it easy to share data with other databases and programs, such as Impala.
Without IF NOT EXISTS, CREATE TABLE fails if the table already exists; add IF NOT EXISTS to avoid the error.
Note that table names are case-insensitive.
Examples:
create temporary table my.table1 (id int);
create external table my.table2 (id int);
create table if not exists my.table3 (id int);
-- (Note: TEMPORARY available in Hive 0.14.0 and later)
CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name
   -- column definitions, e.g. id INT COMMENT 'index', name STRING COMMENT 'name'
  [(col_name data_type [COMMENT col_comment], ... [constraint_specification])]
  [COMMENT table_comment]  -- a comment describing the table
  -- Partitioning: the definitions in parentheses look like column definitions.
  -- Partitions divide the table coarsely by columns such as date or city;
  -- each partition maps to one directory, which can greatly speed up queries
  -- that filter on the partition columns.
  [PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)]
  -- Bucketing: on top of partitioning, rows can be bucketed. A hash is computed
  -- over the chosen columns and taken modulo the number of buckets to decide
  -- which bucket a row lands in. With large enough data volumes, bucketing can
  -- yield better query efficiency than partitioning alone, and it also makes
  -- sampling more efficient.
  [CLUSTERED BY (col_name, col_name, ...)
            [SORTED BY (col_name [ASC|DESC], ...)] INTO num_buckets BUCKETS]  -- bucketing
  -- Roughly speaking, SKEWED BY helps a great deal with data skew (the author has not used it).
  [SKEWED BY (col_name, col_name, ...)                  -- (Note: Available in Hive 0.10.0 and later)]
     ON ((col_value, col_value, ...), (col_value, col_value, ...), ...)
     [STORED AS DIRECTORIES]
  [
   [ROW FORMAT row_format]
   [STORED AS file_format]
     | STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)]  -- (Note: Available in Hive 0.6.0 and later)
  ]   -- the file storage format; STORED BY refers to a custom storage handler, which is rarely used (the author has not used it)
  [LOCATION hdfs_path]
  [TBLPROPERTIES (property_name=property_value, ...)]    -- additional table properties and descriptions
                                                         -- (Note: Available in Hive 0.6.0 and later)
  [AS select_statement];
   -- populate the table at creation time from a SELECT over other tables
   -- (Note: AS select_statement available in Hive 0.5.0 and later; not supported for external tables)
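To make the bucketing clauses above concrete, here is a hedged sketch (table and column names are invented for illustration; `hive.enforce.bucketing` is only needed on older Hive versions):

```sql
-- Hypothetical example: bucket users by id into 32 buckets, sorted within each bucket.
CREATE TABLE user_bucketed (
  id   BIGINT,
  name STRING
)
CLUSTERED BY (id) SORTED BY (id ASC) INTO 32 BUCKETS
STORED AS ORC;

-- On older Hive versions, enforce bucketing on insert:
SET hive.enforce.bucketing = true;

-- Bucketing also makes sampling efficient: read roughly 1/32 of the data.
SELECT * FROM user_bucketed TABLESAMPLE (BUCKET 1 OUT OF 32 ON id);
```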

CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name
  LIKE existing_table_or_view_name
  [LOCATION hdfs_path];

 Notes:
 Data types
data_type
  : primitive_type
  | array_type
  | map_type
  | struct_type
  | union_type  -- (Note: Available in Hive 0.7.0 and later)

Primitive data types
primitive_type
  : TINYINT
  | SMALLINT
  | INT
  | BIGINT
  | BOOLEAN
  | FLOAT
  | DOUBLE
  | DOUBLE PRECISION -- (Note: Available in Hive 2.2.0 and later)
  | STRING
  | BINARY      -- (Note: Available in Hive 0.8.0 and later)
  | TIMESTAMP   -- (Note: Available in Hive 0.8.0 and later)
  | DECIMAL     -- (Note: Available in Hive 0.11.0 and later)
  | DECIMAL(precision, scale)  -- (Note: Available in Hive 0.13.0 and later)
  | DATE        -- (Note: Available in Hive 0.12.0 and later)
  | VARCHAR     -- (Note: Available in Hive 0.12.0 and later)
  | CHAR        -- (Note: Available in Hive 0.13.0 and later)

 Complex data types
array_type
  : ARRAY < data_type >

map_type
  : MAP < primitive_type, data_type >

struct_type
  : STRUCT < col_name : data_type [COMMENT col_comment], ...>

union_type
   : UNIONTYPE < data_type, data_type, ... >  -- (Note: Available in Hive 0.7.0 and later)

Row format (how data is laid out in files on HDFS)
row_format
  : DELIMITED [FIELDS TERMINATED BY char [ESCAPED BY char]] [COLLECTION ITEMS TERMINATED BY char]
        [MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
        [NULL DEFINED AS char]   -- (Note: Available in Hive 0.13 and later)
  | SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, property_name=property_value, ...)]

file_format:
  : SEQUENCEFILE
  | TEXTFILE    -- (Default, depending on hive.default.fileformat configuration)
  | RCFILE      -- (Note: Available in Hive 0.6.0 and later)
  | ORC         -- (Note: Available in Hive 0.11.0 and later)
  | PARQUET     -- (Note: Available in Hive 0.13.0 and later)
  | AVRO        -- (Note: Available in Hive 0.14.0 and later)
  | INPUTFORMAT input_format_classname OUTPUTFORMAT output_format_classname

constraint_specification:
  : [, PRIMARY KEY (col_name, ...) DISABLE NOVALIDATE ]
    [, CONSTRAINT constraint_name FOREIGN KEY (col_name, ...) REFERENCES table_name(col_name, ...) DISABLE NOVALIDATE ]
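For illustration, a hedged sketch of constraint_specification (table and column names invented; Hive constraints are informational only, hence DISABLE NOVALIDATE, and they require Hive 2.1.0 or later):

```sql
-- Hypothetical table with an informational primary key.
CREATE TABLE orders (
  order_id BIGINT,
  buyer_id BIGINT,
  PRIMARY KEY (order_id) DISABLE NOVALIDATE
);
```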

1) Creating an external table

CREATE EXTERNAL TABLE ad_rule_result(
  source string COMMENT '',
 entity_type int COMMENT '')
 PARTITIONED BY(day string,hour string)
 ROW FORMAT DELIMITED
 FIELDS TERMINATED BY ','
 LOCATION 'hdfs://mycluster/user/vc/public/ad_rule_result';

2) Creating a managed (internal) table:

CREATE TABLE dw.zjmj_only_refund_money_risk_result_df(
 buyer_id string COMMENT 'buyer ID',
 refund_order_count int COMMENT 'number of refund-only orders',
 order_count int COMMENT 'number of valid orders',
 refund_order_rate float COMMENT 'share of refund-only orders',
 solar_name string COMMENT 'penalty name',
 day string COMMENT ''
 ) COMMENT 'buyers hit by the refund-only rule'
 LOCATION 'hdfs://user/hive/warehouse/dw.db/zjmj_only_refund_money_risk_result_df';

3) Copying a table structure from an existing table

Create a new table with the same structure as an existing one:

hive> create table new_table like records;
create table if not exists temp.ad_rule_result like vc.ad_rule_result;

4) Saving query results as a table

Store a SELECT result directly as a table: create table XX as select
There are two cases:

① INSERT OVERWRITE TABLE ... SELECT: the target table already exists

hive>
> FROM records2
> INSERT OVERWRITE TABLE stations_by_year SELECT year, COUNT(DISTINCT station) GROUP BY year
> INSERT OVERWRITE TABLE records_by_year SELECT year, COUNT(1) GROUP BY year
> INSERT OVERWRITE TABLE good_records_by_year SELECT year, COUNT(1) WHERE temperature != 9999 AND (quality = 0 OR quality = 1 OR quality = 4 OR quality = 5 OR quality = 9) GROUP BY year;

② CREATE TABLE ... AS SELECT: the target table does not yet exist

hive> CREATE TABLE target AS SELECT col1,col2 FROM source;
hive> create table temp.ad_rule_result_2 as SELECT * FROM vc.ad_rule_result;

5) Creating a partitioned table

Create the partitioned table:

hive> create table logs(ts bigint,line string) partitioned by (dt String,country String);

Load data into a partition:

hive> load data local inpath '/home/hadoop/input/hive/partitions/file1' into table logs partition (dt='2001-01-01',country='GB');

Show the table's partitions:

hive> show partitions logs;
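Partitions can also be managed explicitly. A hedged sketch against the logs table above (the partition values are invented):

```sql
-- Add a partition without loading data; the directory is created under the table's location.
ALTER TABLE logs ADD IF NOT EXISTS PARTITION (dt='2001-01-02', country='US');

-- Drop a partition; for a managed table this also deletes the partition's data.
ALTER TABLE logs DROP IF EXISTS PARTITION (dt='2001-01-01', country='GB');
```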

6) Creating a temporary table

For example:

create temporary table tmp as select * from test.test001 ;

Note:
A temporary table is visible only within the current session. Its data is stored in the user's scratch directory and is deleted when the session ends. If a temporary table is created with the same name as an existing non-temporary table in the current database, then within this session that name refers to the temporary table; the original table cannot be used until the temporary table is dropped or renamed.

Temporary tables have the following limitations:

1) Partition columns are not supported
2) Indexes cannot be created

Since Hive 1.1.0, temporary table data can be stored in memory, ssd, or default storage, controlled by the hive.exec.temporary.table.storage configuration property.
Temporary tables are usually created with CREATE TEMPORARY TABLE ....
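A hedged sketch of the storage setting mentioned above (session-level; the table name is invented, and test.test001 is the example table used earlier):

```sql
-- Choose where temporary table data is kept: memory, ssd, or default.
SET hive.exec.temporary.table.storage = memory;

CREATE TEMPORARY TABLE tmp_sales AS
SELECT * FROM test.test001;
```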

III. Creating Views

hive> CREATE VIEW valid_records AS SELECT * FROM records2 WHERE temperature !=9999;

Show detailed information about a view:

hive> DESCRIBE EXTENDED valid_records;

IV. Querying Tables

List all tables:

hive> SHOW TABLES;
        lists all the tables
hive> SHOW TABLES '.*s';

lists all the tables that end with 's'. The pattern matching follows Java regular
expressions; see the documentation at http://java.sun.com/javase/6/docs/api/java/util/regex/Pattern.html

Show a table's structure:

hive> DESCRIBE invites;
        shows the list of columns

Inner join:

hive> SELECT sales.*, things.* FROM sales JOIN things ON (sales.id = things.id);

See how many MapReduce jobs Hive will use for a query:

hive> Explain SELECT sales.*, things.* FROM sales JOIN things ON (sales.id = things.id);

Outer joins:

hive> SELECT sales.*, things.* FROM sales LEFT OUTER JOIN things ON (sales.id = things.id);
hive> SELECT sales.*, things.* FROM sales RIGHT OUTER JOIN things ON (sales.id = things.id);
hive> SELECT sales.*, things.* FROM sales FULL OUTER JOIN things ON (sales.id = things.id);

IN subqueries: Hive does not support them (in older versions), but LEFT SEMI JOIN can be used instead:

hive> SELECT * FROM things LEFT SEMI JOIN sales ON (sales.id = things.id);

Map join: Hive can load the smaller table into each mapper's memory to perform the join:

hive> SELECT /*+ MAPJOIN(things) */ sales.*, things.* FROM sales JOIN things ON (sales.id = things.id);
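Instead of the MAPJOIN hint, newer Hive versions can convert a join to a map join automatically. A hedged sketch (the size threshold value here is illustrative):

```sql
-- Let Hive convert the join to a map join when the small table
-- fits under the configured size threshold (in bytes).
SET hive.auto.convert.join = true;
SET hive.auto.convert.join.noconditionaltask.size = 10000000;

SELECT sales.*, things.* FROM sales JOIN things ON (sales.id = things.id);
```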

V. Altering and Dropping Tables

Rename a table:

hive> ALTER TABLE source RENAME TO target;

Add a new column:

hive> ALTER TABLE invites ADD COLUMNS (new_col2 INT COMMENT 'a comment');

Drop a table:

hive> DROP TABLE records;

Delete a table's data while keeping its definition:

hive> dfs -rm -r /user/hive/warehouse/records;

Load data from a local file:

hive> LOAD DATA LOCAL INPATH '/home/hadoop/input/ncdc/micro-tab/sample.txt' OVERWRITE INTO TABLE records;
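Omitting LOCAL loads from HDFS instead, and the source files are moved (not copied) into the table's directory. A hedged sketch with an invented HDFS path:

```sql
-- Move files already on HDFS into the table; OVERWRITE replaces existing data.
LOAD DATA INPATH '/user/hadoop/staging/sample.txt' OVERWRITE INTO TABLE records;
```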

List all functions:

hive> show functions;

Show a function's usage:

hive> describe function substr;

Access arrays, maps, and structs:

hive> select col1[0],col2['b'],col3.c from complex;
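The query above assumes a table with complex-typed columns; a hedged sketch of what its DDL might look like (the field names are inferred from the query, the types are invented):

```sql
-- Hypothetical DDL for the `complex` table queried above.
CREATE TABLE complex (
  col1 ARRAY<INT>,
  col2 MAP<STRING, INT>,
  col3 STRUCT<a:INT, b:STRING, c:DOUBLE>
);
```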

VI. Hands-On Practice

1. Creating a partitioned table


CREATE TABLE IF NOT EXISTS hive_test_db.stg_hive_es_test (
id  BIGINT COMMENT 'primary key id',
road_id  STRING COMMENT 'road id',
road_name  STRING COMMENT 'road name',
road_dir_no  BIGINT COMMENT 'direction of travel; 1: toward Beijing, 2: toward Xiongan',
flow  double COMMENT 'traffic flow, kept to 2 decimal places',
time_new STRING COMMENT 'time') COMMENT 'road-condition metric - average speed - offline - updated daily'
PARTITIONED BY (
dt  STRING COMMENT 'yyyy-MM-dd'
) 
ROW FORMAT
SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
'field.delim' = '\u0001',
'serialization.format' = '\u0001'
) 
STORED AS 
INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat' 
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';

2. Inserting data into the partitioned table

insert into table  hive_test_db.stg_hive_es_test partition(dt='2022-03-31') values
(1,'10001','京雄test',2,65.21,'2022-03-31 00:09:08'),
(2,'10002','京雄test',2,65.21,'2022-03-31 00:09:09'),
(3,'10003','京雄test',2,65.21,'2022-03-31 10:09:09'),
(4,'10004','京雄test',2,65.21,'2022-03-31 02:09:09'),
(5,'10005','京雄test',2,65.21,'2022-03-31 10:09:09'),
(6,'10006','京雄test',2,65.21,'2022-03-31 00:09:09'),
(7,'10007','京雄test',2,65.21,'2022-03-31 23:09:09'),
(8,'10008','京雄test',2,65.21,'2022-03-31 11:09:09'),
(9,'10009','京雄test',2,65.21,'2022-03-31 12:09:09'),
(10,'10010','京雄test',2,65.21,'2022-03-31 13:09:09');

3. Querying the data

select * from hive_test_db.stg_hive_es_test where dt = '2022-03-31';


4. Creating a new table with the same structure, using LIKE

create table hive_test_db.new_stg_hive_es_test like hive_test_db.stg_hive_es_test;

This statement reported an error.

5. Creating a new table with the same structure, using CREATE TABLE ... AS SELECT

create table hive_test_db.copy_stg_hive_es_test as SELECT * FROM hive_test_db.stg_hive_es_test;

6. Inserting into a target table that already exists

 INSERT OVERWRITE TABLE stations_by_year SELECT year, COUNT(DISTINCT station) FROM records2 GROUP BY year;

Related articles:
Hive learning series, part 3 (creating tables with CREATE TABLE)

Those who keep doing often succeed; those who keep walking often arrive.