How to avoid a cross join in Hive?

6ie5vjzr, posted 2021-06-27 in Hive

I have two tables. One contains 1 million records and the other contains 20 million records.

table 1
    value
    (1, 1)
    (2, 2)
    (3, 3)
    (4, 4)
    (5, 4)
    ....

table 2
    value
    (55, 11)
    (33, 22)
    (44, 66)
    (22, 11)
    (11, 33)
    ....

I need to multiply the values from table 1 by the values from table 2, rank the results, and then take the top 5 for each value in table 1. The result would look like this:

value from table 1, top 5 for each value in table 1
    (1, 1), 1*44 + 1*66 = 110
    (1, 1), 1*55 + 1*11 = 66
    (1, 1), 1*33 + 1*22 = 55
    (1, 1), 1*11 + 1*33 = 44
    (1, 1), 1*22 + 1*11 = 33
    .....

I tried using a cross join in Hive, but it always fails because the tables are too big.
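
For illustration, the naive version looks roughly like the sketch below (the column names x and y are placeholders, not my real schema):

    -- naive approach: full cross join, then rank per table 1 row
    -- 1M x 20M produces ~20 trillion intermediate rows, which is why it fails
    SELECT x1, y1, score
      FROM (SELECT t1.x AS x1, t1.y AS y1,
                   t1.x*t2.x + t1.y*t2.y AS score,
                   row_number() OVER (PARTITION BY t1.x, t1.y
                                      ORDER BY t1.x*t2.x + t1.y*t2.y DESC) AS rn
              FROM table1 t1
                   CROSS JOIN table2 t2
           ) ranked
     WHERE rn <= 5;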

vm0i2vca

First select the top 5 from table 2, then cross join the result with the first table. This produces the same result as cross joining the two tables and taking the top 5 afterwards, but the number of rows being joined is far smaller. A cross join against a small 5-row dataset is converted to a map join and runs at the speed of a full scan of table 1.
See the demo below: the cross join is converted to a map join. Note the "Map Join Operator" in the plan and this warning: "Warning: Map Join MAPJOIN[19][bigTable=?] in task 'Map 1' is a cross product":

hive> set hive.cbo.enable=true;
hive> set hive.compute.query.using.stats=true;
hive> set hive.execution.engine=tez;
hive> set hive.auto.convert.join.noconditionaltask=false;
hive> set hive.auto.convert.join=true;
hive> set hive.vectorized.execution.enabled=true;
hive> set hive.vectorized.execution.reduce.enabled=true;
hive> set hive.vectorized.execution.mapjoin.native.enabled=true;
hive> set hive.vectorized.execution.mapjoin.native.fast.hashtable.enabled=true;
hive>
    > explain
    > with table1 as (
    > select stack(5,1,2,3,4,5) as id
    > ),
    > table2 as
    > (select t2.id
    >    from (select t2.id, dense_rank() over(order by id desc) rnk
    >            from (select stack(11,55,33,44,22,11,1,2,3,4,5,6) as id) t2
    >         )t2
    >   where t2.rnk<6
    > )
    > select t1.id, t1.id*t2.id
    >   from table1 t1
    >        cross join table2 t2;
Warning: Map Join MAPJOIN[19][bigTable=?] in task 'Map 1' is a cross product
OK
Plan not optimized by CBO.

Vertex dependency in root stage
Map 1 <- Reducer 3 (BROADCAST_EDGE)
Reducer 3 <- Map 2 (SIMPLE_EDGE)

Stage-0
   Fetch Operator
      limit:-1
      Stage-1
         Map 1
         File Output Operator [FS_17]
            compressed:false
            Statistics:Num rows: 1 Data size: 26 Basic stats: COMPLETE Column stats: NONE
            table:{"serde:":"org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe","input format:":"org.apache.hadoop.mapred.TextInputFormat","output format:":"org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"}
            Select Operator [SEL_16]
               outputColumnNames:["_col0","_col1"]
               Statistics:Num rows: 1 Data size: 26 Basic stats: COMPLETE Column stats: NONE
               Map Join Operator [MAPJOIN_19]
               |  condition map:[{"":"Inner Join 0 to 1"}]
               |  HybridGraceHashJoin:true
               |  keys:{}
               |  outputColumnNames:["_col0","_col1"]
               |  Statistics:Num rows: 1 Data size: 26 Basic stats: COMPLETE Column stats: NONE
               |<-Reducer 3 [BROADCAST_EDGE]
               |  Reduce Output Operator [RS_14]
               |     sort order:
               |     Statistics:Num rows: 1 Data size: 0 Basic stats: PARTIAL Column stats: COMPLETE
               |     value expressions:_col0 (type: int)
               |     Select Operator [SEL_9]
               |        outputColumnNames:["_col0"]
               |        Statistics:Num rows: 1 Data size: 0 Basic stats: PARTIAL Column stats: COMPLETE
               |        Filter Operator [FIL_18]
               |           predicate:(dense_rank_window_0 < 6) (type: boolean)
               |           Statistics:Num rows: 1 Data size: 0 Basic stats: PARTIAL Column stats: COMPLETE
               |           PTF Operator [PTF_8]
               |              Function definitions:[{"Input definition":{"type:":"WINDOWING"}},{"partition by:":"0","name:":"windowingtablefunction","order by:":"_col0(DESC)"}]
               |              Statistics:Num rows: 1 Data size: 0 Basic stats: PARTIAL Column stats: COMPLETE
               |              Select Operator [SEL_7]
               |              |  outputColumnNames:["_col0"]
               |              |  Statistics:Num rows: 1 Data size: 0 Basic stats: PARTIAL Column stats: COMPLETE
               |              |<-Map 2 [SIMPLE_EDGE]
               |                 Reduce Output Operator [RS_6]
               |                    key expressions:0 (type: int), col0 (type: int)
               |                    Map-reduce partition columns:0 (type: int)
               |                    sort order:+-
               |                    Statistics:Num rows: 1 Data size: 48 Basic stats: COMPLETE Column stats: COMPLETE
               |                    UDTF Operator [UDTF_5]
               |                       function name:stack
               |                       Statistics:Num rows: 1 Data size: 48 Basic stats: COMPLETE Column stats: COMPLETE
               |                       Select Operator [SEL_4]
               |                          outputColumnNames:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9","_col10","_col11"]
               |                          Statistics:Num rows: 1 Data size: 48 Basic stats: COMPLETE Column stats: COMPLETE
               |                          TableScan [TS_3]
               |                             alias:_dummy_table
               |                             Statistics:Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: COMPLETE
               |<-UDTF Operator [UDTF_2]
                     function name:stack
                     Statistics:Num rows: 1 Data size: 24 Basic stats: COMPLETE Column stats: COMPLETE
                     Select Operator [SEL_1]
                        outputColumnNames:["_col0","_col1","_col2","_col3","_col4","_col5"]
                        Statistics:Num rows: 1 Data size: 24 Basic stats: COMPLETE Column stats: COMPLETE
                        TableScan [TS_0]
                           alias:_dummy_table
                           Statistics:Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: COMPLETE

Time taken: 0.199 seconds, Fetched: 66 row(s)

Replace the stacks in my demo with your tables.
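
Applied to your tables, the pattern could look like the sketch below. The column names x and y, and the criterion used to pick the top 5 from table 2 (x + y, taken from the worked example in the question), are assumptions; adjust them to your real schema and ranking rule:

    -- sketch only: table1(x, y) and table2(x, y) are assumed column names
    with top5 as
    (select x, y
       from (select x, y, dense_rank() over(order by x + y desc) rnk
               from table2
            ) t
      where t.rnk < 6
    )
    select t1.x, t1.y,
           t1.x*t.x + t1.y*t.y as score   -- dot product as in the question
      from table1 t1
           cross join top5 t;             -- 5-row side broadcasts as a map join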
