Joining a partitioned and bucketed Hive table with a bucketed-only (non-partitioned) table in Hive

qlvxas9a asked on 2021-06-24 in Hive

I have two tables:
q6_cms_list_key1 (bucketed by cm and se), partitioned by trans_dt: 99 000 000 rows
q6_cm_first_visit (bucketed by cm and se): 25 000 000 rows
I am creating another table with the following query:

insert into table q6_cm_first_visit1 PARTITION (trans) 
     select distinct 
            b.se10, b.dealer_id, b.terminal_id, b.se, 
            b.comp_start_n, b.comp_end_n, b.latest_date,
            b.cm,a.first_visit_date,b.trans 
       from q6_cm_first_visit a 
            inner join q6_cms_list_key1 b 
            on b.trans BETWEEN DATE_SUB('${hiveconf:run.date}', 180) AND '${hiveconf:run.date}' 
               and a.cm = b.cm and a.se = b.se;

The query takes a lot of time. How can I reduce it? Can a non-partitioned, bucketed table be joined efficiently with a partitioned, bucketed table?

The formatted plan is below:

STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-2 depends on stages: Stage-1
  Stage-3 depends on stages: Stage-2
  Stage-0 depends on stages: Stage-3
  Stage-4 depends on stages: Stage-0

STAGE PLANS:
  Stage: Stage-1
    Map Reduce
      Map Operator Tree:
          TableScan
            alias: a
            Statistics: Num rows: 25638813900 Data size: 10691743838084 Basic stats: COMPLETE Column stats: NONE
            Filter Operator
              predicate: (cm is not null and se is not null) (type: boolean)
              Statistics: Num rows: 25638813900 Data size: 10691743838084 Basic stats: COMPLETE Column stats: NONE
              Reduce Output Operator
                key expressions: cm (type: string), eff_se (type: bigint)
                sort order: ++
                Map-reduce partition columns: cm (type: string), se (type: bigint)
                Statistics: Num rows: 25638813900 Data size: 10691743838084 Basic stats: COMPLETE Column stats: NONE
                value expressions: first_visit_date (type: string)
          TableScan
            alias: b
            Statistics: Num rows: 91896551441 Data size: 52063175338060 Basic stats: COMPLETE Column stats: NONE
            Filter Operator
              predicate: (cm is not null and se is not null and trans_dt BETWEEN null AND '${hiveconf:run.date}') (type: boolean)
              Statistics: Num rows: 1 Data size: 566 Basic stats: COMPLETE Column stats: NONE
              Reduce Output Operator
                key expressions: cm (type: string), se (type: bigint)
                sort order: ++
                Map-reduce partition columns: cm (type: string), se (type: bigint)
                Statistics: Num rows: 1 Data size: 566 Basic stats: COMPLETE Column stats: NONE
                value expressions: se10 (type: string), dealer_id (type: string), terminal_id (type: string), comp_start_n (type: string), comp_end_n (type: string), latest_date (type: date), trans_dt (type: string)
      Reduce Operator Tree:
        Join Operator
          condition map:
               Inner Join 0 to 1
          keys:
            0 cm (type: string), se (type: bigint)
            1 cm (type: string), se (type: bigint)
          outputColumnNames: _col5, _col9, _col10, _col11, _col12, _col13, _col14, _col15, _col16, _col17
          Statistics: Num rows: 28202695901 Data size: 11760918476803 Basic stats: COMPLETE Column stats: NONE
          Select Operator
            expressions: _col5 (type: string), _col9 (type: string), _col10 (type: string), _col11 (type: string), _col12 (type: bigint), _col13 (type: string), _col14 (type: string), _col15 (type: date), _col16 (type: string), _col17 (type: string), _col9 (type: string), _col10 (type: string), _col11 (type: string), _col12 (type: bigint), _col13 (type: string), _col14 (type: string), _col15 (type: date), _col16 (type: string), _col5 (type: string), _col17 (type: string)
            outputColumnNames: _col5, _col9, _col10, _col11, _col12, _col13, _col14, _col15, _col16, _col17, _col9, _col10, _col11, _col12, _col13, _col14, _col15, _col16, _col5, _col17
            Statistics: Num rows: 28202695901 Data size: 11760918476803 Basic stats: COMPLETE Column stats: NONE
            Group By Operator
              keys: _col9 (type: string), _col10 (type: string), _col11 (type: string), _col12 (type: bigint), _col13 (type: string), _col14 (type: string), _col15 (type: date), _col16 (type: string), _col5 (type: string), _col17 (type: string)
              mode: hash
              outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9
              Statistics: Num rows: 28202695901 Data size: 11760918476803 Basic stats: COMPLETE Column stats: NONE
              File Output Operator
                compressed: true
                table:
                    input format: org.apache.hadoop.mapred.SequenceFileInputFormat
                    output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
                    serde: org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe

  Stage: Stage-2
    Map Reduce
      Map Operator Tree:
          TableScan
            Reduce Output Operator
              key expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string), _col3 (type: bigint), _col4 (type: string), _col5 (type: string), _col6 (type: date), _col7 (type: string), _col8 (type: string), _col9 (type: string)
              sort order: ++++++++++
              Map-reduce partition columns: _col0 (type: string), _col1 (type: string), _col2 (type: string), _col3 (type: bigint), _col4 (type: string), _col5 (type: string), _col6 (type: date), _col7 (type: string), _col8 (type: string), _col9 (type: string)
              Statistics: Num rows: 28202695901 Data size: 11760918476803 Basic stats: COMPLETE Column stats: NONE
      Reduce Operator Tree:
        Group By Operator
          keys: KEY._col0 (type: string), KEY._col1 (type: string), KEY._col2 (type: string), KEY._col3 (type: bigint), KEY._col4 (type: string), KEY._col5 (type: string), KEY._col6 (type: date), KEY._col7 (type: string), KEY._col8 (type: string), KEY._col9 (type: string)
          mode: mergepartial
          outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9
          Statistics: Num rows: 14101347950 Data size: 5880459238192 Basic stats: COMPLETE Column stats: NONE
          File Output Operator
            compressed: true
            table:
                input format: org.apache.hadoop.mapred.SequenceFileInputFormat
                output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
                serde: org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe

  Stage: Stage-3
    Map Reduce
      Map Operator Tree:
          TableScan
            Reduce Output Operator
              sort order:
              Map-reduce partition columns: _col9 (type: string)
              Statistics: Num rows: 14101347950 Data size: 5880459238192 Basic stats: COMPLETE Column stats: NONE
              value expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string), _col3 (type: bigint), _col4 (type: string), _col5 (type: string), _col6 (type: date), _col7 (type: string), _col8 (type: string), _col9 (type: string)
      Reduce Operator Tree:
        Select Operator
          expressions: UDFToLong(VALUE._col0) (type: bigint), VALUE._col1 (type: string), VALUE._col2 (type: string), VALUE._col3 (type: bigint), VALUE._col4 (type: string), VALUE._col5 (type: string), VALUE._col6 (type: date), VALUE._col7 (type: string), VALUE._col8 (type: string), VALUE._col9 (type: string)
          outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9
          Statistics: Num rows: 14101347950 Data size: 5880459238192 Basic stats: COMPLETE Column stats: NONE
          File Output Operator
            compressed: false
            Statistics: Num rows: 14101347950 Data size: 5880459238192 Basic stats: COMPLETE Column stats: NONE
            table:
efzxgjgh:

Some suggestions:
Improve the filtering. Calculate the dates outside the script and, if possible, pass already-computed values. Functions in the predicate can prevent partition pruning; your plan even shows trans_dt BETWEEN null AND '${hiveconf:run.date}', apparently because DATE_SUB was folded to NULL, so the lower bound is not filtering at all. EXPLAIN DEPENDENCY provides details about which partitions are being read; use it to check whether partition pruning works. A sketch follows below.
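A minimal sketch of that suggestion against your query. The start.date variable and the shell date arithmetic are illustrative assumptions, not part of the original answer; the point is that both bounds arrive in Hive as plain literals:

    -- Hypothetical shell step, run before Hive:
    --   start_date=$(date -d "$run_date - 180 days" +%Y-%m-%d)
    --   hive --hiveconf run.date="$run_date" --hiveconf start.date="$start_date" -f query.hql

    -- The join filter now uses pre-computed literals instead of DATE_SUB:
    insert into table q6_cm_first_visit1 PARTITION (trans)
    select distinct
           b.se10, b.dealer_id, b.terminal_id, b.se,
           b.comp_start_n, b.comp_end_n, b.latest_date,
           b.cm, a.first_visit_date, b.trans
      from q6_cm_first_visit a
           inner join q6_cms_list_key1 b
           on b.trans BETWEEN '${hiveconf:start.date}' AND '${hiveconf:run.date}'
              and a.cm = b.cm and a.se = b.se;

    -- Running EXPLAIN DEPENDENCY on the same select shows which partitions
    -- are actually read, so you can confirm that pruning now works.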
Improve the join strategy. A sort merge bucket join or a sort merge bucket map join is worth trying. If both tables are bucketed and sorted by the same columns and have the same number of buckets (see the DDL sketch after the settings), you can try adding the following settings:

    set hive.auto.convert.sortmerge.join=true;
    set hive.optimize.bucketmapjoin = true;
    set hive.optimize.bucketmapjoin.sortedmerge = true;
    set hive.auto.convert.sortmerge.join.noconditionaltask=true;
    set hive.auto.convert.join.noconditionaltask.size = 10000000; --can be increased
    set hive.mapjoin.smalltable.filesize=10000000; --can be increased

The last two settings control how large a table may be to be placed in memory. I am not sure this is applicable for tables as big as yours, and setting the size too high may cause an OOM exception. But even without the map-join conversion, a sort-merge-bucket join should improve performance significantly. Check the plan after adding these settings: it should show a Sort Merge Bucket Join operator instead of the Join operator. See more details about all these settings here: LanguageManual JoinOptimization.
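For reference, a hedged sketch of the table layout that a sort merge bucket join requires: both sides clustered and sorted by the join keys into the same number of buckets. The bucket count (64), the column list, and the ORC format are illustrative assumptions, not taken from the question:

    -- Hypothetical rebuild of the smaller table; q6_cms_list_key1 would need
    -- the same CLUSTERED BY / SORTED BY columns and the same bucket count:
    create table q6_cm_first_visit_smb (
        cm string,
        se bigint,
        first_visit_date string
    )
    clustered by (cm, se) sorted by (cm, se) into 64 buckets
    stored as orc;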
Vectorization and Tez. You are running on MR without vectorization. You would benefit from Tez and vectorization; try these settings:

    set hive.execution.engine=tez;
    set hive.optimize.ppd=true;          --works for ORC
    set hive.vectorized.execution.enabled=true;
    set hive.vectorized.execution.reduce.enabled=true;

Final reducer improvement. If the output table is also partitioned, check whether adding distribute by <partition key> at the end improves performance. If the partition key is evenly distributed and not skewed, adding DISTRIBUTE BY should help. Try adding DISTRIBUTE BY trans at the end of your query, as sketched below.
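A sketch of that change; the subquery wrapper is one way to attach DISTRIBUTE BY to a SELECT DISTINCT, and everything else is your original query:

    insert into table q6_cm_first_visit1 PARTITION (trans)
    select t.*
      from (select distinct
                   b.se10, b.dealer_id, b.terminal_id, b.se,
                   b.comp_start_n, b.comp_end_n, b.latest_date,
                   b.cm, a.first_visit_date, b.trans
              from q6_cm_first_visit a
                   inner join q6_cms_list_key1 b
                   on b.trans BETWEEN DATE_SUB('${hiveconf:run.date}', 180) AND '${hiveconf:run.date}'
                      and a.cm = b.cm and a.se = b.se
           ) t
    DISTRIBUTE BY t.trans;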
Early aggregation before the join. For example, if the first_visit table contains multiple rows per join key and the join multiplies them, eliminate the duplicates with row_number() or distinct in a subquery before joining. This may improve performance, but it may also prevent the sort merge bucket join. A sketch follows.
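A hedged sketch of that pre-aggregation, assuming you want one row per (cm, se) from q6_cm_first_visit and that the earliest first_visit_date should win (the tie-break rule is an assumption):

    select b.se10, b.dealer_id, b.terminal_id, b.se,
           b.comp_start_n, b.comp_end_n, b.latest_date,
           b.cm, a.first_visit_date, b.trans
      from (select cm, se, first_visit_date,
                   -- keep only one row per join key before the join
                   row_number() over (partition by cm, se
                                      order by first_visit_date) as rn
              from q6_cm_first_visit) a
           inner join q6_cms_list_key1 b
           on a.cm = b.cm and a.se = b.se
              and b.trans BETWEEN DATE_SUB('${hiveconf:run.date}', 180) AND '${hiveconf:run.date}'
     where a.rn = 1;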
It is best to try each of these improvements separately to see how it affects performance, and then combine the ones that help.
