Limiting the number of files when using INSERT INTO in Hive SQL

x33g5p2x, posted 2021-06-02 in Hadoop

Every time I run INSERT INTO in Hive SQL, a new file is created in the table directory. How can I limit the number of files produced when using INSERT INTO?
I am worried that too many files in HDFS will eventually break the system.

hive> insert into table bi_st.st_usr_member_active_day
    > select * from bi_temp.zjy_ini_st_usr_member_active_day_temp88;
Query ID = root_20170209100404_5acdd3bf-071d-4178-aeff-b40d16499aac
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 2
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1484675879577_4078, Tracking URL = http://hadoopmaster:8088/proxy/application_1484675879577_4078/
Kill Command = /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop/bin/hadoop job  -kill job_1484675879577_4078
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 2
2017-02-09 10:04:41,247 Stage-1 map = 0%,  reduce = 0%
2017-02-09 10:04:47,425 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.17 sec
2017-02-09 10:04:53,598 Stage-1 map = 100%,  reduce = 50%, Cumulative CPU 3.02 sec
2017-02-09 10:04:57,727 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 4.81 sec
MapReduce Total cumulative CPU time: 4 seconds 810 msec
Ended Job = job_1484675879577_4078
Loading data to table bi_st.st_usr_member_active_day
Table bi_st.st_usr_member_active_day stats: [numFiles=8, numRows=548, totalSize=31267, rawDataSize=0]
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 2   Cumulative CPU: 4.81 sec   HDFS Read: 56745 HDFS Write: 10220 SUCCESS
Total MapReduce CPU Time Spent: 4 seconds 810 msec
OK

Answer 1 (5q4ezhmt):

Have a look at this article, it explains the options in detail: http://www.openkb.info/2014/12/how-to-control-file-numbers-of-hive.html
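
In short, the usual approaches are to enable Hive's small-file merge step and/or to cap the number of reducers. A minimal sketch of the relevant session settings is below; the property names are standard Hive/MapReduce settings, but the size thresholds are only illustrative values, so tune them to your cluster:

-- Merge small output files at the end of map-only and map-reduce jobs.
-- The byte values below are example numbers, not recommendations.
set hive.merge.mapfiles=true;
set hive.merge.mapredfiles=true;
set hive.merge.size.per.task=256000000;      -- target size of each merged file (bytes)
set hive.merge.smallfiles.avgsize=16000000;  -- merge when avg output file is smaller than this

-- Alternatively, cap the number of reducers so at most that many files are written per insert.
set mapreduce.job.reduces=1;

insert into table bi_st.st_usr_member_active_day
select * from bi_temp.zjy_ini_st_usr_member_active_day_temp88;

With the merge step enabled, Hive runs an extra lightweight job after the insert that concatenates the small reducer outputs, so numFiles in the table stats should grow much more slowly than one file per reducer per insert.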
