Hive: java.lang.OutOfMemoryError: Java heap space and jobs running in-process (local Hadoop)

xesrikrc · posted 2021-05-29 in Hadoop

My setup: a cluster of 4 nodes (1 master, 3 workers) on Google Cloud Platform, running NixOS Linux.
I have been using the TPC-DS toolkit to generate both data and queries. On smaller datasets / simpler queries they work just fine. The queries I am asking about are from here: https://github.com/hortonworks/hive-testbench/tree/hdp3/sample-queries-tpcds
This is the first one, query1.sql:

WITH customer_total_return AS 
( 
         SELECT   sr_customer_sk AS ctr_customer_sk , 
                  sr_store_sk    AS ctr_store_sk , 
                  Sum(sr_fee)    AS ctr_total_return 
         FROM     store_returns , 
                  date_dim 
         WHERE    sr_returned_date_sk = d_date_sk 
         AND      d_year =2000 
         GROUP BY sr_customer_sk , 
                  sr_store_sk) 
SELECT   c_customer_id 
FROM     customer_total_return ctr1 , 
         store , 
         customer 
WHERE    ctr1.ctr_total_return > 
         ( 
                SELECT Avg(ctr_total_return)*1.2 
                FROM   customer_total_return ctr2 
                WHERE  ctr1.ctr_store_sk = ctr2.ctr_store_sk) 
AND      s_store_sk = ctr1.ctr_store_sk 
AND      s_state = 'NM' 
AND      ctr1.ctr_customer_sk = c_customer_sk 
ORDER BY c_customer_id limit 100;

At first I had the problem of not being able to run this at all, hitting java.lang.OutOfMemoryError: Java heap space.
What I did:
Beefed up the GCP nodes (up to 7.5 GB of RAM and dual-core CPUs)
Set the following variables in the Hive CLI:

set mapreduce.map.memory.mb=2048;
set mapreduce.map.java.opts=-Xmx1024m;
set mapreduce.reduce.memory.mb=4096;
set mapreduce.reduce.java.opts=-Xmx3072m;
set mapred.child.java.opts=-Xmx1024m;
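
Running set with just the property name echoes the value Hive will actually use, so the overrides can be verified directly in the CLI:

set mapreduce.map.java.opts;
set mapreduce.reduce.java.opts;
-- each prints as name=value, e.g. mapreduce.reduce.java.opts=-Xmx3072m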

Restarted Hive
After that, this query worked on a 1 GB dataset (along with other similar queries). I monitored the run with htop: memory usage stays below 2 GB, while both CPU cores are at close to 100% practically the whole time.
The problem now is that with more complex queries and larger datasets the error comes back:
The query runs for a good minute or so, but ends in failure. The full trace:

hive> with customer_total_return as
    > (select sr_customer_sk as ctr_customer_sk
    > ,sr_store_sk as ctr_store_sk
    > ,sum(SR_FEE) as ctr_total_return
    > from store_returns
    > ,date_dim
    > where sr_returned_date_sk = d_date_sk
    > and d_year =2000
    > group by sr_customer_sk
    > ,sr_store_sk)
    >  select c_customer_id
    > from customer_total_return ctr1
    > ,store
    > ,customer
    > where ctr1.ctr_total_return > (select avg(ctr_total_return)*1.2
    > from customer_total_return ctr2
    > where ctr1.ctr_store_sk = ctr2.ctr_store_sk)
    > and s_store_sk = ctr1.ctr_store_sk
    > and s_state = 'TN'
    > and ctr1.ctr_customer_sk = c_customer_sk
    > order by c_customer_id
    > limit 100;
No Stats for default@store_returns, Columns: sr_returned_date_sk, sr_fee, sr_store_sk, sr_customer_sk
No Stats for default@date_dim, Columns: d_date_sk, d_year
No Stats for default@store, Columns: s_state, s_store_sk
No Stats for default@customer, Columns: c_customer_sk, c_customer_id
Query ID = root_20190811164854_c253c67c-ef94-4351-b4d3-74ede4c5d990
Total jobs = 14
Stage-29 is selected by condition resolver.
Stage-1 is filtered out by condition resolver.
Stage-30 is selected by condition resolver.
Stage-10 is filtered out by condition resolver.
SLF4J: Found binding in [jar:file:/nix/store/jjm6636r99r0irqa03dc1za9gs2b4fx6-source/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/nix/store/q9jpwzbqbg8k8322q785xfavg0p0v18i-hadoop-3.1.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
Execution completed successfully
MapredLocal task succeeded
SLF4J: Found binding in [jar:file:/nix/store/jjm6636r99r0irqa03dc1za9gs2b4fx6-source/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/nix/store/q9jpwzbqbg8k8322q785xfavg0p0v18i-hadoop-3.1.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Execution completed successfully
MapredLocal task succeeded
Launching Job 3 out of 14
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2019-08-11 16:49:19,415 Stage-20 map = 0%,  reduce = 0%
2019-08-11 16:49:22,418 Stage-20 map = 100%,  reduce = 0%
Ended Job = job_local404291246_0005
Launching Job 4 out of 14
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2019-08-11 16:49:24,718 Stage-22 map = 0%,  reduce = 0%
2019-08-11 16:49:27,721 Stage-22 map = 100%,  reduce = 0%
Ended Job = job_local566999875_0006
Launching Job 5 out of 14
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Job running in-process (local Hadoop)
2019-08-11 16:49:29,958 Stage-2 map = 0%,  reduce = 0%
2019-08-11 16:49:33,970 Stage-2 map = 100%,  reduce = 0%
2019-08-11 16:49:35,974 Stage-2 map = 100%,  reduce = 100%
Ended Job = job_local1440279093_0007
Launching Job 6 out of 14
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Job running in-process (local Hadoop)
2019-08-11 16:49:37,235 Stage-11 map = 0%,  reduce = 0%
2019-08-11 16:49:40,421 Stage-11 map = 100%,  reduce = 0%
2019-08-11 16:49:42,424 Stage-11 map = 100%,  reduce = 100%
Ended Job = job_local1508103541_0008
SLF4J: Found binding in [jar:file:/nix/store/jjm6636r99r0irqa03dc1za9gs2b4fx6-source/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/nix/store/q9jpwzbqbg8k8322q785xfavg0p0v18i-hadoop-3.1.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

2019-08-11 16:49:51 Dump the side-table for tag: 1 with group count: 21 into file: file:/tmp/root/3ab30b3b-380d-40f5-9f72-68788d998013/hive_2019-08-11_16-48-54_393_105456265244058313-1/-local-10019/HashTable-Stage-19/MapJoin-mapfile71--.hashtable
Execution completed successfully
MapredLocal task succeeded
Launching Job 7 out of 14
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2019-08-11 16:49:53,956 Stage-19 map = 100%,  reduce = 0%
Ended Job = job_local2121921517_0009
Stage-26 is filtered out by condition resolver.
Stage-27 is selected by condition resolver.
Stage-4 is filtered out by condition resolver.

2019-08-11 16:50:01 Dump the side-table for tag: 0 with group count: 99162 into file: file:/tmp/root/3ab30b3b-380d-40f5-9f72-68788d998013/hive_2019-08-11_16-48-54_393_105456265244058313-1/-local-10017/HashTable-Stage-17/MapJoin-mapfile60--.hashtable
2019-08-11 16:50:02 Uploaded 1 File to: file:/tmp/root/3ab30b3b-380d-40f5-9f72-68788d998013/hive_2019-08-11_16-48-54_393_105456265244058313-1/-local-10017/HashTable-Stage-17/MapJoin-mapfile60--.hashtable (2832042 bytes)
Execution completed successfully
MapredLocal task succeeded
Launching Job 9 out of 14
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2019-08-11 16:50:04,004 Stage-17 map = 0%,  reduce = 0%
2019-08-11 16:50:05,005 Stage-17 map = 100%,  reduce = 0%
Ended Job = job_local694362009_0010
Stage-24 is selected by condition resolver.
Stage-25 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.

SLF4J: Found binding in [jar:file:/nix/store/q9jpwzbqbg8k8322q785xfavg0p0v18i-hadoop-3.1.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
2019-08-11 16:50:12 Starting to launch local task to process map join;  maximum memory = 239075328
Execution completed successfully
MapredLocal task succeeded
Launching Job 11 out of 14
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2019-08-11 16:50:14,254 Stage-13 map = 100%,  reduce = 0%
Ended Job = job_local1812693452_0011
Launching Job 12 out of 14
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Job running in-process (local Hadoop)
2019-08-11 16:50:15,481 Stage-6 map = 0%,  reduce = 0%
Ended Job = job_local920309638_0012 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched: 
Stage-Stage-20:  HDFS Read: 8662606197 HDFS Write: 0 SUCCESS
Stage-Stage-22:  HDFS Read: 9339349675 HDFS Write: 0 SUCCESS
Stage-Stage-2:  HDFS Read: 9409277766 HDFS Write: 0 SUCCESS
Stage-Stage-11:  HDFS Read: 9409277766 HDFS Write: 0 SUCCESS
Stage-Stage-19:  HDFS Read: 4704638883 HDFS Write: 0 SUCCESS
Stage-Stage-17:  HDFS Read: 4771516428 HDFS Write: 0 SUCCESS
Stage-Stage-13:  HDFS Read: 4771516428 HDFS Write: 0 SUCCESS
Stage-Stage-6:  HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec

The problem in the hive.log file is still the same:

java.lang.Exception: java.lang.OutOfMemoryError: Java heap space

I realized that my worker nodes are actually doing nothing (htop shows them idle while only the master node is working), and it is even visible in the trace:

Job running in-process (local Hadoop)

How do I get Hive to use HDFS and not just local Hadoop? Running hdfs dfs -df -h hdfs://<redacted>:9000/ returns

Filesystem                   Size    Used  Available  Use%
hdfs://<redacted>:9000  88.5 G  34.3 G     35.2 G   39%

Which is correct, since I have 3 worker nodes with a 30 GB disk each.
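
Hive prints Job running in-process (local Hadoop) whenever MapReduce uses the local job runner, i.e. when mapreduce.framework.name resolves to local instead of yarn, so presumably that is the setting to look at (assuming the standard Hadoop 3.x property name):

set mapreduce.framework.name;
-- "local" means the LocalJobRunner on this machine; "yarn" submits jobs to the cluster
set mapreduce.framework.name=yarn;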

Answer 1 (unhi4e5o)

java.lang.OutOfMemoryError: Java heap space happens when you try to push too much data through a single machine.
Given the query you provided, here are a few things you can try:
Change the join conditions to explicit ones (drop the WHERE-clause joins and use INNER/LEFT JOINs). For example:

FROM     customer_total_return ctr1 
         INNER JOIN store s
             ON ctr1.ctr_store_sk = s.s_store_sk
                AND s_state = 'NM'
         INNER JOIN customer c
             ON ctr1.ctr_customer_sk = c.c_customer_sk

Check whether you have skewed data in one of these columns:
store_returns -> sr_returned_date_sk
store_returns -> sr_store_sk
store_returns -> sr_customer_sk
customer -> c_customer_sk
store -> s_store_sk
One of those keys may hold a large share of the values, which can overload a single node (once the data volume is big enough).
Basically, you are trying to rule out the possible causes of node overload; a quick skew check is sketched below.
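
A minimal sketch for spotting skew on one of those keys (sr_store_sk shown here; swap in the other columns as needed):

SELECT   sr_store_sk , 
         Count(*) AS cnt 
FROM     store_returns 
GROUP BY sr_store_sk 
ORDER BY cnt DESC limit 100;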
Let me know if this helps.

Answer 2 (hivapdat)

It might be a resource problem. Hive queries are executed internally as MapReduce jobs. You can check the Hive MapReduce job history logs for the jobs that failed. Sometimes executing a query from the shell is also faster than from a Hive query editor.

Answer 3 (ac1kyiln)

OOM issues are usually tied to query performance.
There are two parts to this query:
Part 1:

WITH customer_total_return AS 
( 
         SELECT   sr_customer_sk AS ctr_customer_sk , 
                  sr_store_sk    AS ctr_store_sk , 
                  Sum(sr_fee)    AS ctr_total_return 
         FROM     store_returns , 
                  date_dim 
         WHERE    sr_returned_date_sk = d_date_sk 
         AND      d_year =2000 
         GROUP BY sr_customer_sk , 
                  sr_store_sk)

Part 2:

SELECT   c_customer_id 
FROM     customer_total_return ctr1 , 
         store , 
         customer 
WHERE    ctr1.ctr_total_return > 
         ( 
                SELECT Avg(ctr_total_return)*1.2 
                FROM   customer_total_return ctr2 
                WHERE  ctr1.ctr_store_sk = ctr2.ctr_store_sk) 
AND      s_store_sk = ctr1.ctr_store_sk 
AND      s_state = 'NM' 
AND      ctr1.ctr_customer_sk = c_customer_sk 
ORDER BY c_customer_id limit 100;

Try enabling JMX for the Hive cluster
and look at the memory usage for both parts of the query, as well as for the inner query of part 2.
Some Hive optimizations you can try on the above query:
Use SORT BY instead of ORDER BY -> the SORT BY clause sorts the data within each reducer only.
Partition the tables on the join keys, so that only the relevant partitions are read instead of the whole table being scanned.
Cache small Hive tables in the distributed cache and use a map-side join to cut down on shuffling, e.g.: select /*+MAPJOIN(b)*/ col1,col2,col3,col4 from table_A a join table_B b on a.account_number=b.account_number
If there is any chance of data skew in one of the tables, use these parameters: set hive.optimize.skewjoin=true; set hive.skewjoin.key=100000; (the per-key row count above which a key is treated as skewed), as sketched below.
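
A minimal sketch of those settings in a Hive session (the values are the usual defaults, not tuned for this cluster; hive.auto.convert.join and hive.mapjoin.smalltable.filesize are included here as the standard companions for map-side joins):

-- automatically convert joins against small tables into map-side joins
set hive.auto.convert.join=true;
-- maximum size (in bytes) of a table to qualify as the "small" side of a map join
set hive.mapjoin.smalltable.filesize=25000000;
-- process heavily skewed join keys in a separate follow-up job
set hive.optimize.skewjoin=true;
-- number of rows per join key above which the key is treated as skewed
set hive.skewjoin.key=100000;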
