Hive execution engine on Spark: NoClassDefFoundError: SparkCounters

bbmckpt7, posted 2021-06-26 in Hive

I am trying to run a simple select count(*) on a table in Hive. However, I get an error after the state changes to STARTED.
Hadoop version: 2.6.2, Spark version: 1.6.2, Hive version: 2.0.1
Here is the full output:

hive> select count(*) from customers;
Query ID = hadoop_20170302215442_6493f265-3121-4eaf-b2d6-eea5b08ae591
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Spark Job = b22ec8e6-4b11-49fe-9fdd-001aa7248084
state = SENT
state = SENT
state = SENT
state = SENT
state = SENT
state = SENT
state = SENT
state = STARTED
state = STARTED
state = STARTED

Query Hive on Spark job[0] stages:
0
1

Status: Running (Hive on Spark job[0])
Job Progress Format
CurrentTime StageId_StageAttemptId: SucceededTasksCount(+RunningTasksCount-FailedTasksCount)/TotalTasksCount [StageCost]
2017-03-02 21:55:02,526 Stage-0_0: 0(+1)/1      Stage-1_0: 0/1
state = STARTED
2017-03-02 21:55:03,537 Stage-0_0: 0(+1,-1)/1   Stage-1_0: 0/1
state = FAILED
Status: Failed
FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask

In the YARN log file, I see the following error:

Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, slcag034.us.oracle.com): java.lang.NoClassDefFoundError: Lorg/apache/hive/spark/counter/SparkCounters;
at java.lang.Class.getDeclaredFields0(Native Method)
at java.lang.Class.privateGetDeclaredFields(Class.java:2583)

I have already set spark.driver.extraClassPath in hive-site.xml:

<property>
        <name>spark.driver.extraClassPath</name>
        <value>file:///***/***/binary/apache-hive-2.0.1-bin/lib/hive-exec-2.0.1.jar</value>
    </property>
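One thing worth checking: spark.driver.extraClassPath only puts the jar on the driver's classpath, while the NoClassDefFoundError above is raised in a lost executor task. A hedged sketch of the matching executor-side property, assuming the same hive-exec jar location (the truncated file:///***/*** path simply mirrors the one in the question and is not a real path):

<property>
        <name>spark.executor.extraClassPath</name>
        <value>file:///***/***/binary/apache-hive-2.0.1-bin/lib/hive-exec-2.0.1.jar</value>
    </property>

With this set, both the driver and the executors can resolve org.apache.hive.spark.counter.SparkCounters when the Spark job runs.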

Can anyone tell me what is going wrong?
Regards,
Bala
