Spark job fails with java.lang.NoSuchMethodException: org.apache.hadoop.hive.ql.metadata.Hive.loadDynamicPartitions

nkoocmlb · published 2021-06-28 in Hive

I cannot run a Spark job through spark-submit because of the following error:

16/11/16 11:41:12 ERROR yarn.ApplicationMaster: User class threw exception: java.lang.NoSuchMethodException: org.apache.hadoop.hive.ql.metadata.Hive.loadDynamicPartitions(org.apache.hadoop.fs.Path, java.lang.String, java.util.Map, boolean, int, boolean, boolean, boolean)
java.lang.NoSuchMethodException: org.apache.hadoop.hive.ql.metadata.Hive.loadDynamicPartitions(org.apache.hadoop.fs.Path, java.lang.String, java.util.Map, boolean, int, boolean, boolean, boolean)
at java.lang.Class.getMethod(Class.java:1786)
at org.apache.spark.sql.hive.client.Shim.findMethod(HiveShim.scala:114)
at org.apache.spark.sql.hive.client.Shim_v0_14.loadDynamicPartitionsMethod$lzycompute(HiveShim.scala:404)
at org.apache.spark.sql.hive.client.Shim_v0_14.loadDynamicPartitionsMethod(HiveShim.scala:403)
at org.apache.spark.sql.hive.client.Shim_v0_14.loadDynamicPartitions(HiveShim.scala:455)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$loadDynamicPartitions$1.apply$mcV$sp(ClientWrapper.scala:562)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$loadDynamicPartitions$1.apply(ClientWrapper.scala:562)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$loadDynamicPartitions$1.apply(ClientWrapper.scala:562)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$withHiveState$1.apply(ClientWrapper.scala:281)
at org.apache.spark.sql.hive.client.ClientWrapper.liftedTree1$1(ClientWrapper.scala:228)
at org.apache.spark.sql.hive.client.ClientWrapper.retryLocked(ClientWrapper.scala:227)
at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:270)
...

I'm using Spark 1.6.0 with Scala 2.10 and Hive 1.1.0; the platform is CDH 5.7.1, which ships those same Spark and Hive versions. The hive-exec jar passed to the Spark job on the classpath is hive-exec-1.1.0-cdh5.7.1.jar. This jar contains the class org.apache.hadoop.hive.ql.metadata.Hive, in which I can see the following method:

public java.util.Map<java.util.Map<java.lang.String, java.lang.String>, org.apache.hadoop.hive.ql.metadata.Partition> loadDynamicPartitions(org.apache.hadoop.fs.Path, java.lang.String, java.util.Map<java.lang.String, java.lang.String>, boolean, int, boolean, boolean, boolean) throws org.apache.hadoop.hive.ql.metadata.HiveException;

This does not match what the org.apache.spark.sql.hive.client.ClientWrapper class, shipped with the spark-hive_2.10-1.6.0.jar library I'm using, expects: that class resolves the method through org.apache.spark.sql.hive.client.HiveShim, which looks it up like this:

private lazy val loadDynamicPartitionsMethod =
findMethod(
  classOf[Hive],
  "loadDynamicPartitions",
  classOf[Path],
  classOf[String],
  classOf[JMap[String, String]],
  JBoolean.TYPE,
  JInteger.TYPE,
  JBoolean.TYPE,
  JBoolean.TYPE)
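
For reference, the loadDynamicPartitions overloads actually visible at runtime can be listed with a small reflection check, e.g. from spark-shell on the cluster. This is just a sketch; it only assumes the hive-exec jar is on the classpath, and it mirrors what Shim.findMethod resolves against:

// Sketch: print every loadDynamicPartitions overload visible at runtime.
val hiveClass = Class.forName("org.apache.hadoop.hive.ql.metadata.Hive")
hiveClass.getMethods
  .filter(_.getName == "loadDynamicPartitions")
  .foreach(m => println(m.toGenericString))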

I also checked the history of the hive-exec jar, and it seems that the signature of this method in org.apache.hadoop.hive.ql.metadata.Hive changed after version 1.0.0. I'm new to Spark, but it looks to me as if the spark-hive library is built against a different Hive implementation (in the META-INF/DEPENDENCIES file inside the jar I can see that it declares a dependency on org.spark-project.hive:hive-exec:jar:1.2.1.spark). Does anyone know how to set up the Spark job so that it uses the proper Hive library?

lf5gs5x2 · Answer #1

Make sure you have the following Hive settings enabled:

SET hive.exec.dynamic.partition=true; 
SET hive.exec.max.dynamic.partitions=2048;
SET hive.exec.dynamic.partition.mode=nonstrict;

In Spark you can set them on the HiveContext like this:

hiveCtx.setConf("hive.exec.dynamic.partition","true")
hiveCtx.setConf("hive.exec.max.dynamic.partitions","2048")
hiveCtx.setConf("hive.exec.dynamic.partition.mode", "nonstrict")

If the problem persists, my guess is that the Spark version you build against does not match the environment where you run spark-submit. Try running your program in spark-shell; if it works there, align the Spark version in your build with the environment.
You can declare the dependencies in your sbt build (or in your POM) as follows:

libraryDependencies += "org.apache.spark" % "spark-core_2.10" % "1.6.3"
libraryDependencies += "org.apache.spark" % "spark-sql_2.10" % "1.6.3"
libraryDependencies += "org.apache.spark" % "spark-hive_2.10" % "1.6.3"
libraryDependencies += "org.apache.hive" % "hive-exec" % "1.1.0"

Refer to https://mvnrepository.com/artifact/org.apache.spark for the available versions.
You can run SPARK_PRINT_LAUNCH_COMMAND=true spark-shell to print the launch command and see exactly which jars and settings the environment uses.
Another approach is to let Spark partition the data itself when writing, which does not go through Hive's loadDynamicPartitions:

dataframe.write.mode("overwrite").partitionBy("col1", "col2").json("//path")
