Scala: interpreting a NoSuchMethodError


I ran into this while running code that differs from what was present at compile time:

java.lang.NoSuchMethodError: 'boolean org.apache.spark.sql.catalyst.plans.physical.ClusteredDistribution$.apply$default$2()'

The class at compile time:

case class ClusteredDistribution(
    clustering: Seq[Expression],
    requiredNumPartitions: Option[Int] = None) extends Distribution {
  require(
    clustering != Nil,
    "The clustering expressions of a ClusteredDistribution should not be Nil. " +
      "An AllTuples should be used to represent a distribution that only has " +
      "a single partition.")
}

The class at runtime:

case class ClusteredDistribution(
    clustering: Seq[Expression],
    requireAllClusterKeys: Boolean = SQLConf.get.getConf(
      SQLConf.REQUIRE_ALL_CLUSTER_KEYS_FOR_DISTRIBUTION),
    requiredNumPartitions: Option[Int] = None) extends Distribution {
  require(
    clustering != Nil,
    "The clustering expressions of a ClusteredDistribution should not be Nil. " +
      "An AllTuples should be used to represent a distribution that only has " +
      "a single partition.")
}

Is this exception complaining about the default value SQLConf.get.getConf(SQLConf.REQUIRE_ALL_CLUSTER_KEYS_FOR_DISTRIBUTION) of the parameter requireAllClusterKeys, which does not exist in the compile-time class? I assume that is the case, because otherwise I don't see how the boolean in the exception would be relevant. Essentially, I think what is happening is that this case class is instantiated at runtime with a boolean parameter, but the class in the compiled jar has no boolean parameter, so the method being called cannot be found.
I have been reading other posts about interpreting these exceptions, such as this post, but mine is different in that the name starts with boolean, and I am not sure how to interpret .apply$default$2() (I assume it is just complaining about a default parameter of the apply method).


xhv8bpkk1#

The javadoc of NoSuchMethodError says:
Thrown if an application tries to call a specified method of a class (either static or instance), and that class no longer has a definition of that method.
Normally, this error is caught by the compiler; this error can only occur at run time if the definition of a class has incompatibly changed.
Based on the details in the question, my guess is that you were able to compile the project because the correct dependency was set at compile time, but at runtime you are using another version.
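
Before comparing the two definitions, a quick way to confirm which jar actually provides the class at run time is to ask the JVM where it loaded it from. This is only a sketch using standard reflection (getProtectionDomain / getCodeSource); run it in the same JVM or driver that throws the error:

// Prints the jar (or directory) the runtime class was loaded from,
// e.g. .../spark-catalyst_2.12-3.4.1.jar
println(
  classOf[org.apache.spark.sql.catalyst.plans.physical.ClusteredDistribution]
    .getProtectionDomain.getCodeSource.getLocation)

With that confirmed, the two definitions in play are: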

  • spark-sql 3.2.4: ClusteredDistribution (similar to the compile-time class)
case class ClusteredDistribution(
    clustering: Seq[Expression],
    requiredNumPartitions: Option[Int] = None) extends Distribution {
  require(
    clustering != Nil,
    "The clustering expressions of a ClusteredDistribution should not be Nil. " +
      "An AllTuples should be used to represent a distribution that only has " +
      "a single partition.")
  • spark-sql 3.4.1: ClusteredDistribution (similar to the runtime class)
case class ClusteredDistribution(
    clustering: Seq[Expression],
    requireAllClusterKeys: Boolean = SQLConf.get.getConf(
      SQLConf.REQUIRE_ALL_CLUSTER_KEYS_FOR_DISTRIBUTION),
    requiredNumPartitions: Option[Int] = None) extends Distribution {
  require(
    clustering != Nil,
    "The clustering expressions of a ClusteredDistribution should not be Nil. " +
      "An AllTuples should be used to represent a distribution that only has " +
      "a single partition.")

As you can see, the constructors of the two case classes are different; their signatures do not match. So if you compile the project against one version and at run time another version with a different signature is on the classpath, you can end up with this type of error.
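
To make the mismatch concrete, here is a minimal sketch of how such a setup typically arises in an sbt build: spark-sql is marked Provided, so the project compiles against one version while whatever version the cluster ships ends up on the runtime classpath (the version numbers below are purely illustrative and assume the compile-time/runtime split described in the question):

// build.sbt (sketch)
// Compiled against 3.2.4 here; because the dependency is Provided it is not
// packaged with the application, so a cluster that ships, say, 3.4.1 silently
// substitutes its own classes underneath the already-compiled bytecode.
scalaVersion := "2.12.18"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "3.2.4" % Provided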
You want to understand the error message, which says:

java.lang.NoSuchMethodError: 'boolean org.apache.spark.sql.catalyst.plans.physical.ClusteredDistribution$.apply$default$2()'

Let's try to produce the same error message. To do that, we can create a project named compile with two files:

  • Main.scala
object Main extends App {
  new ClusteredDistribution(Seq("Hello"))
}
  • ClusteredDistribution.scala (similar to the class present at compile time)
class ClusteredDistribution(
  clustering: Seq[String],
  requireAllClusterKeys: Boolean = true,
  requiredNumPartitions: Option[Int] = None
)

Then compile the project with scalac:

scalac *.scala

Then create another project named runtime, containing only the *.class files generated in the compile project:

cp compile/*.class runtime/

In the runtime project, create a new ClusteredDistribution.scala with the following code:

class ClusteredDistribution(
  clustering: Seq[String],
  requiredNumPartitions: Option[Int] = None
)

Compile only this class:

scalac *.scala

Then run the Main class:

scala Main

and you get a similar error message:

java.lang.NoSuchMethodError: 'boolean ClusteredDistribution$.$lessinit$greater$default$2()'
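
Note that the reproduced message says $lessinit$greater$default$2 while the one in the question says apply$default$2. That is because the sketch above uses a plain class constructed with new, so the missing default getter belongs to the constructor (<init>, encoded as $lessinit$greater in JVM-safe form), whereas ClusteredDistribution in Spark is a case class that is normally built through the companion's apply method, whose default getters are named apply$default$N. A variation of the experiment that should reproduce the apply$default$2 form (a sketch, using the same compile/copy/run steps as above):

  • Main.scala (case-class variant, no new)
object Main extends App {
  ClusteredDistribution(Seq("Hello"))
}
  • ClusteredDistribution.scala in the compile project (now a case class)
case class ClusteredDistribution(
  clustering: Seq[String],
  requireAllClusterKeys: Boolean = true,
  requiredNumPartitions: Option[Int] = None
)

Running Main against the two-parameter case class in the runtime project should then fail with something like java.lang.NoSuchMethodError: 'boolean ClusteredDistribution$.apply$default$2()', matching the shape of the error in the question.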

Regarding How can I see in what [Java/Scala?] code does Scala compiler rewrites original Scala-code:
Compile it with "scalac -print" and you will get the following Scala code.
If we compile the files we created in the previous steps with that flag, this is what we get:

  • compile: <init>$default$2(): Boolean
[[syntax trees at end of                   cleanup]] // ClusteredDistribution.scala
package <empty> {
  class ClusteredDistribution extends Object {
    def <init>(clustering: Seq, requireAllClusterKeys: Boolean, requiredNumPartitions: Option): ClusteredDistribution = {
      ClusteredDistribution.super.<init>();
      ()
    }
  };
  <synthetic> object ClusteredDistribution extends Object {
    <synthetic> def <init>$default$2(): Boolean = true;
    <synthetic> def <init>$default$3(): Option = scala.None;
    def <init>(): ClusteredDistribution.type = {
      ClusteredDistribution.super.<init>();
      ()
    }
  }
}
  • runtime: <init>$default$2(): Option
[[syntax trees at end of                   cleanup]] // ClusteredDistribution.scala
package <empty> {
  class ClusteredDistribution extends Object {
    def <init>(clustering: Seq, requiredNumPartitions: Option): ClusteredDistribution = {
      ClusteredDistribution.super.<init>();
      ()
    }
  };
  <synthetic> object ClusteredDistribution extends Object {
    <synthetic> def <init>$default$2(): Option = scala.None;
    def <init>(): ClusteredDistribution.type = {
      ClusteredDistribution.super.<init>();
      ()
    }
  }
}

From there, we can observe that the number in $default$2 corresponds to the position of the parameter that carries the default value. You can also see that $default$3 in the compile project belongs to the third parameter of the class.
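
If you prefer to look at the compiled bytecode rather than the -print output, javap on the companion object's class file shows the same synthetic default getters together with their return types. A sketch (the exact output formatting depends on the JDK):

javap -p 'ClusteredDistribution$.class'

which, for the compile project, includes lines roughly like

public boolean $lessinit$greater$default$2();
public scala.Option<java.lang.Object> $lessinit$greater$default$3();

The return type is part of the JVM method descriptor, which is why the error message spells out boolean: the caller's bytecode asks for a method with exactly that name and return type, and the class found at run time only offers one returning Option.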
You can see that in http4s 0.23.23, in order to keep binary compatibility with http4s 0.23.12, they did the following in the Multipart companion object:

object Multipart {
  @deprecated("Retaining for binary-compatibility", "0.23.12")
  def `<init>$default$2`: String = apply$default$2
  @deprecated("Retaining for binary-compatibility", "0.23.12")
  def apply$default$2: String = Boundary.unsafeCreate().value

  @deprecated(
    "Creating a boundary is an effect.  Use Multiparts.multipart to generate an F[Multipart[F]], or call the two-parameter apply with your own boundary.",
    "0.23.12",
  )
  def apply[F[_]](parts: Vector[Part[F]]) = new Multipart(parts)
}
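
The point of explicitly retaining <init>$default$2 and apply$default$2 is that bytecode compiled against the older release references those synthetic names directly; keeping methods with the same name and signature in later releases lets old callers keep resolving at run time even though the source-level API has moved on.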
