Spark/Kafka integration: exception when submitting the Spark jar

rn0zuynd · posted 2021-05-29 · in Hadoop
Follow (0) | Answers (1) | Views (363)

I get a NullPointerException when submitting a jar that integrates Kafka with Spark on Ubuntu. I am trying to run the code from https://github.com/apache/spark/tree/v2.1.1/examples
I checked whether installing Spark on Ubuntu requires HADOOP_HOME to be set; HADOOP_HOME is not set, and I double-checked the arguments passed to the jar.

./bin/spark-submit --class "org.apache.spark.examples.streaming.JavaKafkaWordCount" --packages org.apache.spark:spark-streaming-kafka-0-10_2.11:2.1.0 --master local[*] --jars ~/software/JavaKafkaWordCount.jar localhost:2181 test-consumer-group streams-plaintext-input 1

org.apache.hadoop.fs.Path.getName(Path.java:337)
	at org.apache.spark.deploy.DependencyUtils$.downloadFile(DependencyUtils.scala:136)
	at org.apache.spark.deploy.SparkSubmit$$anonfun$prepareSubmitEnvironment$7.apply(SparkSubmit.scala:367)
	at org.apache.spark.deploy.SparkSubmit$$anonfun$prepareSubmitEnvironment$7.apply(SparkSubmit.scala:367)
	at scala.Option.map(Option.scala:146)
	at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:366)
	at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:143)
	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
	at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

9udxz4iz · Answer #1

The URI of your jar path cannot be resolved; see DependencyUtils.scala#L136:

/**
   * Download a file from the remote to a local temporary directory. If the input path points to
   * a local path, returns it with no operation.
   *
   * @param path A file path from where the files will be downloaded.
   * @param targetDir A temporary directory for which downloaded files.
   * @param sparkConf Spark configuration.
   * @param hadoopConf Hadoop configuration.
   * @param secMgr Spark security manager.
   * @return Path to the local file.
   */
  def downloadFile(
      path: String,
      targetDir: File,
      sparkConf: SparkConf,
      hadoopConf: Configuration,
      secMgr: SecurityManager): String = {
    require(path != null, "path cannot be null.")
    val uri = Utils.resolveURI(path)

    uri.getScheme match {
      case "file" | "local" => path
      case "http" | "https" | "ftp" if Utils.isTesting =>
        // This is only used for SparkSubmitSuite unit test. Instead of downloading file remotely,
        // return a dummy local path instead.
        val file = new File(uri.getPath)
        new File(targetDir, file.getName).toURI.toString
      case _ =>
        val fname = new Path(uri).getName()
        val localFile = Utils.doFetchFile(uri.toString(), targetDir, fname, sparkConf, secMgr,
          hadoopConf)
        localFile.toURI().toString()
    }
  }

In your spark-submit, pass the full path to the jar, i.e. --jars /fullpath/JavaKafkaWordCount.jar instead of --jars ~/software/JavaKafkaWordCount.jar
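The difference can be sketched with plain java.net.URI (a minimal illustration of the scheme check that downloadFile performs, not Spark's actual code; the class name UriCheck is made up):

```java
import java.io.File;
import java.net.URI;

public class UriCheck {
    public static void main(String[] args) throws Exception {
        // An unexpanded tilde path is a relative URI with no scheme:
        // the "file" | "local" fast path in downloadFile does not match it.
        URI tilde = new URI("~/software/JavaKafkaWordCount.jar");
        System.out.println("scheme = " + tilde.getScheme()); // null

        // An absolute path converts to a proper file: URI,
        // which downloadFile returns as-is with no download.
        URI absolute = new File("/fullpath/JavaKafkaWordCount.jar").toURI();
        System.out.println("scheme = " + absolute.getScheme()); // file
    }
}
```

With no scheme, the path falls through to the remote-download branch, where Hadoop's Path machinery cannot make sense of it; an absolute local path is returned untouched.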
