Databricks spark_jar_task fails when submitted via the API

c6ubokkw, posted 2021-05-27 in Spark

I am submitting a sample spark_jar_task through the Databricks Jobs API.
My sample spark_jar_task request computes Pi (a sketch of the full submission payload follows this fragment):

"libraries": [
    {
      "jar": "dbfs:/mnt/test-prd-foundational-projects1/spark-examples_2.11-2.4.5.jar"
    }
  ],
  "spark_jar_task": {
    "main_class_name": "org.apache.spark.examples.SparkPi"
  }
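
For reference, a minimal sketch of what the full one-off submission payload for POST /api/2.0/jobs/runs/submit might look like, wrapping the fragment above. The run_name, node_type_id, and num_workers values are illustrative placeholders; spark_version matches the cluster_spec visible in the response below.

{
  "run_name": "spark-pi-example",
  "new_cluster": {
    "spark_version": "6.4.x-scala2.11",
    "node_type_id": "<node-type-id>",
    "num_workers": 1
  },
  "libraries": [
    {
      "jar": "dbfs:/mnt/test-prd-foundational-projects1/spark-examples_2.11-2.4.5.jar"
    }
  ],
  "spark_jar_task": {
    "main_class_name": "org.apache.spark.examples.SparkPi"
  }
}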

The Databricks stdout log prints the Pi value as expected:

....
(This session will block until Rserve is shut down) Spark package found in SPARK_HOME: /databricks/spark DATABRICKS_STDOUT_END-19fc0fbc-b643-4801-b87c-9d22b9e01cd2-1589148096455 
Executing command, time = 1589148103046. 
Executing command, time = 1589148115170. 
Pi is roughly 3.1370956854784273 
Heap
.....

Although the spark_jar_task prints the Pi value in the logs, the job terminates with a FAILED result state and an empty state_message. Below is the response of /api/2.0/jobs/runs/list?job_id=23 (a curl example follows the response).

{
    "runs": [
        {
            "job_id": 23,
            "run_id": 23,
            "number_in_job": 1,
            "state": {
                "life_cycle_state": "TERMINATED",
                "result_state": "FAILED",
                "state_message": ""
            },
            "task": {
                "spark_jar_task": {
                    "jar_uri": "",
                    "main_class_name": "org.apache.spark.examples.SparkPi",
                    "run_as_repl": true
                }
            },
            "cluster_spec": {
                "new_cluster": {
                    "spark_version": "6.4.x-scala2.11",
......
.......

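For reference, the status query above can be issued like this; the workspace URL and token are placeholders:

curl -H "Authorization: Bearer <personal-access-token>" \
  "https://<databricks-instance>/api/2.0/jobs/runs/list?job_id=23"
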
Why is the job failing here? Any suggestions would be greatly appreciated!

EDIT: the error log shows:

20/05/11 18:24:15 INFO ProgressReporter$: Removed result fetcher for 740457789401555410_9000204515761834296_job-34-run-1-action-34
20/05/11 18:24:15 WARN ScalaDriverWrapper: Spark is detected to be down after running a command
20/05/11 18:24:15 WARN ScalaDriverWrapper: Fatal exception (spark down) in ReplId-a46a2-6fb47-361d2
com.databricks.backend.common.rpc.SparkStoppedException: Spark down: 
    at com.databricks.backend.daemon.driver.DriverWrapper.getCommandOutputAndError(DriverWrapper.scala:493)
    at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:597)
    at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:390)
    at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:337)
    at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:219)
    at java.lang.Thread.run(Thread.java:748)
20/05/11 18:24:17 INFO ShutdownHookManager: Shutdown hook called
Answer 1, from cu6pst1q:

I found the answer in this thread: https://github.com/dotnet/spark/issues/126. It seems we should not deliberately call

spark.stop()

when running as a JAR in Databricks. This matches the error log above: after the command finishes, the Databricks driver wrapper detects that Spark is down (SparkStoppedException) and marks the run as FAILED, even though the program itself completed. A sketch of an entry point written this way follows.
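
A minimal sketch, assuming a SparkPi-style job; the object name and sample count are illustrative. The point is SparkSession.builder.getOrCreate() plus the deliberate absence of spark.stop():

import org.apache.spark.sql.SparkSession

object SparkPiJob {
  def main(args: Array[String]): Unit = {
    // Reuse the SparkSession the Databricks cluster already created,
    // rather than building and tearing down our own.
    val spark = SparkSession.builder.getOrCreate()

    // Monte Carlo estimate of Pi, as in org.apache.spark.examples.SparkPi.
    val n = 100000
    val count = spark.sparkContext
      .parallelize(1 to n)
      .filter { _ =>
        val x = math.random * 2 - 1
        val y = math.random * 2 - 1
        x * x + y * y <= 1
      }
      .count()

    println(s"Pi is roughly ${4.0 * count / n}")

    // Deliberately no spark.stop() here: stopping the shared context is
    // what triggers the "Spark down" failure in the Databricks wrapper.
  }
}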
