Task failed

jaxagkaj · posted 2021-06-21 in Mesos
Follow (0) | Answers (2) | Views (229)

I am trying to run a Spark application on a Mesos cluster with one master node and one slave node. The slave has 8 GB of RAM allocated to Mesos. The master is running the Spark Mesos dispatcher.
I submit the Spark application (a streaming application) with the following command:

spark-submit --master mesos://mesos-master:7077 --class com.verifone.media.ums.scheduling.spark.SparkBootstrapper --deploy-mode cluster scheduling-spark-0.5.jar

I see the following output, which shows that the submission succeeded:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/09/01 12:52:38 INFO RestSubmissionClient: Submitting a request to launch an application in mesos://mesos-master:7077.
15/09/01 12:52:39 INFO RestSubmissionClient: Submission successfully created as driver-20150901072239-0002. Polling submission state...
15/09/01 12:52:39 INFO RestSubmissionClient: Submitting a request for the status of submission driver-20150901072239-0002 in mesos://mesos-master:7077.
15/09/01 12:52:39 INFO RestSubmissionClient: State of driver driver-20150901072239-0002 is now QUEUED.
15/09/01 12:52:40 INFO RestSubmissionClient: Server responded with CreateSubmissionResponse:
{
  "action" : "CreateSubmissionResponse",
  "serverSparkVersion" : "1.4.1",
  "submissionId" : "driver-20150901072239-0002",
  "success" : true
}

However, the task fails in Mesos, and when I look at the Spark cluster UI I see the following message:

task_id { value: "driver-20150901070957-0001" } state: TASK_FAILED message: "" slave_id { value: "20150831-082639-167881920-5050-4116-S6" } timestamp: 1.441091399975446E9 source: SOURCE_SLAVE reason: REASON_MEMORY_LIMIT 11: "\305-^E\377)N\327\277\361:\351\fm\215\312"

It seems to be memory-related, but I am not sure whether something has to be configured here to make it work.

Update
I looked at the Mesos logs on the slave and saw the following message:

E0901 07:56:26.086618  1284 fetcher.cpp:515] Failed to run mesos-fetcher: Failed to fetch all URIs for container '33183181-e91b-4012-9e21-baa37485e755' with exit status: 256

So I thought this might be caused by the Spark executor URI. I modified the spark-submit command as follows and also increased the driver and executor memory, but I still see the same error.

spark-submit \
    --master mesos://mesos-master:7077 \
    --class com.verifone.media.ums.scheduling.spark.SparkBootstrapper \
    --deploy-mode cluster \
    --driver-memory 1G \
    --executor-memory 4G \
    --conf spark.executor.uri=http://d3kbcqa49mib13.cloudfront.net/spark-1.4.1-bin-hadoop2.6.tgz \
    scheduling-spark-0.5.jar
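The `Failed to fetch all URIs` error above usually means the Mesos fetcher on the agent could not download one of the task's URIs. A quick sanity check (a sketch, assuming `curl` is available on the slave host) is to verify that the `spark.executor.uri` tarball is actually reachable from that machine:

```shell
# Run this on the Mesos slave host. The URL is the spark.executor.uri
# value passed to spark-submit above; -f makes curl exit non-zero on
# HTTP errors, -s silences progress, -I requests headers only.
curl -fsI http://d3kbcqa49mib13.cloudfront.net/spark-1.4.1-bin-hadoop2.6.tgz \
  && echo "executor URI reachable" \
  || echo "executor URI NOT reachable - the fetcher will fail"
```

If the URI is unreachable (DNS, proxy, firewall), the fetcher fails with exit status 256 exactly as shown in the slave log.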

Update 2
Following @hartem's suggestion (see comments), I got past this point. The task now runs, but the actual Spark application does not run in the cluster. When I look at the logs I see the following; after the last line, Spark does not seem to proceed any further.

15/09/01 10:33:41 INFO SparkContext: Added JAR file:/tmp/mesos/slaves/20150831-082639-167881920-5050-4116-S8/frameworks/20150831-082639-167881920-5050-4116-0004/executors/driver-20150901103327-0002/runs/47339c12-fb78-43d6-bc8a-958dd94d0ccf/spark-1.4.1-bin-hadoop2.6/../scheduling-spark-0.5.jar at http://192.172.1.31:33666/jars/scheduling-spark-0.5.jar with timestamp 1441103621639
I0901 10:33:41.728466  4375 sched.cpp:157] Version: 0.23.0
I0901 10:33:41.730764  4383 sched.cpp:254] New master detected at master@192.172.1.10:7077
I0901 10:33:41.730908  4383 sched.cpp:264] No credentials provided. Attempting to register without authentication
Answer 1 · tsm1rwdh

Could you export GLOG_v=1 before starting the slave and see if there is anything interesting in the slave log? I would also look for the stdout and stderr files under the slave working directory and see whether they contain any clues.
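The steps this answer suggests can be sketched as follows (the `--work_dir` path and the sandbox layout are illustrative; adjust them to your installation):

```shell
# Enable verbose glog output for the Mesos slave, then restart it.
export GLOG_v=1
mesos-slave --master=mesos-master:5050 --work_dir=/var/lib/mesos

# After the task fails, inspect the sandbox of the latest run for clues.
# The sandbox path mirrors the one visible in the SparkContext log above:
# <work_dir>/slaves/<slave-id>/frameworks/<fw-id>/executors/<task-id>/runs/latest/
cat /var/lib/mesos/slaves/*/frameworks/*/executors/*/runs/latest/stdout
cat /var/lib/mesos/slaves/*/frameworks/*/executors/*/runs/latest/stderr
```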

Answer 2 · kpbpu008

I ran into a similar problem: the slave could not find the jar needed to run the class file (SparkPi). So I provided an HTTP URL of the jar and it worked; in cluster mode the jar has to be placed somewhere accessible to the whole cluster, not on the local filesystem.

/home/centos/spark-1.6.1-bin-hadoop2.6/bin/spark-submit \
  --name SparkPiTestApp \
  --class org.apache.spark.examples.SparkPi \
  --master mesos://xxxxxxx:7077 \
  --deploy-mode cluster \
  --executor-memory 5G --total-executor-cores 30 \
  http://downloads.mesosphere.com.s3.amazonaws.com/assets/spark/spark-examples_2.10-1.4.0-SNAPSHOT.jar 100
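If you do not have HDFS or S3 handy, one quick way to make an application jar cluster-accessible, as this answer requires, is to serve it over plain HTTP from the submitting machine (a sketch; the directory and port are arbitrary assumptions):

```shell
# Serve the directory containing the application jar on port 8000.
# Every Mesos agent must be able to reach this host on that port.
cd /path/to/jars            # hypothetical directory holding the jar
python3 -m http.server 8000 # simple static HTTP server from the Python stdlib
```

You would then pass `http://<this-host>:8000/scheduling-spark-0.5.jar` as the application jar in spark-submit instead of a local path.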
