Why does submitting a Spark application to Mesos fail with "Failed to load native Mesos library"?


When I try to submit a Spark application to a Mesos cluster, I get the following exception:

/home/knoldus/application/spark-2.2.0-rc4/conf/spark-env.sh: line 40: export: `/usr/local/lib/libmesos.so': not a valid identifier
/home/knoldus/application/spark-2.2.0-rc4/conf/spark-env.sh: line 41: export: `hdfs://spark-2.2.0-bin-hadoop2.7.tgz': not a valid identifier
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/09/30 14:17:31 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/09/30 14:17:31 WARN Utils: Your hostname, knoldus resolves to a loopback address: 127.0.1.1; using 192.168.0.111 instead (on interface wlp6s0)
17/09/30 14:17:31 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
Failed to load native Mesos library from 
java.lang.UnsatisfiedLinkError: Expecting an absolute path of the library:
  at java.lang.Runtime.load0(Runtime.java:806)
  at java.lang.System.load(System.java:1086)
  at org.apache.mesos.MesosNativeLibrary.load(MesosNativeLibrary.java:159)
  at org.apache.mesos.MesosNativeLibrary.load(MesosNativeLibrary.java:188)
  at org.apache.mesos.MesosSchedulerDriver.<clinit>(MesosSchedulerDriver.java:61)
  at org.apache.spark.scheduler.cluster.mesos.MesosSchedulerUtils$class.createSchedulerDriver(MesosSchedulerUtils.scala:104)
  at org.apache.spark.scheduler.cluster.mesos.MesosCoarseGrainedSchedulerBackend.createSchedulerDriver(MesosCoarseGrainedSchedulerBackend.scala:49)
  at org.apache.spark.scheduler.cluster.mesos.MesosCoarseGrainedSchedulerBackend.start(MesosCoarseGrainedSchedulerBackend.scala:170)
  at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:173)
  at org.apache.spark.SparkContext.<init>(SparkContext.scala:509)
  at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2509)
  at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:909)
  at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:901)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:901)
  at org.apache.spark.repl.Main$.createSparkSession(Main.scala:103)
  ... 47 elided

I built Spark with:

./build/mvn -Pmesos -DskipTests clean package
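
As a side check (the build itself turned out not to be the problem here), you can verify that the -Pmesos profile actually produced the Mesos integration jar. For a Spark 2.2 source build the jars typically land under assembly/target; the exact directory and Scala version below are assumptions, so adjust them to your build:

# List the built jars and look for the Mesos scheduler module
ls assembly/target/scala-2.11/jars/ | grep -i mesos
# expect something like spark-mesos_2.11-2.2.0.jar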

I set the following properties in spark-env.sh:

export MESOS_NATIVE_JAVA_LIBRARY= /usr/local/lib/libmesos.so
export SPARK_EXECUTOR_URI= hdfs://spark-2.2.0-bin-hadoop2.7.tgz

and in spark-defaults.conf:

spark.executor.uri  hdfs://spark-2.2.0-bin-hadoop2.7.tgz

Answer:

I have solved this problem. The issue is that there must be no space around the equals sign when exporting the paths:

export MESOS_NATIVE_JAVA_LIBRARY= /usr/local/lib/libmesos.so
export SPARK_EXECUTOR_URI= hdfs://spark-2.2.0-bin-hadoop2.7.tgz

For example:

export foo = bar

The shell interprets this as a request to export three names: foo, = and bar. Since = is not a valid variable name, the command fails. The variable name, the equals sign, and the value must not be separated by spaces if they are to be treated as a single assignment-and-export. That is also exactly what the "not a valid identifier" messages for lines 40 and 41 of spark-env.sh at the top of the log mean: export MESOS_NATIVE_JAVA_LIBRARY= /usr/local/lib/libmesos.so assigns an empty string to MESOS_NATIVE_JAVA_LIBRARY and then tries to export the path /usr/local/lib/libmesos.so as a separate variable name, which is why the driver later reports "Failed to load native Mesos library from" with an empty path. Remove the spaces:
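
A quick way to see this behavior in an interactive bash session (foo and bar are throwaway placeholder names; the exact error text may differ slightly between shells):

export foo = bar
# bash: export: `=': not a valid identifier
# foo and bar are merely marked for export; foo never receives the value "bar"
echo "${foo:-<empty>}"    # prints <empty>

export foo=bar            # no spaces: assign and export in one step
echo "$foo"               # prints bar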

export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so
export SPARK_EXECUTOR_URI=hdfs://spark-2.2.0-bin-hadoop2.7.tgz
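
To confirm the fix, you can source the corrected file and check that both variables now carry values and that the library really exists at that path (paths taken from the question; adjust for your installation):

source /home/knoldus/application/spark-2.2.0-rc4/conf/spark-env.sh
echo "MESOS_NATIVE_JAVA_LIBRARY=$MESOS_NATIVE_JAVA_LIBRARY"   # should print the .so path
echo "SPARK_EXECUTOR_URI=$SPARK_EXECUTOR_URI"                 # should print the hdfs:// URI
ls -l "$MESOS_NATIVE_JAVA_LIBRARY"                            # should list /usr/local/lib/libmesos.so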
