Configuring SPARK_LOCAL_IP and SPARK_MASTER_IP in a Spark installation

hrirmatl asked on 2021-05-27 in Hadoop

I am trying to install Spark on Hadoop YARN, and I am getting an error that I believe is due to a misconfiguration. I have a fully functional Hadoop installation on Ubuntu. When I run spark-submit or spark-shell, I get the error below. I would like to know whether I have set the IP addresses correctly in the relevant files. Currently I use the same IP for both Hadoop and Spark. When configuring Spark to use HDFS and YARN, do I need to provide separate IP addresses for SPARK_LOCAL_IP and SPARK_MASTER_IP in spark-env.sh?

ERROR spark.SparkContext: Error initializing SparkContext. java.net.ConnectException: Call From hadoop-VirtualBox/127.0.1.1 to hadoop-VirtualBox:9000 failed on connection exception: java.net.ConnectException: Connection refused;

Here are the software versions I am using:

Ubuntu: 18.04.1 LTS
Hadoop: 3.0.3
Spark: 2.4.4
Scala: 2.12.0
Java: 1.8.0

I downloaded the pre-built version of Spark for Hadoop from this link. Here are the IP entries provided to Hadoop in /etc/hosts:

127.0.0.1   localhost
127.0.1.1   hadoop-VirtualBox  #hadoop node master
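The 127.0.1.1 mapping for the hostname is what triggers the loopback warning later in the trace. As a hedged sketch, assuming the VM's NAT address is the 10.0.2.15 reported in the log (substitute your interface's actual address), the hosts file could instead read:

127.0.0.1   localhost
10.0.2.15   hadoop-VirtualBox  #hadoop node master

With the hostname resolving to a routable address, HDFS and Spark bind to the same interface, and setting SPARK_LOCAL_IP separately is usually unnecessary.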

.profile (I set my environment in .profile rather than .bashrc):

export PATH=$PATH:/usr/local/hadoop/bin/:/usr/local/hadoop/sbin/
export CLASSPATH=$CLASSPATH:/usr/local/hadoop/lib/*:.

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH=$JAVA_HOME/bin:$PATH

PATH=/usr/local/Spark/bin:$PATH

export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export YARN_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SPARK_HOME=/usr/local/Spark
export LD_LIBRARY_PATH=/usr/local/hadoop/lib/native:$LD_LIBRARY_PATH

export SCALA_HOME=/usr/local/Scala
export PATH=$SCALA_HOME/bin:$PATH

spark-env.sh:

export SCALA_HOME=/usr/local/Scala
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_INSTANCES=2
export SPARK_MASTER_IP=127.0.1.1

# export SPARK_MASTER_PORT=9000

export SPARK_WORKER_DIR=/usr/local/Spark/tmp

# Options read in YARN client mode

export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SPARK_CONF_DIR=/usr/local/Spark/conf
export YARN_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SPARK_EXECUTOR_INSTANCES=2
export SPARK_EXECUTOR_CORES=2
export SPARK_EXECUTOR_MEMORY=1G
export SPARK_DRIVER_MEMORY=1G
export SPARK_YARN_APP_NAME=Spark
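For comparison, a minimal sketch of how these bind-related variables are commonly handled when Spark runs purely on YARN; the 10.0.2.15 address is an assumption taken from the warning in the trace below:

# Only needed if the hostname resolves to a loopback address:
# export SPARK_LOCAL_IP=10.0.2.15   # assumed VM address from the log
# SPARK_MASTER_IP applies only to Spark standalone mode; YARN ignores it.
# Leave SPARK_MASTER_PORT unset: 9000 is already taken by the HDFS NameNode.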

spark-defaults.conf:

spark.master                     yarn
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://hadoop-VirtualBox:9000/spark-logs
spark.yarn.am.memory         512m
spark.serializer                 org.apache.spark.serializer.KryoSerializer
spark.yarn.jars          hdfs://hadoop-VirtualBox:9000/spark-jars
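Note that spark.yarn.jars expects individual jar files (globs are allowed), whereas a single zip of $SPARK_HOME/jars is what spark.yarn.archive is for. A hedged sketch of the two alternatives, reusing the paths above (pick one):

# Option A: point spark.yarn.archive at the uploaded zip
spark.yarn.archive       hdfs://hadoop-VirtualBox:9000/spark-jars/spark-jars.zip

# Option B: upload the jars individually and reference them with a glob
#   hdfs dfs -put /usr/local/Spark/jars/*.jar /spark-jars/
# spark.yarn.jars        hdfs://hadoop-VirtualBox:9000/spark-jars/*.jar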

I start the Hadoop services first, then the Spark services, as follows:

start-dfs.sh
start-yarn.sh
jps
hdfs dfs -mkdir /spark-logs

hdfs dfs -mkdir /spark-jars

# spark-jars.zip is a zip file of the jars folder in $SPARK_HOME

hdfs dfs -put /usr/local/Spark/spark-jars.zip /spark-jars

cd /usr/local/Spark/sbin
./start-all.sh

spark-submit --class org.apache.spark.examples.JavaSparkPi --master yarn --deploy-mode client /usr/local/Spark/examples/jars/spark-examples_2.11-2.4.4.jar 10
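Before submitting, a quick sanity check that the NameNode is actually listening on port 9000 can narrow the problem down; a hedged sketch using standard tools:

jps                          # NameNode should appear in the list
hdfs dfs -ls /               # fails fast if port 9000 is unreachable
ss -tlnp | grep 9000         # shows which address the NameNode is bound to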

Here is the trace from the terminal:

2019-10-20 11:55:39,512 WARN util.Utils: Your hostname, hadoop-VirtualBox resolves to a loopback address: 127.0.1.1; using 10.0.2.15 instead (on interface enp0s3)
2019-10-20 11:55:39,519 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
2019-10-20 11:55:43,942 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2019-10-20 11:55:47,883 INFO spark.SparkContext: Running Spark version 2.4.4
2019-10-20 11:55:48,135 INFO spark.SparkContext: Submitted application: JavaSparkPi
2019-10-20 11:55:48,858 INFO spark.SecurityManager: Changing view acls to: hadoop
2019-10-20 11:55:48,858 INFO spark.SecurityManager: Changing modify acls to: hadoop
2019-10-20 11:55:48,859 INFO spark.SecurityManager: Changing view acls groups to: 
2019-10-20 11:55:48,859 INFO spark.SecurityManager: Changing modify acls groups to: 
2019-10-20 11:55:48,859 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hadoop); groups with view permissions: Set(); users  with modify permissions: Set(hadoop); groups with modify permissions: Set()
2019-10-20 11:55:50,722 INFO util.Utils: Successfully started service 'sparkDriver' on port 44765.
2019-10-20 11:55:53,863 INFO spark.SparkEnv: Registering MapOutputTracker
2019-10-20 11:55:54,364 INFO spark.SparkEnv: Registering BlockManagerMaster
2019-10-20 11:55:54,395 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
2019-10-20 11:55:54,407 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
2019-10-20 11:55:55,024 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-75df7314-58f9-4c97-b827-e66072015afa
2019-10-20 11:55:55,815 INFO memory.MemoryStore: MemoryStore started with capacity 366.3 MB
2019-10-20 11:55:56,962 INFO spark.SparkEnv: Registering OutputCommitCoordinator
2019-10-20 11:55:58,780 INFO util.log: Logging initialized @26940ms
2019-10-20 11:56:00,794 INFO server.Server: jetty-9.3.z-SNAPSHOT, build timestamp: unknown, git hash: unknown
2019-10-20 11:56:01,372 INFO server.Server: Started @29549ms
2019-10-20 11:56:01,754 INFO server.AbstractConnector: Started ServerConnector@6b648010{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2019-10-20 11:56:01,772 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
2019-10-20 11:56:02,378 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6ac4944a{/jobs,null,AVAILABLE,@Spark}
2019-10-20 11:56:02,422 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1e34c607{/jobs/json,null,AVAILABLE,@Spark}
2019-10-20 11:56:02,468 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5215cd9a{/jobs/job,null,AVAILABLE,@Spark}
2019-10-20 11:56:02,528 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@31198ceb{/jobs/job/json,null,AVAILABLE,@Spark}
2019-10-20 11:56:02,596 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@9257031{/stages,null,AVAILABLE,@Spark}
2019-10-20 11:56:02,656 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@75201592{/stages/json,null,AVAILABLE,@Spark}
2019-10-20 11:56:02,672 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7726e185{/stages/stage,null,AVAILABLE,@Spark}
2019-10-20 11:56:02,721 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5dda14d0{/stages/stage/json,null,AVAILABLE,@Spark}
2019-10-20 11:56:02,759 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1db0ec27{/stages/pool,null,AVAILABLE,@Spark}
2019-10-20 11:56:02,815 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3d9fc57a{/stages/pool/json,null,AVAILABLE,@Spark}
2019-10-20 11:56:02,855 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@d4ab71a{/storage,null,AVAILABLE,@Spark}
2019-10-20 11:56:02,923 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3b4ef7{/storage/json,null,AVAILABLE,@Spark}
2019-10-20 11:56:02,941 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1af05b03{/storage/rdd,null,AVAILABLE,@Spark}
2019-10-20 11:56:02,982 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5987e932{/storage/rdd/json,null,AVAILABLE,@Spark}
2019-10-20 11:56:03,051 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1ad777f{/environment,null,AVAILABLE,@Spark}
2019-10-20 11:56:03,098 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5bbbdd4b{/environment/json,null,AVAILABLE,@Spark}
2019-10-20 11:56:03,135 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@438bad7c{/executors,null,AVAILABLE,@Spark}
2019-10-20 11:56:03,160 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@25230246{/executors/json,null,AVAILABLE,@Spark}
2019-10-20 11:56:03,194 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4fdf8f12{/executors/threadDump,null,AVAILABLE,@Spark}
2019-10-20 11:56:03,234 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4a8b5227{/executors/threadDump/json,null,AVAILABLE,@Spark}
2019-10-20 11:56:03,479 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@54f5f647{/static,null,AVAILABLE,@Spark}
2019-10-20 11:56:03,503 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2899a8db{/,null,AVAILABLE,@Spark}
2019-10-20 11:56:03,559 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1e8823d2{/api,null,AVAILABLE,@Spark}
2019-10-20 11:56:03,602 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4c432866{/jobs/job/kill,null,AVAILABLE,@Spark}
2019-10-20 11:56:03,657 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@12365c88{/stages/stage/kill,null,AVAILABLE,@Spark}
2019-10-20 11:56:03,776 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.0.2.15:4040
2019-10-20 11:56:04,336 INFO spark.SparkContext: Added JAR file:/usr/local/Spark/examples/jars/spark-examples_2.11-2.4.4.jar at spark://10.0.2.15:44765/jars/spark-examples_2.11-2.4.4.jar with timestamp 1571568964292
2019-10-20 11:56:15,228 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
2019-10-20 11:56:19,448 INFO yarn.Client: Requesting a new application from cluster with 1 NodeManagers
2019-10-20 11:56:20,449 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
2019-10-20 11:56:20,455 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
2019-10-20 11:56:20,476 INFO yarn.Client: Setting up container launch context for our AM
2019-10-20 11:56:20,553 INFO yarn.Client: Setting up the launch environment for our AM container
2019-10-20 11:56:20,753 INFO yarn.Client: Preparing resources for our AM container
2019-10-20 11:56:21,859 INFO yarn.Client: Deleted staging directory hdfs://localhost:9000/user/hadoop/.sparkStaging/application_1571568174433_0002
2019-10-20 11:56:21,887 ERROR spark.SparkContext: Error initializing SparkContext.
java.net.ConnectException: Call From hadoop-VirtualBox/127.0.1.1 to hadoop-VirtualBox:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
    at org.apache.hadoop.ipc.Client.call(Client.java:1479)
    at org.apache.hadoop.ipc.Client.call(Client.java:1412)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy13.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108)
    at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
    at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
    at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
    at org.apache.hadoop.fs.Globber.glob(Globber.java:252)
    at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1657)
    at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$5.apply(Client.scala:528)
    at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$5.apply(Client.scala:524)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:524)
    at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:865)
    at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:179)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:183)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:501)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:935)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:926)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:926)
    at org.apache.spark.examples.JavaSparkPi.main(JavaSparkPi.java:37)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
    at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
    at org.apache.hadoop.ipc.Client.call(Client.java:1451)
    ... 47 more
2019-10-20 11:56:22,320 INFO server.AbstractConnector: Stopped Spark@6b648010{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2019-10-20 11:56:22,378 INFO ui.SparkUI: Stopped Spark web UI at http://10.0.2.15:4040
2019-10-20 11:56:22,721 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
2019-10-20 11:56:22,922 INFO cluster.YarnClientSchedulerBackend: Stopped
2019-10-20 11:56:23,156 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
2019-10-20 11:56:23,355 INFO memory.MemoryStore: MemoryStore cleared
2019-10-20 11:56:23,365 INFO storage.BlockManager: BlockManager stopped
2019-10-20 11:56:23,566 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
2019-10-20 11:56:23,569 WARN metrics.MetricsSystem: Stopping a MetricsSystem that is not running
2019-10-20 11:56:23,627 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
2019-10-20 11:56:23,725 INFO spark.SparkContext: Successfully stopped SparkContext
Exception in thread "main" java.net.ConnectException: Call From hadoop-VirtualBox/127.0.1.1 to hadoop-VirtualBox:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
    at org.apache.hadoop.ipc.Client.call(Client.java:1479)
    at org.apache.hadoop.ipc.Client.call(Client.java:1412)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy13.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108)
    at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
    at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
    at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
    at org.apache.hadoop.fs.Globber.glob(Globber.java:252)
    at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1657)
    at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$5.apply(Client.scala:528)
    at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$5.apply(Client.scala:524)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:524)
    at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:865)
    at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:179)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:183)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:501)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:935)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:926)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:926)
    at org.apache.spark.examples.JavaSparkPi.main(JavaSparkPi.java:37)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
    at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
    at org.apache.hadoop.ipc.Client.call(Client.java:1451)
    ... 47 more
2019-10-20 11:56:23,899 INFO util.ShutdownHookManager: Shutdown hook called
2019-10-20 11:56:23,913 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-1659f92f-aa82-4f31-9183-f9b95d9375e3
2019-10-20 11:56:23,946 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-f353495e-4f40-48b2-91a3-a3e2caeb3500
