Java gateway process exited before sending its port number (Spark)

Asked by d8tt03nd on 2021-05-29 in Spark

I am trying to set up Spark with Anaconda on Windows 10, but I get an error when running PySpark in a Jupyter notebook. I followed the steps in the tutorial: I downloaded Java 8 and installed Spark 3.0.0 (the 3.0.0-preview2 build) with Hadoop 2.7.
I have set the paths for SPARK_HOME and JAVA_HOME and added their \bin directories to the PATH environment variable.

C:\Users\mikes>java -version
java version "1.8.0_251"
Java(TM) SE Runtime Environment (build 1.8.0_251-b08)
Java HotSpot(TM) 64-Bit Server VM (build 25.251-b08, mixed mode)
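
For illustration, the same variables could also be set from Python instead of the Windows environment dialog. This is only a sketch: the JAVA_HOME path below is a guess, not my exact install location; the Spark path is the one used later in this post.

import os

# Sketch only: adjust JAVA_HOME to the real Java 8 install directory.
os.environ['JAVA_HOME'] = r'C:\Program Files\Java\jdk1.8.0_251'
os.environ['SPARK_HOME'] = r'c:\spark\spark-3.0.0-preview2-bin-hadoop2.7'
# Prepend both \bin folders to PATH so java.exe and the Spark scripts are found.
os.environ['PATH'] = (os.environ['JAVA_HOME'] + r'\bin;'
                      + os.environ['SPARK_HOME'] + r'\bin;'
                      + os.environ['PATH'])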

In the Anaconda PowerShell prompt, pyspark itself works:

(base) PS C:\Users\mikes> pyspark
Python 3.6.5 |Anaconda, Inc.| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
20/06/05 07:14:56 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Welcome to
    ____              __
   / __/__  ___ _____/ /__
 _ \ \/ _ \/ _ `/ __/  '_/
/__ / .__/\_,_/_/ /_/\_\   version 3.0.0-preview2
   /_/

Using Python version 3.6.5 (default, Mar 29 2018 13:32:41)
SparkSession available as 'spark'.
>>>
>>> nums = sc.parallelize([1,2,3,4])
>>> nums.map(lambda x: x*x).collect()
[1, 4, 9, 16]
>>>

The next step is to run pyspark from my Jupyter notebook. I have installed findspark, and my code starts with:

import findspark
findspark.init(r'c:\spark\spark-3.0.0-preview2-bin-hadoop2.7')

# findspark.init() does not work without the explicit path, so the path must be passed.

findspark.find()
import pyspark
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession

conf = pyspark.SparkConf().setAppName('appName').setMaster('local')
sc = pyspark.SparkContext(conf=conf) #Here is the error
spark = SparkSession(sc)

The error shown is:

---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
<ipython-input-6-c561ad39905c> in <module>()
      4 conf = pyspark.SparkConf().setAppName('appName').setMaster('local')
      5 sc = pyspark.SparkConf()
----> 6 sc = pyspark.SparkContext(conf=conf)
      7 spark = SparkSession(sc)

c:\spark\spark-3.0.0-preview2-bin-hadoop2.7\python\pyspark\context.py in __init__(self, master, appName, sparkHome, pyFiles, environment, batchSize, serializer, conf, gateway, jsc, profiler_cls)
    125                 " is not allowed as it is a security risk.")
    126 
--> 127         SparkContext._ensure_initialized(self, gateway=gateway, conf=conf)
    128         try:
    129             self._do_init(master, appName, sparkHome, pyFiles, environment, batchSize, serializer,

c:\spark\spark-3.0.0-preview2-bin-hadoop2.7\python\pyspark\context.py in _ensure_initialized(cls, instance, gateway, conf)
    317         with SparkContext._lock:
    318             if not SparkContext._gateway:
--> 319                 SparkContext._gateway = gateway or launch_gateway(conf)
    320                 SparkContext._jvm = SparkContext._gateway.jvm
    321 

c:\spark\spark-3.0.0-preview2-bin-hadoop2.7\python\pyspark\java_gateway.py in launch_gateway(conf, popen_kwargs)
    103 
    104             if not os.path.isfile(conn_info_file):
--> 105                 raise Exception("Java gateway process exited before sending its port number")
    106 
    107             with open(conn_info_file, "rb") as info:

Exception: Java gateway process exited before sending its port number
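
As far as I understand, this exception means the java child process could not be started, or exited before writing its connection info. One quick diagnostic (just a sanity check, not from any of the tutorials) is to look at what the notebook kernel actually sees:

import os, shutil, subprocess

print(os.environ.get('JAVA_HOME'))    # should point at the Java 8 install
print(os.environ.get('SPARK_HOME'))   # should point at the Spark folder
print(shutil.which('java'))           # None means java is not on the kernel's PATH
subprocess.run(['java', '-version'])  # should print the 1.8.0_251 banner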

I have seen other similar questions, but my case may be different, because I have already tried their solutions, such as:
- setting another environment variable, PYSPARK_SUBMIT_ARGS, although I am not sure I did it correctly:

os.environ['PYSPARK_SUBMIT_ARGS']= "--master spark://localhost:8888"
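
From what I have read, PYSPARK_SUBMIT_ARGS is normally expected to end with the literal token pyspark-shell, and spark://localhost:8888 is the Jupyter port rather than a Spark master, so a variant of that attempt (just a sketch, I am not sure it is the fix) would be:

import os

# keep the trailing "pyspark-shell" token that pyspark's gateway launcher expects,
# and point --master at the local machine instead of the Jupyter port
os.environ['PYSPARK_SUBMIT_ARGS'] = "--master local[*] pyspark-shell"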

Other suggested solutions were:
- setting the JAVA_HOME and SPARK_HOME paths (already done)
- installing Java 8 (not Java 10)

I have spent several hours on this, even reinstalling Anaconda, because I had deleted an environment.

bq8i3lrv1#

After a week of looking for different ways to resolve this exception, I finally found another tutorial, and it solved my problem. The answer was that Python itself was the issue; the variables and paths were the same. I then installed Python and the notebook directly in Windows (without Anaconda), and the problem is now solved.
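
A quick way to confirm which interpreter the notebook kernel is actually running (not part of what I did originally, just a common sanity check) is:

import sys

print(sys.executable)  # path of the Python binary the notebook kernel is using
print(sys.version)     # should match the Python that works from the console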

xtupzzrd2#

I agree. It worked for me too: in the Jupyter notebook I ran !conda install pyspark (whereas before I had only done !pip install ...).
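
For reference, the notebook cells looked roughly like this (exact flags and package versions may differ):

# In a Jupyter notebook cell under Anaconda:
!conda install -y pyspark

# what I had tried before:
# !pip install pyspark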
