PySpark: error accessing a broadcast variable inside a UDF when running in standalone cluster mode

Asked by mefy6pfw on 2021-05-18 in Spark
import datetime

import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql import functions as f
from pyspark.sql.types import DoubleType


@f.pandas_udf(returnType=DoubleType())
def square(r: pd.Series) -> pd.Series:
    print('In pandas UDF square')
    offset_value = offset.value  # accessing the broadcast variable triggers the error
    return (r * r) + 10

if __name__ == "__main__":
    spark = SparkSession.builder.appName("Spark").getOrCreate()

    sc = spark.sparkContext

    spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")
    spark.conf.set("spark.sql.execution.arrow.enabled", "true")

    offset = sc.broadcast(10)

    x = pd.Series(range(0,100))

    df = spark.createDataFrame(pd.DataFrame(x, columns=["x"]))
    df = df.withColumn('sq',square(df.x)).withColumn('sqsq', square(f.col('sq')))
    start_time = datetime.datetime.now()   

    df.show()

    offset.unpersist()
    offset.destroy()

    spark.stop()

If I run the spark-submit command in local mode, i.e. `spark-submit.cmd --master local[*] test.py`, the code above runs fine.
With the same code in standalone cluster mode, i.e. `spark-submit.cmd --master spark://xx.xx.0.24:7077 test.py`, I get an error when accessing the broadcast variable inside the UDF:
java.io.IOException: Failed to delete original file 'c:\users\appdata\local\temp\spark-bf6b4553-f30f-4e4a-a7f7-ef117329985c\executor-3922c28f-ed1e-4348-baa4-4ed08e042b76\spark-b59e518c-a20a-4a11-b96b-b7657b1c79ea\broadcast65377915887215339' after copy to 'c:\users\appdata\local\temp\spark-bf6b4553-f30f-4e4a-a7f7-ef117329985c\executor-3922c28f-ed1e-4348-baa4-4ed08e042b76\blockmgr-ee27f0f0-ee8b-41ea-86d6-8f923845391e\37\broadcast_python'
    at org.apache.commons.io.FileUtils.moveFile(FileUtils.java:2835)
    at org.apache.spark.storage.DiskStore.moveFileToBlock(DiskStore.scala:133)
    at org.apache.spark.storage.BlockManager$TempFileBasedBlockStoreUpdater.saveToDiskStore(BlockManager.scala:424)
    at org.apache.spark.storage.BlockManager$BlockStoreUpdater.$anonfun$save$1(BlockManager.scala:343)
    at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$doPut(BlockManager.scala:1298)
If the UDF does not access the broadcast variable, the code works fine.
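The element-wise arithmetic inside the UDF can be exercised outside Spark with plain pandas, which helps confirm the failure is in broadcast-variable transport rather than in the computation itself. A minimal sketch, where `offset_value` is a hypothetical plain parameter standing in for `offset.value`:

```python
import pandas as pd


def square(r: pd.Series, offset_value: int = 10) -> pd.Series:
    # Same vectorized arithmetic as the pandas UDF above, but the
    # offset is an ordinary argument instead of a broadcast variable.
    return (r * r) + offset_value


s = pd.Series(range(5))
print(square(s).tolist())  # [10, 11, 14, 19, 26]
```

Since this runs without any Spark machinery, the error in cluster mode points at how the broadcast block is shipped to executors, not at the UDF body.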
