Cannot apply a log function to a PySpark DataFrame

Asked by 7vhp5slm on 2021-05-16 in Spark

So I have a large dataset (roughly 1 TB+) on which I have to perform many operations, and I am considering using PySpark for faster processing. Here is my setup:

import numpy as np
import pandas as pd

try:
    import pyspark
    from pyspark import SparkContext, SparkConf
    from pyspark.sql import SparkSession, SQLContext
except ImportError as e:
    raise ImportError('PySpark is not configured') from e

print(f"PySpark Version : {pyspark.__version__}")

# Creating a Spark-Context

sc = SparkContext.getOrCreate(SparkConf().setMaster('local[*]'))

# Spark Builder

spark = SparkSession.builder \
            .appName('MBLSRProcessor') \
            .config('spark.executor.memory', '10GB') \
            .getOrCreate()

# SQL Context - for SQL Query Executions

sqlContext = SQLContext(sc)

>> PySpark Version : 2.4.7

Now I want to apply the log10 function over two columns. For demonstration, consider the following data:

data = spark.createDataFrame(pd.DataFrame({
    "A" : [1, 2, 3, 4, 5],
    "B" : [4, 3, 6, 1, 8]
}))

data.head(5)
>> [Row(A=1, B=4), Row(A=2, B=3), Row(A=3, B=6), Row(A=4, B=1), Row(A=5, B=8)]

Here is what I need: log10(A + B), i.e. log10(6 + 4) = 1. For that, I wrote a function like this:

def add(a, b):
    # this is for demonstration
    return np.sum([a, b])

data = data.withColumn("ADD", add(data.A, data.B))
data.head(5)

>> [Row(A=1, B=4, ADD=5), Row(A=2, B=3, ADD=5), Row(A=3, B=6, ADD=9), Row(A=4, B=1, ADD=5), Row(A=5, B=8, ADD=13)]

However, I cannot do the same with np.log10:

def np_log(a, b):
    # actual function
    return np.log10(np.sum([a, b]))

data = data.withColumn("LOG", np_log(data.A, data.B))
data.head(5)

TypeError                                 Traceback (most recent call last)
<ipython-input-13-a5726b6c7dc2> in <module>
----> 1 data = data.withColumn("LOG", np_log(data.A, data.B))
      2 data.head(5)

<ipython-input-12-0e020707faae> in np_log(a, b)
      1 def np_log(a, b):
----> 2     return np.log10(np.sum([a, b]))

TypeError: loop of ufunc does not support argument 0 of type Column which has no callable log10 method
Answer 1 (ve7v8dk2)

The best approach is to use native Spark functions:

import pyspark.sql.functions as F
import pandas as pd

data = spark.createDataFrame(pd.DataFrame({
    "A" : [1, 2, 3, 4, 5],
    "B" : [4, 3, 6, 1, 8]
}))

data = data.withColumn("LOG", F.log10(F.col('A') + F.col('B')))

But you can also use a UDF if you prefer:

import pyspark.sql.functions as F
from pyspark.sql.types import FloatType
import numpy as np
import pandas as pd

data = spark.createDataFrame(pd.DataFrame({
    "A" : [1, 2, 3, 4, 5],
    "B" : [4, 3, 6, 1, 8]
}))

def udf_np_log(a, b):
    # actual function
    return float(np.log10(np.sum([a, b])))

np_log = F.udf(udf_np_log, FloatType())

data = data.withColumn("LOG", np_log(data.A, data.B))

+---+---+---------+
|  A|  B|      LOG|
+---+---+---------+
|  1|  4|  0.69897|
|  2|  3|  0.69897|
|  3|  6|0.9542425|
|  4|  1|  0.69897|
|  5|  8|1.1139433|
+---+---+---------+
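
For a 1 TB+ dataset, a row-at-a-time UDF pays Python serialization overhead on every row. Spark 2.4 also supports vectorized pandas UDFs, which process whole pandas Series per batch; a minimal sketch, assuming PyArrow is installed:

import numpy as np
import pandas as pd
from pyspark.sql.functions import pandas_udf, PandasUDFType

@pandas_udf('double', PandasUDFType.SCALAR)
def np_log_vec(a, b):
    # a and b arrive as pandas Series, so NumPy applies element-wise
    return np.log10(a + b)

data = data.withColumn("LOG", np_log_vec(data.A, data.B))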

Interestingly, it works with np.sum without a UDF; I suspect that is because np.sum simply calls the + operator, which is valid on Spark DataFrame columns.
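
A quick check supports that reading: np.sum reduces the Python list with +, so it builds a lazy Column expression instead of computing a number. A minimal sketch:

import numpy as np
from pyspark.sql import Column

expr = np.sum([data.A, data.B])   # object array reduced with Column.__add__
print(isinstance(expr, Column))   # True: an expression was built, nothing was computed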
