Decoding a string URL column in PySpark

lf5gs5x2 · posted 9 months ago in Spark

I am using Python 2.7 and Spark 2.2.0. I created a DataFrame in PySpark that has a string column containing URLs.

df = spark.createDataFrame([('example.com?title=%D0%BF%D1%80%D0%B0%D0%B2%D0%BE%D0%B2%D0%B0%D1%8F+%D0%B7%D0%B0%D1%89%D0%B8%D1%82%D0%B0',)], ['url'])
df.show(1, False)

+-------------------------------------------------------------------------------------------------------+
|url                                                                                                    |
+-------------------------------------------------------------------------------------------------------+
|example.com?title=%D0%BF%D1%80%D0%B0%D0%B2%D0%BE%D0%B2%D0%B0%D1%8F+%D0%B7%D0%B0%D1%89%D0%B8%D1%82%D0%B0|
+-------------------------------------------------------------------------------------------------------+

To decode all the URLs in the column, I tried to use urllib. I created a UDF and used it like this:

import urllib
from pyspark.sql.types import StringType
from pyspark.sql.functions import udf

decode_url = udf(lambda val: urllib.unquote(val).decode('utf8'), StringType())
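
Applying it to the column would look something like this (the exact call is not shown in the question):

df.withColumn('url', decode_url('url')).show(1, False)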

After applying the UDF to my column, I expected this:

+---------------------------------+
|url                              |
+---------------------------------+
|example.com?title=правовая+защита|
+---------------------------------+

But I got an error instead:

UnicodeEncodeError: 'ascii' codec can't encode characters in position 18-33: ordinal not in range(128)

    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
    at org.apache.spark.sql.execution.python.BatchEvalPythonExec$$anonfun$doExecute$1.apply(BatchEvalPythonExec.scala:144)
    at org.apache.spark.sql.execution.python.BatchEvalPythonExec$$anonfun$doExecute$1.apply(BatchEvalPythonExec.scala:87)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:797)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:797)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:108)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

If I take a single URL from the column and try to decode it on its own, it works fine:

import urllib 
url='example.com?title=%D0%BF%D1%80%D0%B0%D0%B2%D0%BE%D0%B2%D0%B0%D1%8F+%D0%B7%D0%B0%D1%89%D0%B8%D1%82%D0%B0'
print urllib.unquote(url).decode('utf8')
example.com?title=правовая+защита

oyxsuwqo1#

There seems to be some odd encoding going on under the hood. Why don't you do the encoding explicitly yourself?
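
Concretely, what happens under the hood on Python 2 (a minimal sketch, outside Spark): PySpark hands the UDF a unicode object rather than a byte str, and calling .decode() on a unicode value first encodes it implicitly with the ASCII codec, which is exactly what fails.

import urllib

# Inside the UDF the value arrives as unicode, not as a byte str:
val = u'example.com?title=%D0%BF%D1%80%D0%B0%D0%B2%D0%BE%D0%B2%D0%B0%D1%8F+%D0%B7%D0%B0%D1%89%D0%B8%D1%82%D0%B0'

# unquote() keeps the result unicode, and .decode('utf8') then
# implicitly encodes it with the ASCII codec first, reproducing
# the UnicodeEncodeError from the question:
urllib.unquote(val).decode('utf8')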

>>> decode_udf = udf(lambda val: urllib.unquote(val.encode('utf-8')).decode('utf-8'), StringType())
>>> df.withColumn('decoded_url', decode_udf('url')).show(truncate=False)
+-------------------------------------------------------------------------------------------------------+---------------------------------+
|url                                                                                                    |decoded_url                      |
+-------------------------------------------------------------------------------------------------------+---------------------------------+
|example.com?title=%D0%BF%D1%80%D0%B0%D0%B2%D0%BE%D0%B2%D0%B0%D1%8F+%D0%B7%D0%B0%D1%89%D0%B8%D1%82%D0%B0|example.com?title=правовая+защита|
+-------------------------------------------------------------------------------------------------------+---------------------------------+
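For reference, on Python 3 (where urllib.unquote moved to urllib.parse.unquote and strings are Unicode by default) the same UDF needs no encode/decode round-trip at all; a minimal sketch:

from urllib.parse import unquote

from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

# unquote() accepts and returns str directly on Python 3:
decode_udf = udf(lambda val: unquote(val) if val is not None else None, StringType())
df.withColumn('decoded_url', decode_udf('url')).show(truncate=False)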

zpjtge222#

If you want to do this in Spark SQL, you can do it like this. Note: the encoded field is named dctr.

Input: im_segments%3Debejz4gv%2Ck1GmZLwg%2C8zY92P4g%2Cka6ee4eb%2CgPKlZXXb%2CqkVvpGk9%2Cky1ee4Dk%2CgvqKoW0b%2CgO5l6Zrk%2CgO5lGpdk%2CxkD6AYgm%2CgO5rENWk%2Cg7VrxvDb
Expected output: im_segments=ebejz4gv,k1GmZLwg,8zY92P4g,ka6ee4eb,gPKlZXXb,qkVvpGk9,ky1ee4Dk,gvqKoW0b,gO5l6Zrk,gO5lGpdk,xkD6AYgm,gO5rENWk,g7VrxvDb
Answer:

select distinct reflect('java.net.URLDecoder', 'decode', dctr, 'utf-8') as dctr from table
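
The same reflect() call also works from the DataFrame API via expr(); a sketch, applied to the question's url column instead of dctr:

from pyspark.sql import functions as F

# reflect() invokes the static Java method
# java.net.URLDecoder.decode(String, String):
df.withColumn(
    'decoded_url',
    F.expr("reflect('java.net.URLDecoder', 'decode', url, 'utf-8')")
).show(truncate=False)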

mjqavswn3#

Spark 3.5+

F.url_decode('url')

Spark 3.4+

F.expr("url_decode(url)")

Full example:

from pyspark.sql import functions as F

df.withColumn('url', F.url_decode('url')).show(1, 0)
# +---------------------------------+
# |url                              |
# +---------------------------------+
# |example.com?title=правовая защита|
# +---------------------------------+

df.withColumn('url', F.expr("url_decode(url)")).show(1, 0)
# +---------------------------------+
# |url                              |
# +---------------------------------+
# |example.com?title=правовая защита|
# +---------------------------------+
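
Note that url_decode decodes application/x-www-form-urlencoded input, so + becomes a space; that is why the output above reads "правовая защита" rather than the "правовая+защита" the question expected.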
