PySpark: create a unique ID for each combination of values from two columns of a Spark DataFrame

4xrmg8kj  posted on 2023-06-05  in  Spark

I have a Spark DataFrame with six columns, say (col1, col2, ..., col6). I want to create a unique ID for each combination of values in "col1" and "col2" and add it to the DataFrame. Can anyone help me with some PySpark code to do this?


rqdpfwrv1#

You can use monotonically_increasing_id (PySpark >= 1.6) or monotonicallyIncreasingId (PySpark < 1.6) to achieve this.

>>> from pyspark.sql.functions import monotonically_increasing_id
>>> rdd=sc.parallelize([[12,23,3,4,5,6],[12,23,56,67,89,20],[12,23,0,0,0,0],[12,2,12,12,12,23],[1,2,3,4,56,7],[1,2,3,4,56,7]])
>>> df = rdd.toDF(['col_1','col_2','col_3','col_4','col_5','col_6'])
>>> df.show()
+-----+-----+-----+-----+-----+-----+
|col_1|col_2|col_3|col_4|col_5|col_6|
+-----+-----+-----+-----+-----+-----+
|   12|   23|    3|    4|    5|    6|
|   12|   23|   56|   67|   89|   20|
|   12|   23|    0|    0|    0|    0|
|   12|    2|   12|   12|   12|   23|
|    1|    2|    3|    4|   56|    7|
|    1|    2|    3|    4|   56|    7|
+-----+-----+-----+-----+-----+-----+

>>> df_1=df.groupBy(df.col_1,df.col_2).count().withColumn("id", monotonically_increasing_id()).select(['col_1','col_2','id'])
>>> df_1.show()
+-----+-----+-------------+
|col_1|col_2|           id|
+-----+-----+-------------+
|   12|   23|  34359738368|
|    1|    2|1434519076864|
|   12|    2|1554778161152|
+-----+-----+-------------+

>>> df.join(df_1,(df.col_1==df_1.col_1) & (df.col_2==df_1.col_2)).drop(df_1.col_1).drop(df_1.col_2).show()
+-----+-----+-----+-----+-----+-----+-------------+
|col_3|col_4|col_5|col_6|col_1|col_2|           id|
+-----+-----+-----+-----+-----+-----+-------------+
|    3|    4|    5|    6|   12|   23|  34359738368|
|   56|   67|   89|   20|   12|   23|  34359738368|
|    0|    0|    0|    0|   12|   23|  34359738368|
|    3|    4|   56|    7|    1|    2|1434519076864|
|    3|    4|   56|    7|    1|    2|1434519076864|
|   12|   12|   12|   23|   12|    2|1554778161152|
+-----+-----+-----+-----+-----+-----+-------------+
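A small variant of the same idea (a sketch, assuming a running SparkSession/SparkContext as above): joining on a list of column names keeps a single copy of col_1 and col_2, so the explicit drop calls are not needed. Also note that monotonically_increasing_id only guarantees uniqueness within a single job run; the generated ids can differ between runs.

from pyspark.sql.functions import monotonically_increasing_id

# distinct (col_1, col_2) pairs, each tagged with a unique id
pairs = df.select('col_1', 'col_2').distinct() \
          .withColumn('id', monotonically_increasing_id())

# joining on the column-name list avoids duplicate col_1/col_2 columns
df_with_id = df.join(pairs, on=['col_1', 'col_2'], how='left')
df_with_id.show()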

k3fezbri2#

If you really need to generate the unique ID from col1 and col2, you can also create a hash value using Spark's sha2 function.
First, let's generate some dummy data:

from random import randint

max_range = 10
df1 = spark.createDataFrame(
            [(x, x * randint(1, max_range), x * 10 * randint(1, max_range)) for x in range(1, max_range)], 
            ['C1', 'C2', 'C3'])

>>> df1.show()
+---+---+---+
| C1| C2| C3|
+---+---+---+
|  1|  1| 60|
|  2| 14|180|
|  3| 21|270|
|  4| 16|360|
|  5| 35|250|
|  6| 30|480|
|  7| 28|210|
|  8| 80|320|
|  9| 45|360|
+---+---+---+

Then create a new uid column from columns C2 and C3 with the following code:

from pyspark.sql.functions import col, sha2, concat

df1.withColumn("uid", sha2(concat(col("C2"), col("C3")), 256)).show(10, False)

And the output:

+---+---+---+--------------------+
| C1| C2| C3|                 uid|
+---+---+---+--------------------+
|  1|  1| 60|a512db2741cd20693...|
|  2| 14|180|2f6543dc6c0e06e4a...|
|  3| 21|270|bd3c65ddde4c6f733...|
|  4| 16|360|c7a1e8c59fc9dcc21...|
|  5| 35|250|cba1aeb7a72d9ae27...|
|  6| 30|480|ad7352ff8927cf790...|
|  7| 28|210|ea7bc25aa7cd3503f...|
|  8| 80|320|02e1d953517339552...|
|  9| 45|360|b485cf8f710a65755...|
+---+---+---+--------------------+
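One caveat worth checking for your data: concatenating the raw values with no separator lets different pairs collide, e.g. (1, 23) and (12, 3) both produce the string "123" before hashing. A sketch using concat_ws with an explicit delimiter (and explicit string casts) avoids this:

from pyspark.sql.functions import col, sha2, concat_ws

# the "|" delimiter keeps (1, 23) and (12, 3) distinct before hashing
df1.withColumn(
    "uid",
    sha2(concat_ws("|", col("C2").cast("string"), col("C3").cast("string")), 256)
).show(10, False)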

ufj5ltwl3#

You can use the rank function.

rdd=sc.parallelize([[12,23,3,4,5,6],[12,23,56,67,89,20],[12,23,0,0,0,0],[12,2,12,12,12,23],[1,2,3,4,56,7],[1,2,3,4,56,7]])
df = rdd.toDF(['col_1','col_2','col_3','col_4','col_5','col_6'])
df.show()
+-----+-----+-----+-----+-----+-----+
|col_1|col_2|col_3|col_4|col_5|col_6|
+-----+-----+-----+-----+-----+-----+
|   12|   23|    3|    4|    5|    6|
|   12|   23|   56|   67|   89|   20|
|   12|   23|    0|    0|    0|    0|
|   12|    2|   12|   12|   12|   23|
|    1|    2|    3|    4|   56|    7|
|    1|    2|    3|    4|   56|    7|
+-----+-----+-----+-----+-----+-----+

To generate the unique id, pass the list of columns to the Window's orderBy function:

from pyspark.sql import Window
from pyspark.sql.functions import rank

df.withColumn("uid", rank().over(Window.orderBy("col_1", "col_2"))).show()

+-----+-----+-----+-----+-----+-----+---+
|col_1|col_2|col_3|col_4|col_5|col_6|uid|
+-----+-----+-----+-----+-----+-----+---+
|1    |2    |3    |4    |56   |7    |1  |
|1    |2    |3    |4    |56   |7    |1  |
|12   |2    |12   |12   |12   |23   |3  |
|12   |23   |56   |67   |89   |20   |4  |
|12   |23   |3    |4    |5    |6    |4  |
|12   |23   |0    |0    |0    |0    |4  |
+-----+-----+-----+-----+-----+-----+---+
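Note that rank leaves gaps when a pair repeats (1, 1, 3, 4, ...). If consecutive ids per pair are preferred, dense_rank can be swapped in; a sketch under the same setup:

from pyspark.sql import Window
from pyspark.sql.functions import dense_rank

# dense_rank assigns consecutive ids per distinct (col_1, col_2) pair: 1, 1, 2, 3, 3, 3
df.withColumn("uid", dense_rank().over(Window.orderBy("col_1", "col_2"))).show()

Keep in mind that a window with orderBy but no partitionBy pulls all rows into a single partition, which can be slow on large data.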
