Aggregate a DataFrame in PySpark

s4chpxco · posted 2021-05-27 in Spark

I'm using Spark 1.6.2 with DataFrames.
I want to transform this DataFrame:

+---------+-------------+-----+-------+-------+-------+-------+--------+
|ID       |           P |index|xinf   |xup    |yinf   |ysup   |     M  |
+---------+-------------+-----+-------+-------+-------+-------+--------+
|        0|10279.9003906|   13|    0.3|    0.5|    2.5|    3.0|540928.0|
|        2|12024.2998047|   13|    0.3|    0.5|    2.5|    3.0|541278.0|
|        0|10748.7001953|   13|    0.3|    0.5|    2.5|    3.0|541243.0|
|        1|      10988.5|   13|    0.3|    0.5|    2.5|    3.0|540917.0|
+---------+-------------+-----+-------+-------+-------+-------+--------+

into this one:

+---------+-------------+-----+-------+-------+-------+-------+-----------------+
|ID       |           P |index|xinf   |xup    |yinf   |ysup   |                M|
+---------+-------------+-----+-------+-------+-------+-------+-----------------+
|        0|10514.3002929|   13|    0.3|    0.5|    2.5|    3.0|540928.0,541243.0|
|        2|12024.2998047|   13|    0.3|    0.5|    2.5|    3.0|         541278.0|
|        1|      10988.5|   13|    0.3|    0.5|    2.5|    3.0|         540917.0|
+---------+-------------+-----+-------+-------+-------+-------+-----------------+

So I want to reduce by ID, computing the mean of the P values and concatenating the M values. For ID 0, for example, P becomes (10279.9003906 + 10748.7001953) / 2 ≈ 10514.3002929 and M becomes 540928.0,541243.0. But I can't work out how to do this with Spark's agg function.
Can you help me?


5f0d552i #1

You can groupBy("ID") and then aggregate each column as needed: mean for P, first for the columns that are constant within each group, and collect_list to gather the M values. Something like this should do the trick:

from pyspark.sql.functions import first, collect_list, mean

df.groupBy("ID").agg(mean("P").alias("P"),           # average of P per ID
                     first("index").alias("index"),   # constant within a group
                     first("xinf").alias("xinf"),
                     first("xup").alias("xup"),
                     first("yinf").alias("yinf"),
                     first("ysup").alias("ysup"),
                     collect_list("M").alias("M"))    # gather all M values
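
Note that collect_list returns an array column (and on Spark 1.6 it is a Hive UDAF, so it needs a HiveContext). If you want M as a single comma-separated string, as in your expected output, here is a minimal sketch of one way to do it, assuming M is first cast to a string so that concat_ws can join the collected values:

from pyspark.sql.functions import col, concat_ws, collect_list, mean, first

# concat_ws joins an array<string> with a separator, so cast M to string first
result = (df.withColumn("M", col("M").cast("string"))
            .groupBy("ID")
            .agg(mean("P").alias("P"),
                 first("index").alias("index"),
                 first("xinf").alias("xinf"),
                 first("xup").alias("xup"),
                 first("yinf").alias("yinf"),
                 first("ysup").alias("ysup"),
                 concat_ws(",", collect_list("M")).alias("M")))  # e.g. "540928.0,541243.0"
result.show()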
