I am trying to generate a JSON string from a nested PySpark DataFrame, but I am losing the key names. My initial dataset looks like this:
data = [
{"foo": [1, 2], "bar": [4, 5], "buzz": [7, 8]},
{"foo": [1], "bar": [4], "buzz": [7]},
{"foo": [1, 2, 3], "bar": [4, 5, 6], "buzz": [7, 8, 9]},
]
df = spark.read.json(sc.parallelize(data))
df.show()
## +---------+---------+---------+
## | bar| buzz| foo|
## +---------+---------+---------+
## | [4, 5]| [7, 8]| [1, 2]|
## | [4]| [7]| [1]|
## |[4, 5, 6]|[7, 8, 9]|[1, 2, 3]|
## +---------+---------+---------+
I then zip the columns together with arrays_zip:
from pyspark.sql import functions as F

df_zipped = (
df
.withColumn(
"zipped",
F.arrays_zip(
F.col("foo"),
F.col("bar"),
F.col("buzz"),
)
)
)
df_zipped.printSchema()
root
|-- bar: array (nullable = true)
| |-- element: long (containsNull = true)
|-- buzz: array (nullable = true)
| |-- element: long (containsNull = true)
|-- foo: array (nullable = true)
| |-- element: long (containsNull = true)
|-- zipped: array (nullable = true)
| |-- element: struct (containsNull = false)
| | |-- foo: long (nullable = true)
| | |-- bar: long (nullable = true)
| | |-- buzz: long (nullable = true)
The problem comes when I call to_json on the zipped array: it drops the foo, bar and buzz keys and uses the element indices as the keys instead.
(
df_zipped
.withColumn("zipped", F.to_json("zipped"))
.select("zipped")
.show(truncate=False)
)
+-------------------------------------------------------------+
|zipped |
+-------------------------------------------------------------+
|[{"0":1,"1":4,"2":7},{"0":2,"1":5,"2":8}] |
|[{"0":1,"1":4,"2":7}] |
|[{"0":1,"1":4,"2":7},{"0":2,"1":5,"2":8},{"0":3,"1":6,"2":9}]|
+-------------------------------------------------------------+
How can I keep "bar", "buzz" and "foo" as the keys instead of 0, 1, 2?
2 Answers
7gyucuyw1#
Manually specifying a schema also works: the foo, bar and buzz names have to sit on the struct elements inside the array, not on the actual data fields themselves. Define that schema by hand and cast the zipped column to it, and to_json then produces the desired result; a sketch follows below.
Note: casting to LongType in the schema did not work.
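A minimal sketch of this schema-cast approach, assuming the df_zipped column from the question; the exact schema is my assumption, with StringType fields because of the note above that LongType did not work:
from pyspark.sql import functions as F
from pyspark.sql.types import ArrayType, StringType, StructField, StructType

# Assumed target schema: the foo/bar/buzz names sit on the struct elements
# inside the array. StringType is used because LongType reportedly did not work.
zipped_schema = ArrayType(
    StructType([
        StructField("foo", StringType()),
        StructField("bar", StringType()),
        StructField("buzz", StringType()),
    ])
)

(
    df_zipped
    .withColumn("zipped", F.col("zipped").cast(zipped_schema))
    .withColumn("zipped", F.to_json("zipped"))
    .select("zipped")
    .show(truncate=False)
)
## Keys should now be preserved, e.g. {"foo":"1","bar":"4","buzz":"7"}
## (the values come out as JSON strings because of the StringType cast)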
5hcedyr02#
This is not a super pretty answer (since you have to spell out the keys explicitly), but it is better than what I suggested in the comments.
Use transform with map:
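A minimal sketch of the transform-with-map idea, assuming the original df from the question and Spark 2.4+ for the SQL transform function; the keys are written out by hand, which is the drawback mentioned above:
from pyspark.sql import functions as F

# transform() walks over "foo" with the element x and its index i, and builds
# a map whose keys are spelled out explicitly; bar[i] and buzz[i] pick the
# matching elements from the other arrays.
(
    df
    .withColumn(
        "zipped",
        F.expr("transform(foo, (x, i) -> map('foo', x, 'bar', bar[i], 'buzz', buzz[i]))"),
    )
    .withColumn("zipped", F.to_json("zipped"))
    .select("zipped")
    .show(truncate=False)
)
## Each row should now read like [{"foo":1,"bar":4,"buzz":7}, ...]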