Spark DataFrame decimal precision

Asked by 1zmg4dgp, 8 months ago, in Apache

I have a DataFrame:

import org.apache.spark.sql.functions._

val groupby = df.groupBy($"column1", $"Date")
    .agg(sum("amount").as("amount"))
    .orderBy($"column1", desc("Date"))

When I apply a window function to add a new difference column:

import org.apache.spark.sql.expressions.Window

val windowspec = Window.partitionBy("column1").orderBy(desc("Date"))

groupby.withColumn("difference", lead($"amount", 1, 0).over(windowspec)).show()

+--------+------------+-----------+--------------------------+
| Column | Date       | Amount    | Difference               |
+--------+------------+-----------+--------------------------+
| A      | 3/31/2017  | 12345.45  | 3456.540000000000000000  |
| A      | 2/28/2017  | 3456.54   | 34289.430000000000000000 |
| A      | 1/31/2017  | 34289.43  | 45673.987000000000000000 |
| A      | 12/31/2016 | 45673.987 | 0.00E+00                 |
+--------+------------+-----------+--------------------------+

The difference column comes out as a decimal with trailing zeros. When I call printSchema() on the result, the data type of difference is decimal(38,18). Can someone tell me how to change the data type to decimal(38,2), or how to remove the trailing zeros?


eeq64g8w1#

You can cast the data to a specific decimal precision and scale, like below:

import org.apache.spark.sql.types.DataTypes

lead($"amount", 1, 0).over(windowspec).cast(DataTypes.createDecimalType(32, 2))
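For context, here is a minimal sketch of how that cast would slot into the pipeline from the question (the groupby DataFrame, windowspec, and column names are taken from the question; decimal(38, 2) is used here because that is the precision the question asks for):

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, desc, lead}
import org.apache.spark.sql.types.DataTypes

val windowspec = Window.partitionBy("column1").orderBy(desc("Date"))

// Cast the lead() result before attaching it, so the new column is
// decimal(38,2) instead of the decimal(38,18) reported by printSchema().
val result = groupby.withColumn(
  "difference",
  lead(col("amount"), 1, 0).over(windowspec).cast(DataTypes.createDecimalType(38, 2))
)

result.printSchema()  // difference: decimal(38,2)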

o7jaxewo2#

In plain SQL, you can use the well-known technique:

SELECT ceil(100 * column_name_double)/100 AS cost ...
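As a sketch, the same technique applied to the question's data through Spark SQL (the view name grouped is made up for illustration; note that ceil always rounds up, while round(amount, 2) would round half-up instead):

groupby.createOrReplaceTempView("grouped")

// ceil(100 * x) / 100 keeps two decimal places by rounding the value up.
spark.sql("""
  SELECT column1, Date, ceil(100 * amount) / 100 AS amount
  FROM grouped
""").show()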

cig3rfwq3#

from pyspark.sql.types import DecimalType

# Cast the column in place to decimal with precision 10 and scale 2.
df = df.withColumn(column_name, df[column_name].cast(DecimalType(10, 2)))
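Note that casting to a narrower scale rounds the value rather than truncating it (Spark rounds decimals half-up on cast), so a value like 45673.987 would come out as 45673.99 as decimal(10,2).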
