How to map a Kafka source to a Scala case class in Spark Structured Streaming

1sbrub3j · 2021-06-08 · Kafka

I'm trying to use Structured Streaming in Spark because it fits my use case well. However, I can't find a way to map the incoming Kafka data to a case class. This is as far as I got with the official documentation:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.from_json
import org.apache.spark.sql.types._
import sparkSession.implicits._

val kafkaDF: DataFrame = sparkSession
    .readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", bootstrapServers_CML)
    .option("subscribe", topics_ME)
    .option("startingOffsets", "latest")
    .load()
    .selectExpr("cast (value as string) as json") // Kafka records arrive with a fixed schema (key, value, topic, partition, offset, timestamp, ...)

val schema_ME = StructType(Seq(
    StructField("Parm1", StringType, true),
    StructField("Parm2", StringType, true),
    StructField("Parm3", TimestampType, true)))  

val mobEventDF: DataFrame = kafkaDF
    .select(from_json($"json", schema_ME).as("mobEvent")) // Use a StructType to impose the application-specific schema. Can't seem to use a case class for the schema directly yet. Perhaps with a later API?
    .na.drop()
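
On the inline comment above: the StructType does not have to be written out by hand. Spark can derive it from a case class through Encoders. A minimal sketch, assuming a hypothetical intermediate case class whose field names match the JSON keys (the ME class further down uses different names, which is why the answer renames the columns):

import org.apache.spark.sql.Encoders

// Hypothetical case class mirroring the JSON keys exactly.
case class MERaw(Parm1: String, Parm2: String, Parm3: java.sql.Timestamp)

// Derives the same StructType as the hand-written schema_ME.
val schemaFromCaseClass = Encoders.product[MERaw].schema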

mobEventDF ends up with this schema:

root
 |-- mobEvent: struct (nullable = true)
 |    |-- Parm1: string (nullable = true)
 |    |-- Parm2: string (nullable = true)
 |    |-- Parm3: timestamp (nullable = true)

Is there a better way to do this? How can I map this directly into the Scala case class below?

import java.sql.Timestamp

case class ME(name: String,
              factory: String,
              delay: Timestamp)

Answer 1 (kfgdxczn)

Select and rename all of the fields, then call the as method:

mobEventDF.select($"mobEvent.*").toDF("name", "factory", "delay").as[ME]
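
Putting it together, a minimal end-to-end sketch under the assumptions above (the console sink and its options are only for illustration):

import org.apache.spark.sql.Dataset

// Flatten the struct, rename the columns positionally to match ME, then bind the encoder.
val meDS: Dataset[ME] = mobEventDF
  .select($"mobEvent.*")            // yields Parm1, Parm2, Parm3
  .toDF("name", "factory", "delay") // positional rename to ME's field names
  .as[ME]                           // requires the implicits imported earlier

// Illustrative sink: print each micro-batch of typed events.
meDS.writeStream
  .format("console")
  .option("truncate", "false")
  .start()
  .awaitTermination()

Note that the toDF rename is positional; an equivalent alternative is to select each field with an explicit alias, which is more robust if the struct's field order ever changes.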
