DataFrame to Dataset conversion (Scala)

xkftehaa · posted 2021-06-06 · Kafka

I'm trying to deserialize Kafka message values into instances of a case class (I produce the messages on the other side).
This code:

    import ss.implicits._
    import org.apache.spark.sql.functions._
    import org.apache.spark.sql.{Encoder, Encoders}

    val enc: Encoder[TextRecord] = Encoders.product[TextRecord]
    // Register a UDF that turns the raw Kafka value bytes back into a TextRecord
    ss.udf.register("deserialize", (bytes: Array[Byte]) =>
      DefSer.deserialize(bytes).asInstanceOf[TextRecord]
    )

    val inputStream = ss.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", conf.getString("bootstrap.servers"))
      .option("subscribe", topic)
      .option("startingOffsets", "earliest")
      .load()

    inputStream.printSchema

    val records = inputStream
        .selectExpr(s"deserialize(value) AS record")

    records.printSchema

    val rec2 = records.as(enc)

    rec2.printSchema

produces this output:

root
 |-- key: binary (nullable = true)
 |-- value: binary (nullable = true)
 |-- topic: string (nullable = true)
 |-- partition: integer (nullable = true)
 |-- offset: long (nullable = true)
 |-- timestamp: timestamp (nullable = true)
 |-- timestampType: integer (nullable = true)

root
 |-- record: struct (nullable = true)
 |    |-- eventTime: timestamp (nullable = true)
 |    |-- lineLength: integer (nullable = false)
 |    |-- windDirection: float (nullable = false)
 |    |-- windSpeed: float (nullable = false)
 |    |-- gustSpeed: float (nullable = false)
 |    |-- waveHeight: float (nullable = false)
 |    |-- dominantWavePeriod: float (nullable = false)
 |    |-- averageWavePeriod: float (nullable = false)
 |    |-- mWaveDirection: float (nullable = false)
 |    |-- seaLevelPressure: float (nullable = false)
 |    |-- airTemp: float (nullable = false)
 |    |-- waterSurfaceTemp: float (nullable = false)
 |    |-- dewPointTemp: float (nullable = false)
 |    |-- visibility: float (nullable = false)
 |    |-- pressureTendency: float (nullable = false)
 |    |-- tide: float (nullable = false)

However, when I go to sink the stream:

val debugOut = rec2.writeStream
      .format("console")
      .option("truncate", "false")
      .start()

    debugOut.awaitTermination()

Catalyst complains:

Caused by: org.apache.spark.sql.AnalysisException: cannot resolve '`eventTime`' given input columns: [record];
    at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)

I've tried a number of ways to "pull up the TextRecord", such as calling rec2.map(r => r.getAs[TextRecord](0)) and explode("record"), but I keep running into ClassCastExceptions.
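For context, the AnalysisException above is Catalyst saying that the TextRecord encoder expects columns named eventTime, windSpeed, and so on at the top level, whereas the DataFrame only exposes the single struct column record. A minimal sketch of one workaround (not from the original post), assuming the deserialize UDF above is kept, is to flatten that struct before applying the encoder:

    // Sketch: flatten the struct produced by the UDF so its fields become
    // top-level columns, then apply the TextRecord encoder defined above.
    val flattened = records.select("record.*")
    val rec2 = flattened.as(enc)
    rec2.printSchema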


uurv41yg1#

The simplest way to do this is to use the map function:

import ss.implicits._

val inputStream = ss.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", conf.getString("bootstrap.servers"))
      .option("subscribe", topic)
      .option("startingOffsets", "earliest")
      .load()

// Deserialize each Kafka record's value bytes back into a TextRecord
val records = inputStream.map(row =>
  DefSer.deserialize(row.getAs[Array[Byte]]("value")).asInstanceOf[TextRecord]
)
`records` will then directly be a `Dataset[TextRecord]`.
Also, as long as you import the SparkSession implicits, you don't need to provide an Encoder for the case class explicitly; Scala will derive one for you implicitly.
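As a quick usage sketch (reusing the console sink from the question), the typed Dataset can be sunk directly, and each TextRecord field shows up as its own column:

    // Sketch: write the Dataset[TextRecord] to the console sink.
    val query = records.writeStream
      .format("console")
      .option("truncate", "false")
      .start()
    query.awaitTermination()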
