I'm using the Hortonworks tool suite and trying to parse JSON data from a Kafka topic into a DataFrame. However, when I query the in-memory table, the DataFrame's schema looks correct, but all of the values are null, and I can't figure out why.
The JSON data arriving on the Kafka topic looks like this:
{"index":"0","Conrad":"Persevering system-worthy intranet","address":"8905 Robert Prairie\nJoefort, LA 41089","bs":"envisioneer web-enabled mindshare","city":"Davidland","date_time":"1977-06-26 06:12:48","email":"eric56@parker-robinson.com","paragraph":"Kristine Nash","randomdata":"Growth special factor bit only. Thing agent follow moment seat. Nothing agree that up view write include.","state":"1030.0"}
The code in my Zeppelin notebook is as follows:
%dep
z.load("org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.1")
%pyspark
# Defining my schema
from pyspark.sql.types import StructType, StringType, LongType, IntegerType
from pyspark.sql.functions import from_json, col
schema = StructType() \
    .add("index", IntegerType()) \
    .add("Conrad", StringType()) \
    .add("address", StringType()) \
    .add("bs", StringType()) \
    .add("city", StringType()) \
    .add("date_time", LongType()) \
    .add("email", StringType()) \
    .add("name", StringType()) \
    .add("paragraph", StringType()) \
    .add("randomdata", IntegerType()) \
    .add("state", StringType())
# Read data from kafka topic
lines = spark.readStream.format("kafka") \
    .option("kafka.bootstrap.servers", "x.x.x.x:2181") \
    .option("startingOffsets", "latest") \
    .option("subscribe", "testdata") \
    .load() \
    .select(from_json(col("value").cast("string"), schema).alias("parsed_value"))
# Start the stream and query the in-memory table
query = lines.writeStream.format("memory").queryName("t10").start()
raw = spark.sql("select parsed_value.* from t10")
I'm currently defining the schema explicitly, but my end goal is to fetch the Avro schema from the Hortonworks Schema Registry. If someone could show me how to do that as well, it would be great.
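For the registry part, a rough sketch of fetching the latest schema text over the registry's REST API might look like the following. The host/port, endpoint path, and `schemaText` response field are all assumptions here; verify them against your Schema Registry version's API documentation:

```python
import json
import urllib.request

def latest_version_url(registry_url, schema_name):
    # Build the "latest version" endpoint path (assumed API layout).
    return f"{registry_url}/schemas/{schema_name}/versions/latest"

def fetch_latest_avro_schema(registry_url, schema_name):
    """Fetch the Avro schema text for the latest registered schema version."""
    with urllib.request.urlopen(latest_version_url(registry_url, schema_name)) as resp:
        body = json.load(resp)
    return body["schemaText"]  # assumed response field name

# Hypothetical usage -- substitute your registry host and schema name:
# avro_schema = fetch_latest_avro_schema(
#     "http://x.x.x.x:7788/api/v1/schemaregistry", "testdata")
```

From there the Avro schema text still needs to be mapped to a Spark `StructType` (for example via the spark-avro package's schema converters on the JVM side) before it can be passed to `from_json`.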
Thanks!