How can I write the same stream from two different DataFrames in console format?

dm7nw8vv · posted 2021-06-06 · in Kafka

I'm new to Spark Structured Streaming and am stuck on a simple scenario:
I'm trying to write a single stream out as two different DataFrames.

from pyspark.sql import SparkSession
from pyspark.sql import functions as f

spark = SparkSession.builder.appName("dual-sink-stream").getOrCreate()

# Read the source stream from Kafka
df = spark \
        .readStream \
        .format("kafka") \
        .option("kafka.bootstrap.servers", "localhost:9092") \
        .option("subscribe", "topic1") \
        .option("failOnDataLoss", False) \
        .option("startingOffsets", "earliest") \
        .load()

# Split the stream on the status column
data1 = df.filter(f.col('status') == 'true')
data2 = df.filter(f.col('status') == 'false')

# Collapse the 'false' rows into one set of structs per id
data2 = data2.select(df.id, f.struct(df.col1, df.col2, df.col3).alias('value'))
data2 = data2.groupBy("id").agg(f.collect_set('value').alias('history'))

# Write both DataFrames to the console
query1 = data1.writeStream.format("console").option("truncate", "false") \
    .trigger(processingTime='15 seconds').start()
query2 = data2.writeStream.format("console").option("truncate", "false") \
    .trigger(processingTime='15 seconds').start()

spark.streams.awaitAnyTermination()

The error I get is the following:

Traceback (most recent call last):
  File "/home/adarshbajpai/Downloads/spark-2.4.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
  File "/home/adarshbajpai/Downloads/spark-2.4.4-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o186.start.
: org.apache.spark.sql.AnalysisException: Append output mode not supported when there are streaming aggregations on streaming DataFrames/DataSets without watermark;;
Aggregate [customerid#93L], [customerid#93L, collect_set(hist_value#278, 0, 0) AS histleadstatus#284]
+- Project [customerid#93L, named_struct(islaststatus, islaststatus#46, statusid, statusid#43, status, statusname#187, createdOn, statusCreatedDate#59, updatedOn, statusUpdatedDate#60) AS hist_value#278]
   +- Filter (islaststatus#46 = 0)

I don't think I should need a watermark, since my stream has no lateness or delays.
Please advise! Thanks in advance.
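For context, the exception occurs because data2 contains a streaming aggregation and the console sink defaults to append output mode, which Spark only allows for aggregations when an event-time watermark is defined. A minimal sketch of the two usual remedies, reusing the simplified DataFrames from the code above (the timestamp column 'createdOn' and the window sizes are assumptions for illustration; this is not runnable without a Kafka broker):

```python
# Option 1: switch the aggregated query to a mode that allows aggregation.
# "complete" re-emits the whole result table each trigger; "update" emits
# only the rows whose aggregates changed.
query2 = data2.writeStream \
    .outputMode("complete") \
    .format("console") \
    .option("truncate", "false") \
    .trigger(processingTime='15 seconds') \
    .start()

# Option 2: keep append mode by declaring a watermark BEFORE aggregating.
# Append mode also requires the grouping to involve the event-time column,
# typically via a window; 'createdOn' is a hypothetical timestamp column.
data2_wm = df.filter(f.col('status') == 'false') \
    .select(df.id, df.createdOn,
            f.struct(df.col1, df.col2, df.col3).alias('value')) \
    .withWatermark('createdOn', '10 minutes') \
    .groupBy('id', f.window('createdOn', '15 minutes')) \
    .agg(f.collect_set('value').alias('history'))
```

With option 1, note that "complete" mode keeps the entire aggregate state in memory, so "update" is usually preferable for unbounded key spaces.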

