My goal is to continuously ingest incoming parquet files into Delta Lake, query them, and push the results to a REST API. All of the files live in an S3 bucket.
// Listen for new parquet files landing in the bucket.
// A file-based streaming source needs the schema up front;
// here it is inferred once from the existing files with a batch read.
val schema = spark.read.parquet("s3a://myBucket/folder").schema
val df = spark.readStream.schema(schema).parquet("s3a://myBucket/folder")

// Write the incoming changes to the Delta table.
df.writeStream
  .format("delta")
  .option("checkpointLocation", "s3a://myBucket-processed/checkpoint")
  .start("s3a://myBucket-processed/")
  .awaitTermination() // blocking, so this call lives in another thread
// This is a bad example: show() is a batch action and fails on a streaming DataFrame.
val query = df.select(convertedColumnNames)
query.show()
// Another bad example:
spark.readStream
  .format("delta")
  .load("s3a://myBucket-processed/")
  .select(convertedColumnNames)
  .show()
// Fails with: org.apache.spark.sql.AnalysisException:
//   Queries with streaming sources must be executed with writeStream.start();;
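The exception makes sense: show() is a batch action, and a streaming DataFrame can only be materialized through writeStream. For quick inspection during development, the built-in console sink at least prints each micro-batch to stdout, though it obviously does not get the filtered data into a REST API. A debug-only sketch:

// Debug only: print each micro-batch of the Delta stream to stdout.
spark.readStream
  .format("delta")
  .load("s3a://myBucket-processed/")
  .writeStream
  .format("console")
  .option("truncate", "false")
  .start()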
How do I get the filtered data out of Delta Lake?
1 Answer
Have you tried foreachBatch?
It brings the batch-style APIs into streaming, and it also gives you some control over the number of files written to the Delta table.
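A minimal sketch of that idea, reading back the Delta table written above. Assumptions: postToRestApi() is a hypothetical helper for your REST call (not a Spark API), convertedColumnNames is assumed to be a Seq[Column], and the checkpoint path is a made-up example that just has to differ from the ingest stream's checkpoint.

import org.apache.spark.sql.DataFrame

// foreachBatch hands every micro-batch to this function as a plain batch
// DataFrame, so batch-only operations (select, filter, collect, show) all work.
def processBatch(batch: DataFrame, batchId: Long): Unit = {
  val filtered = batch.select(convertedColumnNames: _*)

  // Push the filtered rows to the REST API.
  // collect() pulls the batch to the driver, which is fine for small batches.
  // postToRestApi() is a placeholder for your own HTTP client code.
  filtered.collect().foreach(row => postToRestApi(row))

  // If you also persist results, coalesce() gives some control over file count:
  // filtered.coalesce(4).write.format("delta").mode("append").save("s3a://myBucket-results/")
}

spark.readStream
  .format("delta")
  .load("s3a://myBucket-processed/")
  .writeStream
  // Hypothetical checkpoint path; must differ from the ingest stream's checkpoint.
  .option("checkpointLocation", "s3a://myBucket-processed/query-checkpoint")
  .foreachBatch(processBatch _)
  .start()
  .awaitTermination()

Note that foreachBatch gives at-least-once guarantees for side effects: a batch can be replayed after a failure, so the REST push should be made idempotent, e.g. by deduplicating on batchId downstream.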