Metric for the total batch processing time in Spark Structured Streaming

yrefmtwq · published 2021-05-24 · in Spark

In Spark Structured Streaming, Spark prints these metrics by default. According to the documentation, triggerExecution should be the total time spent executing the entire batch:

triggerExecution = batch_time = addBatch + getBatch + getEndOffset + queryPlanning + setOffsetRange + walCommit

In practice this does not hold: the total batch time is not exactly equal to triggerExecution. The total execution time of a batch I observe is closer to

batch_time = triggerExecution + addBatch

Metrics

"timestamp" : "2020-10-06T09:45:41.721Z",
  "batchId" : 12609,
  "numInputRows" : 199800,
  "inputRowsPerSecond" : 651.0624570275969,
  "processedRowsPerSecond" : 517.4758161639968,
  "durationMs" : {
    "addBatch" : 357041,
    "getBatch" : 0,
    "getEndOffset" : 1,
    "queryPlanning" : 29,
    "setOffsetRange" : 3012,
    "triggerExecution" : 386105,
    "walCommit" : 12949
  }

References
https://www.waitingforcode.com/apache-spark-structured-streaming/query-metrics-apache-spark-structured-streaming/read
https://jaceklaskowski.gitbooks.io/spark-structured-streaming/content/spark-sql-streaming-progressreporter.html
