Too many errors talking to a worker node (Trino)

voj3qocg · posted 7 months ago in Other

We hit this while running a single large query. Can we terminate such a query before this error occurs?

io.trino.operator.PageTransportTimeoutException: Encountered too many errors talking to a worker node. The node may have crashed or be under too much load. This is probably a transient issue, so please retry your query in a few minutes. (http://172.22.66.206:8889/v1/task/20230727_083615_00032_edi7s.0.0.0/results/0/0 - 30 failures, failure duration 302.86s, total failed request time 312.86s)

3-node m6g.16xlarge cluster (1 coordinator and 2 workers)

node-scheduler.include-coordinator=false
discovery.uri=http://ip-172-22-69-150.ec2.internal:8889
http-server.threads.max=500
sink.max-buffer-size=1GB
query.max-memory=3000GB
query.max-memory-per-node=60GB
query.max-history=40
query.min-expire-age=30m
query.client.timeout=30m
query.stage-count-warning-threshold=100
query.max-stage-count=150
http-server.http.port=8889
http-server.log.path=/var/log/trino/http-request.log
http-server.log.max-size=67108864B
http-server.log.max-history=5
log.max-size=268435456B
jmx.rmiregistry.port=9080
jmx.rmiserver.port=9081
node-scheduler.max-splits-per-node=200
experimental.query-max-spill-per-node=50GB
graceful-shutdown-timeout=3600s
task.concurrency=16
query.execution-policy=phased
experimental.max-spill-per-node=100GB
query.max-concurrent-queries=20
query.max-total-memory=5000GB
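Regarding the question of killing such queries before the transport timeout fires: Trino has a `query.max-run-time` config property (and a matching `query_max_run_time` session property) that aborts queries exceeding a wall-clock limit. A sketch, assuming a 30-minute cap is acceptable for this workload:

```properties
# config.properties (coordinator) — hypothetical 30-minute cap
query.max-run-time=30m
```

The same limit can be set per session with `SET SESSION query_max_run_time = '30m';` if only some queries should be capped.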

qyswt5oh1#

I have the following flags in my JVM config: -XX:+HeapDumpOnOutOfMemoryError -XX:OnOutOfMemoryError=kill -9 %p
So whenever an OOM occurs, the process dumps its heap to disk and is then killed. Disk utilization on the worker nodes became too high (>95%), which prevented the Trino process from starting again.
After looking around, I believe the OOM was caused by this issue: https://bugs.openjdk.org/browse/JDK-8293861
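Since the heap dumps were what filled the disk and blocked Trino from restarting, a pre-start check of disk utilization can catch this early. A minimal sketch (the 95% threshold mirrors the utilization observed above; the function names are ours, not part of Trino):

```python
import shutil

def disk_has_room(total_bytes, used_bytes, max_used_pct=95):
    """Return True if usage is strictly below max_used_pct percent.

    The 95% default mirrors the disk utilization that kept the
    Trino worker from starting after repeated heap dumps.
    """
    return used_bytes / total_bytes * 100 < max_used_pct

def check_path(path="/", max_used_pct=95):
    """Check a real mount point (e.g. where heap dumps land) before (re)starting Trino."""
    usage = shutil.disk_usage(path)
    return disk_has_room(usage.total, usage.used, max_used_pct)
```

Running something like this from a startup wrapper (and pruning old `.hprof` files when it fails) avoids the silent non-start described above.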

To work around it, I added the following JVM props: -XX:+UnlockDiagnosticVMOptions -XX:-G1UsePreventiveGC
This prevents the process from going OOM due to GC.
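Putting the flags from this answer together, the worker's `etc/jvm.config` would carry lines like the following (a sketch: `-XX:-G1UsePreventiveGC` is the diagnostic flag tied to JDK-8293861 and requires `-XX:+UnlockDiagnosticVMOptions`; we assume the launcher accepts the multi-word kill command on one line):

```properties
-XX:+HeapDumpOnOutOfMemoryError
-XX:OnOutOfMemoryError=kill -9 %p
-XX:+UnlockDiagnosticVMOptions
-XX:-G1UsePreventiveGC
```

Note that disabling preventive GC only sidesteps the JDK bug; the heap-dump-fills-disk failure mode still needs disk monitoring or a dedicated dump volume.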
