HDFS: Flume agent always stops by itself, with an "agent shutdown hook" in the monitoring log

Asked by 3okqufwl on 2021-06-04 in Flume

I am running a Flume agent on Ubuntu Linux. It works fine at first, but after running for about a day it always stops on its own. Here is the Flume configuration:

    nginx.channels=ch-spooling ch-tail
    nginx.sources=spooling-source tail-source
    nginx.sinks=hdfs-spooling kafka-tail

    nginx.channels.ch-spooling.type=file
    nginx.channels.ch-spooling.checkpointDir=/home/hadoop/flume/file-channel/ch-spooling/checkpoint
    nginx.channels.ch-spooling.dataDirs=/home/hadoop/flume/file-channel/ch-spooling/data
    # note: capacity was declared twice (1000, then 100000); only the last value takes effect
    nginx.channels.ch-spooling.transactionCapacity=100
    nginx.channels.ch-spooling.capacity=100000

    nginx.channels.ch-tail.type=file
    nginx.channels.ch-tail.checkpointDir=/home/hadoop/flume/file-channel/ch-tail/checkpoint
    nginx.channels.ch-tail.dataDirs=/home/hadoop/flume/file-channel/ch-tail/data
    # note: capacity was declared twice (1000, then 100000); only the last value takes effect
    nginx.channels.ch-tail.transactionCapacity=100
    nginx.channels.ch-tail.capacity=100000

    nginx.sources.spooling-source.type=spooldir
    nginx.sources.spooling-source.channels=ch-spooling
    nginx.sources.spooling-source.spoolDir=/usr/local/nginx/logs/flume_logs
    nginx.sources.spooling-source.fileHeader=true
    nginx.sources.spooling-source.fileHeaderKey=file
    nginx.sources.spooling-source.basenameHeader=true
    nginx.sources.spooling-source.basenameHeaderKey=basename
    nginx.sources.spooling-source.deletePolicy=never

    nginx.sources.spooling-source.consumeOrder=oldest
    nginx.sources.spooling-source.recursiveDirectorySearch=false
    nginx.sources.spooling-source.batchSize=100
    nginx.sources.spooling-source.inputCharset=UTF-8

    nginx.sources.spooling-source.decodeErrorPolicy=IGNORE
    nginx.sources.spooling-source.selector.type=replicating
    nginx.sources.spooling-source.interceptors=i1 i2
    nginx.sources.spooling-source.interceptors.i1.type=timestamp
    nginx.sources.spooling-source.interceptors.i2.type=host
    nginx.sources.spooling-source.interceptors.i2.useIP=true
    nginx.sources.spooling-source.interceptors.i2.hostHeader=host

    nginx.sources.tail-source.type=TAILDIR
    nginx.sources.tail-source.channels=ch-tail
    nginx.sources.tail-source.filegroups=hnrapi hnrapierror
    nginx.sources.tail-source.filegroups.hnrapi=/usr/local/nginx/logs/hnr-api.access.log
    nginx.sources.tail-source.filegroups.hnrapierror=/usr/local/nginx/logs/error.log
    nginx.sources.tail-source.positionFile=/home/hadoop/flume/file-channel/ch-tail/taildir_position.json
    nginx.sources.tail-source.headers.hnrapi.topic=hnrapi
    nginx.sources.tail-source.headers.hnrapierror.topic=hnrapierror
    nginx.sources.tail-source.skipToEnd=true
    nginx.sources.tail-source.interceptors=i1 i2
    nginx.sources.tail-source.interceptors.i1.type=timestamp
    nginx.sources.tail-source.interceptors.i2.type=host
    nginx.sources.tail-source.interceptors.i2.useIP=true
    nginx.sources.tail-source.interceptors.i2.hostHeader=host

    nginx.sinks.hdfs-spooling.channel=ch-spooling
    nginx.sinks.hdfs-spooling.type=hdfs
    nginx.sinks.hdfs-spooling.hdfs.fileType=DataStream
    nginx.sinks.hdfs-spooling.hdfs.writeFormat=Text

    nginx.sinks.hdfs-spooling.hdfs.path=hdfs://namenode1:9000/flume/nginx/%Y-%m-%d

    nginx.sinks.hdfs-spooling.hdfs.filePrefix=%{basename}.[%{host}]
    nginx.sinks.hdfs-spooling.hdfs.fileSuffix=
    nginx.sinks.hdfs-spooling.hdfs.inUseSuffix=.tmp
    nginx.sinks.hdfs-spooling.hdfs.rollInterval=0
    nginx.sinks.hdfs-spooling.hdfs.rollSize=1073741824
    nginx.sinks.hdfs-spooling.hdfs.rollCount=0
    nginx.sinks.hdfs-spooling.hdfs.idleTimeout=60

    nginx.sinks.kafka-tail.channel=ch-tail
    nginx.sinks.kafka-tail.type=org.apache.flume.sink.kafka.KafkaSink
    nginx.sinks.kafka-tail.kafka.bootstrap.servers=192.47.180.63:9192,192.27.125.123:9192,192.27.124.96:9192
    nginx.sinks.kafka-tail.flumeBatchSize=32
    nginx.sinks.kafka-tail.kafka.producer.acks=1
    nginx.sinks.kafka-tail.useFlumeEventFormat=false
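
For context, a Flume agent with this configuration would normally be launched through the stock `flume-ng` CLI. The sketch below is a plausible invocation, not my exact command: the agent name `nginx` matches the property prefix above, and the config path is the one that shows up in the monitoring log, but the `--conf` directory is an assumption.

```shell
# Sketch of a launch command for this agent (assumed, not the exact invocation).
# --name must be "nginx" to match the property prefix used throughout the config.
# The command is built into a variable and echoed here rather than executed.
FLUME_CMD="flume-ng agent \
  --conf /home/hadoop/flume/conf \
  --conf-file /home/hadoop/flume/conf/flume-conf.properties \
  --name nginx"
echo "$FLUME_CMD"
```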

I collect the nginx logs with a TAILDIR source and a spooldir source: the TAILDIR source goes to Kafka, and the spooldir source goes to HDFS.
Here is part of the Flume monitoring log:
2017-03-06 18:17:54,302 (kafka-producer-network-thread | producer-1) [DEBUG - org.apache.flume.sink.kafka.SinkCallback.onCompletion(KafkaSink.java:456)] Elapsed time for send: 13
2017-03-06 18:18:14,230 (Log-BackgroundWorker-ch-spooling) [DEBUG - org.apache.flume.channel.file.FlumeEventQueue.checkpoint(FlumeEventQueue.java:138)] Checkpoint not required
2017-03-06 18:18:14,408 (Log-BackgroundWorker-ch-tail) [INFO - org.apache.flume.channel.file.EventQueueBackingStoreFile.beginCheckpoint(EventQueueBackingStoreFile.java:227)] Start checkpoint for /home/hadoop/flume/file-channel/ch-tail/checkpoint/checkpoint, elements to sync = 8
2017-03-06 18:18:14,415 (Log-BackgroundWorker-ch-tail) [INFO - org.apache.flume.channel.file.EventQueueBackingStoreFile.checkpoint(EventQueueBackingStoreFile.java:252)] Updating checkpoint metadata: logWriteOrderID: 1488782535982, queueSize: 0, queueHead: 1642
2017-03-06 18:18:14,420 (Log-BackgroundWorker-ch-tail) [INFO - org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:1052)] Updated checkpoint for file: /home/hadoop/flume/file-channel/ch-tail/data/log-16 position: 391719 logWriteOrderID: 1488782535982
2017-03-06 18:18:14,420 (Log-BackgroundWorker-ch-tail) [DEBUG - org.apache.flume.channel.file.Log.removeOldLogs(Log.java:1115)] Files currently in use: [16]
2017-03-06 18:18:14,484 (conf-file-poller-0) [DEBUG - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:127)] Checking file:/home/hadoop/flume/conf/flume-conf.properties for changes
2017-03-06 18:18:44,230 (Log-BackgroundWorker-ch-spooling) [DEBUG - org.apache.flume.channel.file.FlumeEventQueue.checkpoint(FlumeEventQueue.java:138)] Checkpoint not required
2017-03-06 18:18:44,420 (Log-BackgroundWorker-ch-tail) [DEBUG - org.apache.flume.channel.file.FlumeEventQueue.checkpoint(FlumeEventQueue.java:138)] Checkpoint not required
2017-03-06 18:18:44,484 (conf-file-poller-0) [DEBUG - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:127)] Checking file:/home/hadoop/flume/conf/flume-conf.properties for changes
2017-03-06 18:19:14,230 (Log-BackgroundWorker-ch-spooling) [DEBUG - org.apache.flume.channel.file.FlumeEventQueue.checkpoint(FlumeEventQueue.java:138)] Checkpoint not required
2017-03-06 18:19:14,421 (Log-BackgroundWorker-ch-tail) [DEBUG - org.apache.flume.channel.file.FlumeEventQueue.checkpoint(FlumeEventQueue.java:138)] Checkpoint not required
2017-03-06 18:19:14,485 (conf-file-poller-0) [DEBUG - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:127)] Checking file:/home/hadoop/flume/conf/flume-conf.properties for changes
2017-03-06 18:19:44,231 (Log-BackgroundWorker-ch-spooling) [DEBUG - org.apache.flume.channel.file.FlumeEventQueue.checkpoint(FlumeEventQueue.java:138)] Checkpoint not required
2017-03-06 18:19:44,421 (Log-BackgroundWorker-ch-tail) [DEBUG - org.apache.flume.channel.file.FlumeEventQueue.checkpoint(FlumeEventQueue.java:138)] Checkpoint not required
2017-03-06 18:19:44,485 (conf-file-poller-0) [DEBUG - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:127)] Checking file:/home/hadoop/flume/conf/flume-conf.properties for changes
2017-03-06 18:19:54,577 (PollableSourceRunner-TaildirSource-tail-source) [INFO - org.apache.flume.source.taildir.TaildirSource.closeTailFiles(TaildirSource.java:288)] Closed file: /usr/local/nginx/logs/hnr-api.access.log, inode: 2237741, pos: 3183
2017-03-06 18:19:59,307 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.flume.sink.kafka.KafkaSink.process(KafkaSink.java:187)] event #0
2017-03-06 18:19:59,310 (kafka-producer-network-thread | producer-1) [DEBUG - org.apache.flume.sink.kafka.SinkCallback.onCompletion(KafkaSink.java:455)] Acked message partition:1 ofset:122
2017-03-06 18:19:59,310 (kafka-producer-network-thread | producer-1) [DEBUG - org.apache.flume.sink.kafka.SinkCallback.onCompletion(KafkaSink.java:456)] Elapsed time for send: 3
2017-03-06 18:20:14,231 (Log-BackgroundWorker-ch-spooling) [DEBUG - org.apache.flume.channel.file.FlumeEventQueue.checkpoint(FlumeEventQueue.java:138)] Checkpoint not required
2017-03-06 18:20:14,421 (Log-BackgroundWorker-ch-tail) [INFO - org.apache.flume.channel.file.EventQueueBackingStoreFile.beginCheckpoint(EventQueueBackingStoreFile.java:227)] Start checkpoint for /home/hadoop/flume/file-channel/ch-tail/checkpoint/checkpoint, elements to sync = 1
2017-03-06 18:20:14,427 (Log-BackgroundWorker-ch-tail) [INFO - org.apache.flume.channel.file.EventQueueBackingStoreFile.checkpoint(EventQueueBackingStoreFile.java:252)] Updating checkpoint metadata: logWriteOrderID: 1488782535987, queueSize: 0, queueHead: 1642
2017-03-06 18:20:14,432 (Log-BackgroundWorker-ch-tail) [INFO - org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:1052)] Updated checkpoint for file: /home/hadoop/flume/file-channel/ch-tail/data/log-16 position: 392302 logWriteOrderID: 1488782535987
2017-03-06 18:20:14,432 (Log-BackgroundWorker-ch-tail) [DEBUG - org.apache.flume.channel.file.Log.removeOldLogs(Log.java:1115)] Files currently in use: [16]
2017-03-06 18:20:14,485 (conf-file-poller-0) [DEBUG - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:127)] Checking file:/home/hadoop/flume/conf/flume-conf.properties for changes
2017-03-06 18:20:44,231 (Log-BackgroundWorker-ch-spooling) [DEBUG - org.apache.flume.channel.file.FlumeEventQueue.checkpoint(FlumeEventQueue.java:138)] Checkpoint not required
2017-03-06 18:20:44,433 (Log-BackgroundWorker-ch-tail) [DEBUG - org.apache.flume.channel.file.FlumeEventQueue.checkpoint(FlumeEventQueue.java:138)] Checkpoint not required
2017-03-06 18:20:44,485 (conf-file-poller-0) [DEBUG - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:127)] Checking file:/home/hadoop/flume/conf/flume-conf.properties for changes
2017-03-06 18:20:46,224 (agent-shutdown-hook) [INFO - org.apache.flume.lifecycle.LifecycleSupervisor.stop(LifecycleSupervisor.java:78)] Stopping lifecycle supervisor 13
2017-03-06 18:20:46,227 (agent-shutdown-hook) [INFO - org.apache.flume.channel.file.FileChannel.stop(FileChannel.java:324)] Stopping FileChannel ch-tail { dataDirs: [/home/hadoop/flume/file-channel/ch-tail/data] }...
2017-03-06 18:20:46,227 (agent-shutdown-hook) [INFO - org.apache.flume.channel.file.EventQueueBackingStoreFile.beginCheckpoint(EventQueueBackingStoreFile.java:227)] Start checkpoint for /home/hadoop/flume/file-channel/ch-tail/checkpoint/checkpoint, elements to sync = 0
2017-03-06 18:20:46,233 (agent-shutdown-hook) [INFO - org.apache.flume.channel.file.EventQueueBackingStoreFile.checkpoint(EventQueueBackingStoreFile.java:252)] Updating checkpoint metadata: logWriteOrderID: 1488782535988, queueSize: 0, queueHead: 1642
2017-03-06 18:20:46,238 (agent-shutdown-hook) [INFO - org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:1052)] Updated checkpoint for file: /home/hadoop/flume/file-channel/ch-tail/data/log-16 position: 392302 logWriteOrderID: 1488782535988
2017-03-06 18:20:46,238 (agent-shutdown-hook) [INFO - org.apache.flume.channel.file.Log.shutdownWorker(Log.java:868)] Attempting to shutdown background worker.
2017-03-06 18:20:46,238 (agent-shutdown-hook) [INFO - org.apache.flume.channel.file.LogFile$Writer.close(LogFile.java:384)] Closing /home/hadoop/flume/file-channel/ch-tail/data/log-16
2017-03-06 18:20:46,238 (agent-shutdown-hook) [INFO - org.apache.flume.channel.file.LogFile$RandomReader.close(LogFile.java:520)] Closing RandomReader /home/hadoop/flume/file-channel/ch-tail/data/log-16
2017-03-06 18:20:46,243 (agent-shutdown-hook) [INFO - org.apache.flume.channel.file.LogFile$RandomReader.close(LogFile.java:520)] Closing RandomReader /home/hadoop/flume/file-channel/ch-tail/data/log-15
2017-03-06 18:20:46,249 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:149)] Component type: CHANNEL, name: ch-tail stopped
2017-03-06 18:20:46,249 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:155)] Shutdown Metric for type: CHANNEL, name: ch-tail. channel.start.time == 1488782534266
2017-03-06 18:20:46,249 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:161)] Shutdown Metric for type: CHANNEL, name: ch-tail. channel.stop.time == 1488795646249
2017-03-06 18:20:46,249 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:177)] Shutdown Metric for type: CHANNEL, name: ch-tail. channel.capacity == 100000
2017-03-06 18:20:46,249 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:177)] Shutdown Metric for type: CHANNEL, name: ch-tail. channel.current.size == 0
2017-03-06 18:20:46,249 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:177)] Shutdown Metric for type: CHANNEL, name: ch-tail. channel.event.put.attempt == 834
2017-03-06 18:20:46,249 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:177)] Shutdown Metric for type: CHANNEL, name: ch-tail. channel.event.put.success == 834
2017-03-06 18:20:46,249 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.Monit
