Kafka topic keeps rolling new log segments, no overrides set, log files are tiny

x3naxklr  posted 2021-06-07  in  Kafka

I have a Kafka topic that keeps rolling new log segments even though log.segment.bytes is set to 512 MB in the broker configuration. Most of the log files are only 5-10 KB on average.
If I describe the topic, I don't see any overrides. Even when I do create an override and set segment.bytes to some other value, the behavior stays the same.
I'm a bit confused by this. What should I check next?

root@utilitypod-985642408:/opt/kafka/bin# ./kafka-topics.sh --zookeeper 
zookeeper:2181 --describe --topic dev.com.redacted.redacted.services.redacted.priceStream.notification
Topic:dev.com.redacted.redacted.services.redacted.priceStream.notification   PartitionCount:3        ReplicationFactor:3     Configs:segment.bytes=536870912,segment.index.bytes=53687091,flush.messages=20000,flush.ms=600000
    Topic: dev.com.redacted.redacted.services.redacted.priceStream.notification  Partition: 0    Leader: 1       Replicas: 1,2,0 Isr: 2,1,0
    Topic: dev.com.redacted.redacted.services.redacted.priceStream.notification  Partition: 1    Leader: 2       Replicas: 2,0,1 Isr: 0,2,1
    Topic: dev.com.redacted.redacted.services.redacted.priceStream.notification  Partition: 2    Leader: 0       Replicas: 0,1,2 Isr: 1,0,2
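
For reference, a topic-level override like the one mentioned above would normally be applied with kafka-configs.sh. A sketch using the ZooKeeper-based syntax that matches this setup (the value shown is just the same 512 MiB from the describe output; verify the flags for your Kafka version):

```shell
# Sketch: add a topic-level segment.bytes override via kafka-configs.sh
# (ZooKeeper-based invocation, matching the --zookeeper usage above)
./kafka-configs.sh --zookeeper zookeeper:2181 --alter \
  --entity-type topics \
  --entity-name dev.com.redacted.redacted.services.redacted.priceStream.notification \
  --add-config segment.bytes=536870912
```

Note that an override only changes the roll threshold; as the answer below suggests, segments can also roll and be deleted for retention reasons regardless of segment.bytes.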

Here is my Kafka broker configuration (it runs in a k8s cluster, but that shouldn't matter):

log.dirs=/var/lib/kafka/data/topics
num.partitions=3
default.replication.factor=3
min.insync.replicas=2
auto.create.topics.enable=true
num.recovery.threads.per.data.dir=4

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.

# init#broker.id=#init#

# init#broker.rack=#init#

# listeners=PLAINTEXT://:9092

listeners=OUTSIDE://:9094,PLAINTEXT://:9092

# init#advertised.listeners=OUTSIDE://#init#,PLAINTEXT://:9092

listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL,OUTSIDE:PLAINTEXT
inter.broker.listener.name=PLAINTEXT
num.network.threads=2
num.io.threads=8
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
queued.max.requests=16
message.max.bytes=1000000
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=2
log.flush.interval.messages=20000
log.retention.hours=168
log.segment.bytes=536870912
log.flush.scheduler.interval.ms=2000
log.cleaner.enable=false
log.retention.check.interval.ms=60000
zookeeper.connect=zookeeper:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
delete.topic.enable=true

Answer 1 (u2nhd7ah):

Is it possible that the records you're sending carry timestamps in the past, beyond your log.retention.hours (or .ms, etc.) setting?
If so, your records are deleted almost immediately after arrival. Segments will keep rolling, exactly as you describe. The last offset is retained, but it will equal the end offset, which means the log is empty.
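
One way to check this is to dump a segment and compare its record timestamps against the retention window. A sketch, assuming Kafka 2.x's kafka-dump-log.sh is available and using a hypothetical segment path and CreateTime value; the arithmetic mirrors log.retention.hours=168 from the broker config above:

```shell
# Inspect record timestamps in a segment (path is hypothetical; adjust to your log.dirs):
#   ./kafka-dump-log.sh --print-data-log \
#     --files /var/lib/kafka/data/topics/<topic>-0/00000000000000000000.log
# Each batch line in the dump includes a CreateTime in epoch milliseconds.

# Then check whether such a timestamp falls outside log.retention.hours=168:
retention_ms=$((168 * 60 * 60 * 1000))   # 7 days, matching the broker config
record_ts_ms=1622246400000               # hypothetical CreateTime taken from the dump
now_ms=$(($(date +%s) * 1000))
if [ $((now_ms - record_ts_ms)) -gt "$retention_ms" ]; then
  echo "record is past retention; its segment is eligible for deletion once it rolls"
fi
```

If the dump shows CreateTime values far in the past (for example, producers setting record timestamps explicitly), the broker's time-based retention check will delete rolled segments almost immediately, which matches the behavior described in the question.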
