Confluent kafka-rest error "Server died unexpectedly: At least one of bootstrap.servers or zookeeper.connect needs to be configured"

rmbxnbpk · published 2021-06-07 · in Kafka

I manage Kafka through the Confluent Platform. I have followed the steps documented here: https://docs.confluent.io/2.0.0/quickstart.html#quickstart
Start ZooKeeper,

$ sudo ./bin/zookeeper-server-start ./etc/kafka/zookeeper.properties

Start Kafka,

$ sudo ./bin/kafka-server-start ./etc/kafka/server.properties

Start the Schema Registry,

$ sudo ./bin/schema-registry-start ./etc/schema-registry/schema-registry.properties

Everything works fine up to this point.
Next I want to run the REST proxy, as described here: https://docs.confluent.io/2.0.0/kafka-rest/docs/intro.html#quickstart

$ sudo bin/kafka-rest-start

But this command fails with the following error: ERROR Server died unexpectedly: (io.confluent.kafkarest.KafkaRestMain:63) java.lang.RuntimeException: Atleast one of bootstrap.servers or zookeeper.connect needs to be configured.
I don't understand why I am getting this error. Can you help me resolve it? The full output is below:

ESDGH-C02K648W:confluent-4.0.0 user$ sudo bin/kafka-rest-start
[2018-01-09 14:44:06,922] INFO KafkaRestConfig values: 
    metric.reporters = []
    client.security.protocol = PLAINTEXT
    bootstrap.servers = 
    response.mediatype.default = application/vnd.kafka.v1+json
    authentication.realm = 
    ssl.keystore.type = JKS
    metrics.jmx.prefix = kafka.rest
    ssl.truststore.password = [hidden]
    id = 
    host.name = 
    consumer.request.max.bytes = 67108864
    client.ssl.truststore.location = 
    ssl.endpoint.identification.algorithm = 
    compression.enable = false
    client.zk.session.timeout.ms = 30000
    client.ssl.keystore.type = JKS
    client.ssl.cipher.suites = 
    client.ssl.keymanager.algorithm = SunX509
    client.ssl.protocol = TLS
    response.mediatype.preferred = [application/vnd.kafka.v1+json, application/vnd.kafka+json, application/json]
    client.sasl.kerberos.ticket.renew.window.factor = 0.8
    ssl.truststore.type = JKS
    consumer.iterator.backoff.ms = 50
    access.control.allow.origin = 
    ssl.truststore.location = 
    ssl.keystore.password = [hidden]
    zookeeper.connect = 
    port = 8082
    client.ssl.keystore.password = [hidden]
    client.ssl.provider = 
    client.init.timeout.ms = 60000
    simpleconsumer.pool.size.max = 25
    simpleconsumer.pool.timeout.ms = 1000
    ssl.client.auth = false
    consumer.iterator.timeout.ms = 1
    client.sasl.kerberos.service.name = 
    ssl.trustmanager.algorithm = 
    authentication.method = NONE
    schema.registry.url = http://localhost:8081
    client.ssl.truststore.type = JKS
    request.logger.name = io.confluent.rest-utils.requests
    ssl.key.password = [hidden]
    client.sasl.kerberos.ticket.renew.jitter = 0.05
    client.ssl.endpoint.identification.algorithm = 
    authentication.roles = [*]
    client.ssl.trustmanager.algorithm = PKIX
    metrics.num.samples = 2
    consumer.threads = 1
    ssl.protocol = TLS
    client.ssl.keystore.location = 
    debug = false
    listeners = []
    ssl.provider = 
    ssl.enabled.protocols = []
    client.sasl.kerberos.min.time.before.relogin = 60000
    producer.threads = 5
    shutdown.graceful.ms = 1000
    ssl.keystore.location = 
    consumer.request.timeout.ms = 1000
    ssl.cipher.suites = []
    client.timeout.ms = 500
    consumer.instance.timeout.ms = 300000
    client.sasl.kerberos.kinit.cmd = /usr/bin/kinit
    client.ssl.key.password = [hidden]
    access.control.allow.methods = 
    ssl.keymanager.algorithm = 
    metrics.sample.window.ms = 30000
    client.ssl.truststore.password = [hidden]
    client.ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1
    kafka.rest.resource.extension.class = 
    client.sasl.mechanism = GSSAPI
 (io.confluent.kafkarest.KafkaRestConfig:175)
[2018-01-09 14:44:06,954] INFO Logging initialized @402ms (org.eclipse.jetty.util.log:186)
[2018-01-09 14:44:07,154] ERROR Server died unexpectedly:  (io.confluent.kafkarest.KafkaRestMain:63)
java.lang.RuntimeException: Atleast one of bootstrap.servers or zookeeper.connect needs to be configured
    at io.confluent.kafkarest.KafkaRestApplication.setupInjectedResources(KafkaRestApplication.java:104)
    at io.confluent.kafkarest.KafkaRestApplication.setupResources(KafkaRestApplication.java:83)
    at io.confluent.kafkarest.KafkaRestApplication.setupResources(KafkaRestApplication.java:45)
    at io.confluent.rest.Application.createServer(Application.java:157)
    at io.confluent.rest.Application.start(Application.java:495)
    at io.confluent.kafkarest.KafkaRestMain.main(KafkaRestMain.java:56)
ESDGH-C02K648W:confluent-4.0.0 user$

klsxnrf11#

The kafka-rest-start script takes a properties file as an argument. You have to pass ./etc/kafka-rest/kafka-rest.properties on the command line:

bin/kafka-rest-start ./etc/kafka-rest/kafka-rest.properties
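For reference, a minimal sketch of what that properties file needs to contain, assuming Kafka, ZooKeeper and the Schema Registry are all running locally on their default ports (adjust hosts and ports to your setup; the file shipped with Confluent already contains commented-out versions of these keys):

# etc/kafka-rest/kafka-rest.properties (minimal sketch, localhost defaults assumed)
id=kafka-rest-test-server
schema.registry.url=http://localhost:8081
# either of the next two settings satisfies the check in the error message
zookeeper.connect=localhost:2181
bootstrap.servers=PLAINTEXT://localhost:9092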


u1ehiz5o2#

The kafka-rest-start script takes a properties file as an argument. This is documented further down in the quickstart you linked.
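Once the proxy starts with a valid properties file, a quick sanity check is to hit its default port 8082 (assuming a local setup):

$ curl http://localhost:8082/topics

This should return a JSON list of the topics in the cluster instead of the startup failure above.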
