This is probably obvious, but I can't figure it out.
Every time I start my source connector, it fails to read the offsets stored in the offset file, with the following error:
21:05:01:519 | ERROR | pool-1-thread-1 | o.a.k.c.s.OffsetStorageReaderImpl | CRITICAL: Failed to deserialize offset data when getting offsets for task with namespace zohocrm-source-calls. No value for this data will be returned, which may break the task or cause it to skip some data. This could either be due to an error in the connector implementation or incompatible schema.
org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.
    at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:309)
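For context on what the exception is complaining about: when `schemas.enable=true`, the JsonConverter expects every serialized record to be a two-field envelope containing exactly `"schema"` and `"payload"`; plain JSON without that wrapper triggers this `DataException`. A hand-written illustration of the envelope shape (not taken from the actual offset file):

```json
{
  "schema": { "type": "int64", "optional": false },
  "payload": 1234
}
```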
Here are my standalone worker config values:
access.control.allow.methods =
access.control.allow.origin =
bootstrap.servers = [localhost:9092]
internal.key.converter = class org.apache.kafka.connect.json.JsonConverter
internal.value.converter = class org.apache.kafka.connect.json.JsonConverter
key.converter = class io.confluent.connect.avro.AvroConverter
offset.flush.interval.ms = 60000
offset.flush.timeout.ms = 5000
offset.storage.file.filename = maxoptra-data.offset
rest.advertised.host.name = null
rest.advertised.port = null
rest.host.name = null
rest.port = 8083
task.shutdown.graceful.timeout.ms = 5000
value.converter = class io.confluent.connect.avro.AvroConverter
And here is my connector configuration:
connector.class = com.maxoptra.data.zoho.connect.ZohoCrmSourceConnector
key.converter = null
name = zohocrm-source-calls
tasks.max = 1
transforms = null
value.converter = null
Please advise.
Thank you.
2 Answers
Answer 1:
The Kafka Connect Avro converter integrates with Schema Registry to convert Kafka Connect data to and from the Avro format. Since you are using the AvroConverter for both the key and the value, you must set the Schema Registry URL for each of them.
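As a sketch of what that looks like in the standalone worker properties, assuming a Schema Registry running on the default port on localhost (replace the URL with your actual deployment):

```properties
# Hypothetical Schema Registry URL - point this at your own instance
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
```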
Answer 2:
Set
key.converter.schemas.enable=true
and value.converter.schemas.enable=true
. This will make the JsonConverter try to interpret your schema instead of deserializing with a default schema that doesn't match the data here.
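As this answer suggests, those two properties would go into the standalone worker file (the one whose values are logged above), for example:

```properties
# Suggested additions to the standalone worker properties file
key.converter.schemas.enable=true
value.converter.schemas.enable=true
```

One caveat worth noting: the stack trace shows the failure happening while reading the offset file, and that path goes through the *internal* converters, so the analogous `internal.key.converter.schemas.enable` / `internal.value.converter.schemas.enable` settings (since deprecated in newer Kafka versions) may be the ones that actually matter here.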