Problem connecting to an SSL-secured Kafka

kq4fsx7k · posted 2021-06-08 in Kafka

I am trying to get a Kafka Spark Streaming consumer to work against an SSL-secured Kafka broker. The consumer runs on a Kerberized Hadoop cluster, so my consumer code uses the following configuration:

sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="(username)" password="(password)";
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
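
For context, a minimal sketch of how these properties are typically wired into a Spark Streaming direct stream (the broker address, group id, topic name, and app name below are placeholders, not from the original post):

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}

    val conf = new SparkConf().setAppName("ssl-kafka-consumer")  // placeholder app name
    val ssc  = new StreamingContext(conf, Seconds(10))

    // The three properties from the question, passed straight through to the Kafka consumer.
    val kafkaParams = Map[String, Object](
      "bootstrap.servers"  -> "broker-1:9093",                   // placeholder broker
      "key.deserializer"   -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id"           -> "my-consumer-group",               // placeholder group
      "security.protocol"  -> "SASL_SSL",
      "sasl.mechanism"     -> "PLAIN",
      "sasl.jaas.config"   ->
        """org.apache.kafka.common.security.plain.PlainLoginModule required username="(username)" password="(password)";"""
    )

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Seq("my-topic"), kafkaParams)  // placeholder topic
    )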

The consumer fails with the following exception:

Caused by: java.lang.NullPointerException
    at org.apache.kafka.common.security.plain.PlainSaslServer$PlainSaslServerFactory.getMechanismNames(PlainSaslServer.java:163) ~[kafka-clients-0.10.0-kafka-2.1.0.jar:?]
    at org.apache.hadoop.security.SaslRpcServer$FastSaslServerFactory.<init>(SaslRpcServer.java:381) ~[hadoop-common-2.6.0-cdh5.12.1.jar:?]
    at org.apache.hadoop.security.SaslRpcServer.init(SaslRpcServer.java:186) ~[hadoop-common-2.6.0-cdh5.12.1.jar:?]
    at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:570) ~[hadoop-common-2.6.0-cdh5.12.1.jar:?]
    at org.apache.hadoop.hdfs.NameNodeProxies.createNNProxyWithClientProtocol(NameNodeProxies.java:418) ~[hadoop-hdfs-2.6.0-cdh5.12.1.jar:?]
    at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:314) ~[hadoop-hdfs-2.6.0-cdh5.12.1.jar:?]
    at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider$DefaultProxyFactory.createProxy(ConfiguredFailoverProxyProvider.java:68) ~[hadoop-hdfs-2.6.0-cdh5.12.1.jar:?]
    at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy(ConfiguredFailoverProxyProvider.java:152) ~[hadoop-hdfs-2.6.0-cdh5.12.1.jar:?]
    at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:75) ~[hadoop-common-2.6.0-cdh5.12.1.jar:?]
    at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:66) ~[hadoop-common-2.6.0-cdh5.12.1.jar:?]
    at org.apache.hadoop.io.retry.RetryProxy.create(RetryProxy.java:58) ~[hadoop-common-2.6.0-cdh5.12.1.jar:?]
    at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:181) ~[hadoop-hdfs-2.6.0-cdh5.12.1.jar:?]
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:730) ~[hadoop-hdfs-2.6.0-cdh5.12.1.jar:?]
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:673) ~[hadoop-hdfs-2.6.0-cdh5.12.1.jar:?]
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:155) ~[hadoop-hdfs-2.6.0-cdh5.12.1.jar:?]
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2816) ~[hadoop-common-2.6.0-cdh5.12.1.jar:?]
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:98) ~[hadoop-common-2.6.0-cdh5.12.1.jar:?]
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2853) ~[hadoop-common-2.6.0-cdh5.12.1.jar:?]
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2835) ~[hadoop-common-2.6.0-cdh5.12.1.jar:?]
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:387) ~[hadoop-common-2.6.0-cdh5.12.1.jar:?]
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296) ~[hadoop-common-2.6.0-cdh5.12.1.jar:?]
    at org.apache.spark.internal.io.SparkHadoopWriter.open(SparkHadoopWriter.scala:82) ~[spark-core_2.11-2.2.0.cloudera2.jar:2.2.0.cloudera2]
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1133) ~[spark-core_2.11-2.2.0.cloudera2.jar:2.2.0.cloudera2]
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1125) ~[spark-core_2.11-2.2.0.cloudera2.jar:2.2.0.cloudera2]
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) ~[spark-core_2.11-2.2.0.cloudera2.jar:2.2.0.cloudera2]
    at org.apache.spark.scheduler.Task.run(Task.scala:108) ~[spark-core_2.11-2.2.0.cloudera2.jar:2.2.0.cloudera2]
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338) ~[spark-core_2.11-2.2.0.cloudera2.jar:2.2.0.cloudera2]
    ... 3 more

From some initial analysis, I found that Hadoop Common's SaslRpcServer class initializes FastSaslServerFactory with a null properties map, and the PlainSaslServer class in the Kafka client then hits an NPE when it tries to read from that map (a condensed paraphrase of the two code paths is sketched below).
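
A condensed, hypothetical paraphrase of the failing interaction, based only on the stack trace above; the real code lives in kafka-clients 0.10.0 and hadoop-common, so treat this as a sketch rather than the library source:

    import javax.security.sasl.Sasl

    object NpeSketch {
      // Mirrors PlainSaslServerFactory.getMechanismNames in kafka-clients 0.10.0,
      // which dereferences the props map without a null check.
      def getMechanismNames(props: java.util.Map[String, _]): Array[String] = {
        val noPlainText = props.get(Sasl.POLICY_NOPLAINTEXT) // NPE when props is null
        if ("true" == noPlainText) Array.empty[String] else Array("PLAIN")
      }

      def main(args: Array[String]): Unit = {
        // Mirrors Hadoop's FastSaslServerFactory, which probes every registered
        // SaslServerFactory with a null properties map while building its cache.
        getMechanismNames(null) // reproduces the NullPointerException above
      }
    }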
Am I missing something? There aren't many posts online about this problem, so any pointers would be appreciated.

z3yyvxxp 1#

I did face the same problem. In my case, I had set the JAAS conf as a system property, so the file system authenticated using the JAAS conf instead of Kerberos. I later changed it to the sasl.jaas.config property instead. I also had to upgrade my kafka-clients version from 0.10.0.0 to 0.10.2.1, because the sasl.jaas.config property is only supported from 0.10.2.1 onward. A sketch of the change is below.
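
A hedged sketch of that change; the spark-submit flag and file path are illustrative, not from the original answer:

    // Before (illustrative): the JAAS file was supplied JVM-wide as a system property,
    // which also switched HDFS authentication from Kerberos to that JAAS conf:
    //   --conf "spark.driver.extraJavaOptions=-Djava.security.auth.login.config=/path/to/kafka_jaas.conf"

    // After: scope the login module to the Kafka client alone via sasl.jaas.config,
    // which kafka-clients supports from 0.10.2.1 onward.
    val kafkaParams = Map[String, Object](
      "security.protocol" -> "SASL_SSL",
      "sasl.mechanism"    -> "PLAIN",
      "sasl.jaas.config"  ->
        """org.apache.kafka.common.security.plain.PlainLoginModule required username="(username)" password="(password)";"""
    )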

9njqaruj 2#

I have solved this problem. It was caused by the kafka-clients version; you can upgrade kafka-clients to 0.11. The details are here:
Kafka 0.11.0 clients can talk to version 0.10.0 or newer brokers. So you can upgrade with confidence! A sketch of the corresponding dependency change follows.
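
A sketch of that dependency change, assuming an sbt build; the exact patch version and the exclusion of the transitive client are illustrative assumptions, not from the original answer:

    // build.sbt: pull in a 0.11 client, and keep the Spark Kafka integration from
    // dragging the old transitive kafka-clients back onto the classpath.
    libraryDependencies ++= Seq(
      ("org.apache.spark" %% "spark-streaming-kafka-0-10" % "2.2.0")
        .exclude("org.apache.kafka", "kafka-clients"),
      "org.apache.kafka" % "kafka-clients" % "0.11.0.3"
    )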
