Kafka HDFS 2 sink connector cannot write to HDFS

2ic8powd · posted 2021-05-27 in Hadoop

Here is the JSON I POST to create my Kafka connector:

curl -s -k -X POST http://cpnode.local.lan:8083/connectors -H "Content-Type: application/json" --data '{
  "name": "jdbc-Hdfs2-Sink-Connector",
  "config": {
    "tasks.max": "1",
    "batch.size": "1000",
    "batch.max.rows": "1000",
    "hdfs.poll.interval.ms": "500",
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "hdfs.url": "hdfs://hadoopnode.local.lan:9000",
    "topics": "BookList2",
    "flush.size": "1",
    "confluent.topic.bootstrap.servers": "cpnode.local.lan:9092",
    "confluent.topic.replication.factor": "1",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schemas.enable": "true",
    "value.converter.schema.registry.url": "http://cpnode.local.lan:8081",
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "key.converter.schemas.enable": "true",
    "key.converter.schema.registry.url": "http://cpnode.local.lan:8081"
  }
}' | jq '.'
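
For context, the status output below comes from the standard Connect REST status endpoint (same worker host and port as above):

curl -s http://cpnode.local.lan:8083/connectors/jdbc-Hdfs2-Sink-Connector/status | jq '.'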

When I try to use this connector, the task fails with the following error:

{
  "name": "jdbc-Hdfs2-Sink-Connector",
  "connector": {
    "state": "RUNNING",
    "worker_id": "192.168.1.153:8083"
  },
  "tasks": [
    {
      "id": 0,
      "state": "FAILED",
      "worker_id": "192.168.1.153:8083",
      "trace": "org.apache.kafka.connect.errors.ConnectException: org.apache.hadoop.security.AccessControlException: Permission denied: user=cp-user, access=WRITE, inode=\"/\":hadoop:supergroup:drwxr-xr-x

I tried export HADOOP_USER_NAME=hdfs, and also this in the Hadoop hdfs-site.xml configuration:

<property>
   <name>dfs.permissions</name>
   <value>false</value>
</property>
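
(For what it's worth, on Hadoop 2.x the canonical name of this flag is dfs.permissions.enabled; dfs.permissions is the deprecated alias. Either spelling disables permission checking cluster-wide:)

<property>
   <name>dfs.permissions.enabled</name>
   <value>false</value>
</property>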

But I want a solution that does not compromise security.
cp-user is the name of my Confluent Platform user... Confluent and HDFS run on separate VMs.
Thanks in advance...

uyhoqukh 1#

Your user (user=cp-user) is attempting a write (access=WRITE) to the HDFS root directory (inode="/"), whose user/group ownership is hadoop:supergroup with mode drwxr-xr-x, so only the hadoop user can write there.
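
You can confirm that ownership from the HDFS CLI on the Hadoop VM (standard HDFS shell command):

hdfs dfs -ls /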
Possible solutions (not mutually exclusive; see the sketch after this list):

- Change cp-user to hadoop (I assume you are running in Docker containers? If so, see the Docker user directive. Otherwise, export HADOOP_USER_NAME=hadoop).
- Create a cp-user Unix account on the NameNode and all DataNodes of the Hadoop cluster.
- Use Kerberos.
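
A minimal sketch of the first two options (the /topics and /logs paths are the HDFS sink's default topics.dir and logs.dir locations, assumed here; adjust if you overrode them):

# Option 1: run the Connect worker as the HDFS superuser (set before starting Connect)
export HADOOP_USER_NAME=hadoop

# Option 2: create the account on the Hadoop nodes, then give it writable directories.
# "hadoop" is assumed to be the HDFS superuser here, since it owns / in your listing.
sudo useradd cp-user                                   # repeat on the NameNode and every DataNode
sudo -u hadoop hdfs dfs -mkdir -p /topics /logs
sudo -u hadoop hdfs dfs -chown -R cp-user /topics /logs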
