Can't establish access from a Docker container to a DataNode running on macOS

disbfnqx · posted 2021-07-13 in Hadoop

I'm new to HDFS, and I've run into a problem with running HDFS on my MacBook. I have an HDFS client that runs inside a Docker container, and every time I try to put data into HDFS or get data out of it from this container, I get the following error:

hdfs dfs -put /core-site.xml hdfs://host.docker.internal:9000/abcs
21/03/02 07:28:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
21/03/02 07:28:48 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
    at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1610)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1408)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
21/03/02 07:28:48 INFO hdfs.DFSClient: Abandoning BP-1485605719-127.0.0.1-1614607405999:blk_1073741832_1008
21/03/02 07:28:48 INFO hdfs.DFSClient: Excluding datanode 127.0.0.1:9866
21/03/02 07:28:48 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /abcs/core-site.xml._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 1 datanode(s) running and 1 node(s) are excluded in this operation.

As you can clearly see, my client (the container) is handed the wrong IP address for the DataNode (127.0.0.1:9866); it should be 192.168.65.2:9866, i.e. host.docker.internal, or my laptop's hostname (my-laptop).
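For reference, the address the DataNode actually registered under can be checked from the macOS host with the dfsadmin tool (a sketch; the grep filter is only for readability, and the exact report format varies by Hadoop version):

# Print per-DataNode details as reported by the NameNode
hdfs dfsadmin -report | grep -E 'Name:|Hostname:'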
My core-site.xml (my-laptop is, of course, bound to 127.0.0.1 in /etc/hosts):

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://my-laptop:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/Users/Ian_Rakhmatullin/localHadoopTmp</value>
    </property>
</configuration>
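For completeness, the /etc/hosts binding on the laptop mentioned above is just (my assumption of the exact line, based on the description):

127.0.0.1 my-laptop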

hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.client.use.datanode.hostname</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.datanode.use.datanode.hostname</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.datanode.address</name>
        <value>my-laptop:9866</value>
    </property>
    <property>
        <name>dfs.datanode.http.address</name>
        <value>my-laptop:9864</value>
    </property>
    <property>
        <name>dfs.datanode.ipc.address</name>
        <value>my-laptop:9867</value>
    </property>
</configuration>

Another thing that confuses me is that in the HDFS Web UI I can see the DataNode running on localhost:9866 (127.0.0.1:9866), whereas I would expect to see my-laptop:9866 there too.
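For reference, the same registration info can be read from the NameNode's JMX endpoint, whose LiveNodes attribute lists each DataNode's reported address (a sketch; 50070 is the default NameNode HTTP port on Hadoop 2.x, 9870 on 3.x):

# Query the NameNode's JMX servlet for live DataNode info
curl 'http://localhost:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo'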
Does anyone have an idea how to solve this? Thank you.


kgqe7b3p · 1#

It seems I've solved the problem with the following steps:
1. Add the dfs.datanode.hostname property to the server-side hdfs-site.xml:

<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
</property>
<property>
    <name>dfs.client.use.datanode.hostname</name>
    <value>true</value>
</property>
<property>
    <name>dfs.datanode.use.datanode.hostname</name>
    <value>true</value>
</property>
<property>
    <name>dfs.datanode.hostname</name>
    <value>my-laptop</value>
</property>

core-site.xml stays the same as in my question.
2. Add dfs.client.use.datanode.hostname to the HDFS client's hdfs-site.xml:

<property>
    <name>dfs.client.use.datanode.hostname</name>
    <value>true</value>
</property>

3. Map the DNS name (my-laptop) to the IP address of the Docker host (host.docker.internal, 192.168.65.2 in my case) in the container's /etc/hosts:

192.168.65.2 my-laptop
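If the container is started with docker run, the same mapping can be injected at startup instead of editing /etc/hosts by hand (a sketch; the image name is a placeholder):

# --add-host appends the entry to the container's /etc/hosts
docker run --add-host my-laptop:192.168.65.2 -it my-hdfs-client-image bash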

With this approach, the NameNode returns the DataNode's hostname to the HDFS client, and the client then reaches it through the /etc/hosts mapping to host.docker.internal. That's exactly what I needed.
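After these changes, the original command from the question should go through:

hdfs dfs -put /core-site.xml hdfs://host.docker.internal:9000/abcs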
