HDFS write error: Not enough replicas were chosen. Reason: {NO_REQUIRED_STORAGE_TYPE=1}

axr492tv · asked 10 months ago · in HDFS

I am running into a problem similar to this one:
https://stackoverflow.com/questions/52809233/failed-to-place-enough-replicas-expected-size-is-1-but-only-0-storage-types-can
With the storage policy set to ALL_SSD, my NameNode logs the following in $HADOOP_HOME/logs/:

2023-05-21 09:17:31,380 DEBUG org.apache.hadoop.net.NetworkTopology: Choosing random from 4 available nodes on node /default-rack, scope=/default-rack, excludedScope=null, excludeNodes=[192.168.132.41:9866]. numOfDatanodes=5.
2023-05-21 09:17:31,380 DEBUG org.apache.hadoop.net.NetworkTopology: nthValidToReturn is 0
2023-05-21 09:17:31,380 DEBUG org.apache.hadoop.net.NetworkTopology: Chosen node 192.168.132.44:9866 from first random
2023-05-21 09:17:31,380 DEBUG org.apache.hadoop.net.NetworkTopology: chooseRandom returning 192.168.132.44:9866
2023-05-21 09:17:31,380 DEBUG org.apache.hadoop.net.NetworkTopology: Failed to find datanode (scope="" excludedScope="/default-rack"). numOfDatanodes=0
2023-05-21 09:17:31,380 DEBUG org.apache.hadoop.net.NetworkTopology: No node to choose.
2023-05-21 09:17:31,380 DEBUG org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: [
  Datanode None is not chosen since required storage types are unavailable  for storage type DISK.
2023-05-21 09:17:31,380 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Not enough replicas was chosen. Reason: {NO_REQUIRED_STORAGE_TYPE=1}
2023-05-21 09:17:31,380 DEBUG org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to choose remote rack (location = ~/default-rack), fallback to local rack
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy$NotEnoughReplicasException: 
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:914)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRemoteRack(BlockPlacementPolicyDefault.java:774)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:566)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:478)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:524)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:350)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:170)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:195)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2307)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2960)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:904)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:593)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:604)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:572)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:556)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1043)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:971)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2976)
2023-05-21 09:17:31,380 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1113051428_39314554, replicas=192.168.132.43:9866, 192.168.132.41:9866, 192.168.132.44:9866 for /Hakim/archive_players/partitions=1/date=2023-05-20/part-00003-dfce3f68-b1f5-45eb-8f91-43da0ae42139.c000.snappy.parquet

This error is logged repeatedly for every write operation until the NameNode crashes.
I have tried everything I can think of, but the problem persists. I have tried all of the storage policies: ONE_SSD, ALL_SSD, HOT, and COLD.
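
For reference, a minimal sketch of how a storage policy is applied and verified with the hdfs storagepolicies tool (the path is the one from the allocation line in the log above; adjust to your own layout):

# list the policies the cluster supports
hdfs storagepolicies -listPolicies

# apply ALL_SSD to a directory
hdfs storagepolicies -setStoragePolicy -path /Hakim/archive_players -policy ALL_SSD

# confirm which policy is in effect on that path
hdfs storagepolicies -getStoragePolicy -path /Hakim/archive_players

# revert to the default policy if needed
hdfs storagepolicies -unsetStoragePolicy -path /Hakim/archive_players
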
When I run lsblk, this is what I get on all nodes (the NameNode and every DataNode): (screenshot omitted)
The /hdfs partition is where the DataNode and NameNode data directories live on every node in the cluster.
The lvm volume shown in the screenshot is built on physical SSDs.
The output of lsblk -d -n -o name,rota is: (screenshot omitted)
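
For context, in that lsblk output ROTA=0 marks a non-rotational device (SSD) and ROTA=1 a spinning disk; an illustrative example with hypothetical device names (the -n flag suppresses the header row):

$ lsblk -d -n -o name,rota
sda       0
nvme0n1   0
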
My hdfs-site.xml configuration:

<configuration>
        <property>
                <name>dfs.replication.min</name>
                <value>1</value>
        </property>
        <property>
                <name>dfs.replication.max</name>
                <value>3</value>
        </property>

        <property>
                <name>dfs.name.dir</name>
                <value>file:///hdfs/hadoop_data/hdfs/nameNode</value>
        </property>

        <property>
                <name>dfs.data.dir</name>
                <value>file:///hdfs/hadoop_data/hdfs/dataNode</value>
        </property>
        <property>
                <name>dfs.permissions</name>
                <value>false</value>
        </property>
        <property>
                <name>dfs.namenode.acls.enabled</name>
                <value>false</value>
        </property>
        <property>
                <name>dfs.webhdfs.enabled</name>
                <value>true</value>
        </property>
        <property>
                <name>dfs.storage.policy.enabled</name>
                <value>true</value>
        </property>
        <property>
                <name>hadoop.security.hdfs.umask-mode</name>
                <value>000</value>
        </property>
        <property>
                <name>dfs.blocksize</name>
                <value>536870912</value>
        </property>

</configuration>
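
For reference, HDFS derives a volume's storage type from an optional tag on the DataNode data directory (dfs.datanode.data.dir, the current name of the deprecated dfs.data.dir used above); directories without a tag are treated as DISK regardless of the underlying hardware, which an ALL_SSD policy cannot satisfy. A minimal sketch of a tagged directory, reusing the path from the config above (whether this matches the intended layout is an assumption):

        <property>
                <name>dfs.datanode.data.dir</name>
                <value>[SSD]file:///hdfs/hadoop_data/hdfs/dataNode</value>
        </property>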


My core-site.xml configuration:

<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://192.168.132.37:9000</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/hdfs/hadoop_data/hdfs/tempDir</value>
        </property>
        <property>
                <name>dfs.permissions</name>
                <value>false</value>
        </property>
        <property>
                <name>hadoop.proxyuser.simra.groups</name>
                <value>*</value>
        </property>
        <property>
                <name>hadoop.proxyuser.simra.hosts</name>
                <value>*</value>
        </property>
        <property>
                <name>hadoop.proxyuser.server.hosts</name>
                <value>*</value>
        </property>
        <property>
                <name>hadoop.proxyuser.server.groups</name>
                <value>*</value>
        </property>
        <property>
                <name>hadoop.user.group.static.mapping.overrides</name>
                <value>dr.who=dr.who,user1,user2,user3,user4,user5,user6;</value>
        </property>
</configuration>


Checking the DataNode ports: (screenshot omitted)

My NameNode web UI overview: (screenshot omitted)
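
The same checks can be made from the command line; a minimal sketch (9866 is the DataNode data-transfer port seen in the log above, and ss may need root to show process names):

# confirm each DataNode is listening on its data-transfer port
ss -tlnp | grep 9866

# list live DataNodes with their configured capacity, DFS used and remaining space
hdfs dfsadmin -report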

Update: one more detail. When I write a parquet file to HDFS with PySpark, the DataFrame is split into n part files according to PySpark's partitioning rules (by default, n equals the number of Spark worker cores); some of the files are written successfully, while others fail with this error.

I have spent more than a month on this with no result, and I am at a loss as to what to do next. Can anyone help?

cedebl8k 1#

I ran into a similar problem. It may not be exactly the same issue as yours, but it might help others who hit this error.
In my case, HDFS was configured with two storage directories, and storage policies restricted certain HDFS paths to a specific storage directory.

# in hdfs-site.xml:
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///data-0,[SSD]file:///data-1</value>
</property>

HDFS writes failed because the disk holding the first data directory was full (and, because of our hdfs storagepolicies, only that directory was allowed for the path being written). Note that the other data directory was not full, so the HDFS UI showed disk capacity below 100%, which was very confusing.
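
A quick way to spot this situation is to check each data directory's file system individually rather than the aggregate numbers in the UI; a sketch, using the hypothetical paths from the snippet above:

# check each DataNode data directory's volume separately;
# the aggregate capacity shown in the NameNode UI can hide one full volume
df -h /data-0 /data-1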
