Hadoop 2.9.0 incompatible clusterID

7vhp5slm · posted 2021-05-29 in Hadoop

I am trying to run Hadoop 2.9.0 in pseudo-distributed mode, following this tutorial. The datanode does not start, and the log file shows an error that incompatible clusterIDs were created for the name node and data node. I have found several answers to this question, and all of them suggest changing the clusterID in the VERSION files to the one shown in the log file (I am pasting the relevant portion of the log file below). However, when I read the VERSION files, the clusterID is identical in all three of them, and is entirely different from what the error log shows. Any idea what is going on? Please comment if you need any other information.
Here is the content of the error log file:

2018-02-17 21:58:00,465 INFO org.apache.hadoop.hdfs.server.common.Storage: Using 1 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=1, dataDirs=1)
2018-02-17 21:58:00,499 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /tmp/hadoop-uname/dfs/data/in_use.lock acquired by nodename 24965@mname.host.edu
2018-02-17 21:58:00,503 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage directory [DISK]file:/tmp/hadoop-uname/dfs/data/
java.io.IOException: Incompatible clusterIDs in /tmp/hadoop-uname/dfs/data: namenode clusterID = CID-08ce647c-0922-4da6-accb-15620161d0b0; datanode clusterID = CID-130d222c-d2cf-4509-bde4-e58637bf9b0c
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:760)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:293)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:409)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:388)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:556)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1649)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1610)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:374)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
        at java.lang.Thread.run(Thread.java:748)
2018-02-17 21:58:00,510 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid c8b193d5-50e2-4983-8831-a5ce4820e58f) service to localhost/127.0.0.1:9000. Exiting. 
java.io.IOException: All specified directories have failed to load.
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:557)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1649)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1610)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:374)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
        at java.lang.Thread.run(Thread.java:748)
2018-02-17 21:58:00,510 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid c8b193d5-50e2-4983-8831-a5ce4820e58f) service to localhost/127.0.0.1:9000

Here is the content of /tmp/hadoop-uname/dfs/data/current/VERSION:


# Sun Feb 18 18:07:28 2018

storageID=DS-c1c0868c-4ce0-45bf-8e76-4223d46587b1
clusterID=CID-0b9a5a3e-dbf0-4a36-aab1-f72e4a1e6993
cTime=0
datanodeUuid=ad38f7c2-f4ac-466f-8330-f06488df73f8
storageType=DATA_NODE
layoutVersion=-57

Here is the content of /tmp/hadoop-uname/dfs/name/current/VERSION:


# Sun Feb 18 18:04:19 2018

namespaceID=1735122419
clusterID=CID-0b9a5a3e-dbf0-4a36-aab1-f72e4a1e6993
cTime=1518995059388
storageType=NAME_NODE
blockpoolID=BP-1715794989-127.0.1.1-1518995059388
layoutVersion=-63

Here is the content of /tmp/hadoop-uname/dfs/namesecondary/current/VERSION:


# Sun Feb 18 17:46:57 2018

namespaceID=1735122419
clusterID=CID-0b9a5a3e-dbf0-4a36-aab1-f72e4a1e6993
cTime=1518992971549
storageType=NAME_NODE
blockpoolID=BP-2107146081-127.0.1.1-1518992971549
layoutVersion=-63
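For reference, the clusterIDs in the VERSION files above can be compared directly from the shell. This is just a small sketch; `cluster_id` is a helper name of my own, not a Hadoop tool:

```shell
#!/bin/sh
# Print the clusterID recorded in an HDFS VERSION file.
# `cluster_id` is a hypothetical helper, not part of Hadoop.
cluster_id() {
    grep '^clusterID=' "$1" | cut -d= -f2
}

# For the directories in this question, one would compare, e.g.:
#   cluster_id /tmp/hadoop-uname/dfs/name/current/VERSION
#   cluster_id /tmp/hadoop-uname/dfs/data/current/VERSION
# The datanode registers successfully only when the two values match.
```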

etc/hdfs-site.xml looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
</configuration>
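One thing worth noting: with no storage directories configured in hdfs-site.xml, HDFS keeps everything under `hadoop.tmp.dir`, which defaults to `/tmp/hadoop-${user.name}`. Data under /tmp can be cleaned between sessions, and a later `hdfs namenode -format` then mints a new clusterID while stale datanode state lingers, which is a common source of exactly this mismatch. A hedged sketch of pinning the directories to a persistent location (the `/home/uname` paths are placeholders, not from the question):

```xml
<!-- Illustrative only: move HDFS storage out of /tmp so cleanup or a
     re-format does not leave the datanode with a stale clusterID.
     Adjust the placeholder paths to your environment. -->
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///home/uname/hdfs/name</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///home/uname/hdfs/data</value>
</property>
```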
