How to view corrupted files in Hadoop?

h9a6wy2h · posted 2021-05-30 · in Hadoop

I have some corrupted files on my Hadoop machine, and I want to copy them to another computer to see what is inside them.
I tried hadoop fsck -copyToLocal /dir1/ /dir2/ , which produces no output at all. When I run hadoop fs -copyToLocal /dir1/ /dir2/ or hadoop dfs -copyToLocal /dir1/ /dir2/ , it prints the following:

/opt/atsd/hadoop/bin/hadoop fs -copyToLocal /hbase/ /home/axibase/Documents/new/

15/04/15 07:12:32 WARN hdfs.DFSClient: Failed to connect to /192.168.1.211:50010, add to deadNodes and continuejava.io.IOException: Got error for OP_READ_BLOCK, self=/192.168.1.211:59175, remote=/192.168.1.211:50010, for file /hbase/atsd_d/06ea5db6b3cda82baa0d8af17cc36fed/r/24a1aa779e2c422db84369bfe2236003.edf12e41aa2aaa845eb092d661fb836e, for block 1601256999140141614_1170
15/04/15 07:12:32 INFO hdfs.DFSClient: Could not obtain block blk_1601256999140141614_1170 from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry...
15/04/15 07:12:35 WARN hdfs.DFSClient: Failed to connect to /192.168.1.211:50010, add to deadNodes and continuejava.io.IOException: Got error for OP_READ_BLOCK, self=/192.168.1.211:59176, remote=/192.168.1.211:50010, for file /hbase/atsd_d/06ea5db6b3cda82baa0d8af17cc36fed/r/24a1aa779e2c422db84369bfe2236003.edf12e41aa2aaa845eb092d661fb836e, for block 1601256999140141614_1170
15/04/15 07:12:35 INFO hdfs.DFSClient: Could not obtain block blk_1601256999140141614_1170 from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry...
15/04/15 07:12:38 WARN hdfs.DFSClient: Failed to connect to /192.168.1.211:50010, add to deadNodes and continuejava.io.IOException: Got error for OP_READ_BLOCK, self=/192.168.1.211:59177, remote=/192.168.1.211:50010, for file /hbase/atsd_d/06ea5db6b3cda82baa0d8af17cc36fed/r/24a1aa779e2c422db84369bfe2236003.edf12e41aa2aaa845eb092d661fb836e, for block 1601256999140141614_1170
15/04/15 07:12:38 INFO hdfs.DFSClient: Could not obtain block blk_1601256999140141614_1170 from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry...
15/04/15 07:12:41 WARN hdfs.DFSClient: Failed to connect to /192.168.1.211:50010, add to deadNodes and continuejava.io.IOException: Got error for OP_READ_BLOCK, self=/192.168.1.211:59178, remote=/192.168.1.211:50010, for file /hbase/atsd_d/06ea5db6b3cda82baa0d8af17cc36fed/r/24a1aa779e2c422db84369bfe2236003.edf12e41aa2aaa845eb092d661fb836e, for block 1601256999140141614_1170
15/04/15 07:12:41 WARN hdfs.DFSClient: DFS Read: java.io.IOException: Could not obtain block: blk_1601256999140141614_1170 file=/hbase/atsd_d/06ea5db6b3cda82baa0d8af17cc36fed/r/24a1aa779e2c422db84369bfe2236003.edf12e41aa2aaa845eb092d661fb836e
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:2269)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:2063)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2224)
        at java.io.DataInputStream.read(DataInputStream.java:100)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:68)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:87)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:341)
        at org.apache.hadoop.fs.FsShell.copyToLocal(FsShell.java:248)
        at org.apache.hadoop.fs.FsShell.copyToLocal(FsShell.java:272)
        at org.apache.hadoop.fs.FsShell.copyToLocal(FsShell.java:272)
        at org.apache.hadoop.fs.FsShell.copyToLocal(FsShell.java:272)
        at org.apache.hadoop.fs.FsShell.copyToLocal(FsShell.java:272)
        at org.apache.hadoop.fs.FsShell.copyToLocal(FsShell.java:199)
        at org.apache.hadoop.fs.FsShell.run(FsShell.java:1769)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
        at org.apache.hadoop.fs.FsShell.main(FsShell.java:1895)
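The repeated "No live nodes contain current block" messages above mean that no live DataNode holds block blk_1601256999140141614_1170, so copyToLocal cannot read that file. As a minimal sketch (not from the original post), assuming the same hadoop binary and the file path reported in the log, fsck's -files -blocks -locations options show which blocks of that file are missing and where their replicas were expected:

# check per-block status of the affected file (path taken from the log above)
/opt/atsd/hadoop/bin/hadoop fsck /hbase/atsd_d/06ea5db6b3cda82baa0d8af17cc36fed/r/24a1aa779e2c422db84369bfe2236003.edf12e41aa2aaa845eb092d661fb836e -files -blocks -locations

If all replicas of a block are reported missing, hadoop fs -copyToLocal will keep failing exactly as in the trace above, because the block data no longer exists on any live DataNode.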

I also tried hadoop fsck -move . It removes the corrupted files, but I cannot find where they went.
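A hedged note on that last step, rather than a confirmed answer: according to the Hadoop fsck documentation, -move does not delete corrupted files; it moves what it can salvage into the /lost+found directory inside HDFS, not onto the local filesystem. A minimal sketch for looking there, assuming the same hadoop binary as above:

# list what fsck -move relocated into HDFS /lost+found
/opt/atsd/hadoop/bin/hadoop fs -ls /lost+found

If the goal is to inspect whatever data is still readable, copying from /lost+found with hadoop fs -copyToLocal may succeed where copying the original path did not.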

No answers yet.
