TestDFSIO benchmark - all datanodes are bad

vql8enpb posted on 2021-06-02 in Hadoop

I am trying to run the TestDFSIO write test to benchmark my Hadoop YARN cluster.
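The benchmark is launched roughly like this (a sketch of the invocation; the exact jar file name and path depend on the installed Hadoop version, and -fileSize is changed per run):

# Sketch of the TestDFSIO write invocation; adjust the jar path to your installation.
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-*-tests.jar \
    TestDFSIO -write -nrFiles 10 -fileSize 1MB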
The results for a 1 MB file size are as follows:

17/02/13 16:29:56 INFO fs.TestDFSIO: ----- TestDFSIO ----- : write
17/02/13 16:29:56 INFO fs.TestDFSIO:            Date & time: Mon Feb 13 16:29:56 IST 2017
17/02/13 16:29:56 INFO fs.TestDFSIO:        Number of files: 10
17/02/13 16:29:56 INFO fs.TestDFSIO: Total MBytes processed: 10.0
17/02/13 16:29:56 INFO fs.TestDFSIO:      Throughput mb/sec: 0.08520645524104906
17/02/13 16:29:56 INFO fs.TestDFSIO: Average IO rate mb/sec: 0.11449315398931503
17/02/13 16:29:56 INFO fs.TestDFSIO:  IO rate std deviation: 0.051399135678609
17/02/13 16:29:56 INFO fs.TestDFSIO:     Test exec time sec: 59.722

For a 10 MB file size, the results are:

17/02/13 17:24:23 INFO fs.TestDFSIO: ----- TestDFSIO ----- : write
17/02/13 17:24:23 INFO fs.TestDFSIO:            Date & time: Mon Feb 13 17:24:23 IST 2017
17/02/13 17:24:23 INFO fs.TestDFSIO:        Number of files: 10
17/02/13 17:24:23 INFO fs.TestDFSIO: Total MBytes processed: 100.0
17/02/13 17:24:23 INFO fs.TestDFSIO:      Throughput mb/sec: 0.13296139598828877
17/02/13 17:24:23 INFO fs.TestDFSIO: Average IO rate mb/sec: 0.1448635458946228
17/02/13 17:24:23 INFO fs.TestDFSIO:  IO rate std deviation: 0.044048737962458215
17/02/13 17:24:23 INFO fs.TestDFSIO:     Test exec time sec: 135.863

However, when I try to increase the file size to 100 MB, I keep getting the following error:

....
17/02/13 17:28:33 INFO mapreduce.Job:  map 67% reduce 0%
17/02/13 17:28:43 INFO mapreduce.Job:  map 73% reduce 0%
17/02/13 17:29:45 INFO mapreduce.Job:  map 67% reduce 0%
17/02/13 17:29:46 INFO mapreduce.Job: Task Id : attempt_1486983847790_0003_m_000003_0, Status : FAILED
Error: java.io.IOException: All datanodes DatanodeInfoWithStorage[172.16.33.70:50010,DS-f6a1b4fe-66f7-4b6d-9164-f15f371471e0,DISK] are bad. Aborting...
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1109)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:871)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:401)

Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
....

The MR job gets stuck. Is my 3-node cluster (CentOS 7, 4 cores and 32 GB of RAM each) really only able to write 10 MB files?
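For reference, these are the kinds of checks I can run on the nodes to look at datanode health (a sketch; the log location assumes a default layout under $HADOOP_HOME):

# List live/dead datanodes and their remaining capacity.
hdfs dfsadmin -report

# Check free disk space on each node; the write pipeline aborts if a datanode runs out of space or hits I/O errors.
df -h

# Inspect the datanode log on each machine; the exact file name includes the user and hostname.
tail -n 100 $HADOOP_HOME/logs/hadoop-*-datanode-*.log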
Best regards
