hadoop 3.2: No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster)

aoyhnmkz · posted 2021-05-29 · in Hadoop

I have a local Hadoop 3.2 installation: 1 master + 1 worker, both running on my laptop. It is an experimental setup for quick tests before submitting to the real cluster.
Everything looks healthy:

$ jps
22326 NodeManager
21641 DataNode
25530 Jps
22042 ResourceManager
21803 SecondaryNameNode
21517 NameNode

$ hdfs fsck /         
Connecting to namenode via http://master:9870/fsck?ugi=david&path=%2F
FSCK started by david (auth:SIMPLE) from /127.0.0.1 for path / at Wed Sep 04 13:54:59 CEST 2019

Status: HEALTHY
 Number of data-nodes:  1
 Number of racks:       1
 Total dirs:            1
 Total symlinks:        0

Replicated Blocks:
 Total size:    0 B
 Total files:   0
 Total blocks (validated):  0
 Minimally replicated blocks:   0
 Over-replicated blocks:    0
 Under-replicated blocks:   0
 Mis-replicated blocks:     0
 Default replication factor:    1
 Average block replication: 0.0
 Missing blocks:        0
 Corrupt blocks:        0
 Missing replicas:      0

Erasure Coded Block Groups:
 Total size:    0 B
 Total files:   0
 Total block groups (validated):    0
 Minimally erasure-coded block groups:  0
 Over-erasure-coded block groups:   0
 Under-erasure-coded block groups:  0
 Unsatisfactory placement block groups: 0
 Average block group size:  0.0
 Missing block groups:      0
 Corrupt block groups:      0
 Missing internal blocks:   0
FSCK ended at Wed Sep 04 13:54:59 CEST 2019 in 0 milliseconds

The filesystem under path '/' is HEALTHY

When I run the bundled pi example, I get the following error:

$ yarn jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar pi 16 1000
Number of Maps  = 16
Samples per Map = 1000
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Wrote input for Map #10
Wrote input for Map #11
Wrote input for Map #12
Wrote input for Map #13
Wrote input for Map #14
Wrote input for Map #15
Starting Job
2019-09-04 13:55:47,665 INFO client.RMProxy: Connecting to ResourceManager at master/0.0.0.0:8032
2019-09-04 13:55:47,887 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/david/.staging/job_1567598091808_0001
2019-09-04 13:55:48,020 INFO input.FileInputFormat: Total input files to process : 16
2019-09-04 13:55:48,450 INFO mapreduce.JobSubmitter: number of splits:16
2019-09-04 13:55:48,508 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
2019-09-04 13:55:49,000 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1567598091808_0001
2019-09-04 13:55:49,003 INFO mapreduce.JobSubmitter: Executing with tokens: []
2019-09-04 13:55:49,164 INFO conf.Configuration: resource-types.xml not found
2019-09-04 13:55:49,164 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2019-09-04 13:55:49,375 INFO impl.YarnClientImpl: Submitted application application_1567598091808_0001
2019-09-04 13:55:49,411 INFO mapreduce.Job: The url to track the job: http://cyclimse:8088/proxy/application_1567598091808_0001/
2019-09-04 13:55:49,412 INFO mapreduce.Job: Running job: job_1567598091808_0001
2019-09-04 13:55:55,477 INFO mapreduce.Job: Job job_1567598091808_0001 running in uber mode : false
2019-09-04 13:55:55,480 INFO mapreduce.Job:  map 0% reduce 0%
2019-09-04 13:55:55,509 INFO mapreduce.Job: Job job_1567598091808_0001 failed with state FAILED due to: Application application_1567598091808_0001 failed 2 times due to AM Container for appattempt_1567598091808_0001_000002 exited with  exitCode: 1
Failing this attempt.Diagnostics: [2019-09-04 13:55:54.458]Exception from container-launch.
Container id: container_1567598091808_0001_02_000001
Exit code: 1

[2019-09-04 13:55:54.464]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

[2019-09-04 13:55:54.465]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

For more detailed output, check the application tracking page: http://cyclimse:8088/cluster/app/application_1567598091808_0001 Then click on links to logs of each attempt.
. Failing the application.
2019-09-04 13:55:55,546 INFO mapreduce.Job: Counters: 0
Job job_1567598091808_0001 failed!

There seems to be a problem with the log4j configuration: No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster). Yet it is using the default configuration ($HADOOP_CONF_DIR/log4j.properties).
After the run, the HDFS state looks like this:
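For context on that warning, the sketch below shows what a minimal log4j 1.x configuration with a console appender looks like. This is an assumption about the shape of the stock Hadoop log4j.properties, not a confirmed diff against the asker's file; the point of the warning is that the MRAppMaster JVM found no such configuration on its classpath, so the root logger had no appender attached.

```
# Minimal log4j 1.x sketch: root logger at INFO, writing to stderr.
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
```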

$ hdfs fsck /        
Connecting to namenode via http://master:9870/fsck?ugi=david&path=%2F
FSCK started by david (auth:SIMPLE) from /127.0.0.1 for path / at Wed Sep 04 14:01:43 CEST 2019

/tmp/hadoop-yarn/staging/david/.staging/job_1567598091808_0001/job.jar:  Under replicated BP-24234081-0.0.0.0-1567598050928:blk_1073741841_1017. Target Replicas is 10 but found 1 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).

/tmp/hadoop-yarn/staging/david/.staging/job_1567598091808_0001/job.split:  Under replicated BP-24234081-0.0.0.0-1567598050928:blk_1073741842_1018. Target Replicas is 10 but found 1 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).

Status: HEALTHY
 Number of data-nodes:  1
 Number of racks:       1
 Total dirs:            11
 Total symlinks:        0

Replicated Blocks:
 Total size:    510411 B
 Total files:   20
 Total blocks (validated):  20 (avg. block size 25520 B)
 Minimally replicated blocks:   20 (100.0 %)
 Over-replicated blocks:    0 (0.0 %)
 Under-replicated blocks:   2 (10.0 %)
 Mis-replicated blocks:     0 (0.0 %)
 Default replication factor:    1
 Average block replication: 1.0
 Missing blocks:        0
 Corrupt blocks:        0
 Missing replicas:      18 (47.36842 %)

Erasure Coded Block Groups:
 Total size:    0 B
 Total files:   0
 Total block groups (validated):    0
 Minimally erasure-coded block groups:  0
 Over-erasure-coded block groups:   0
 Under-erasure-coded block groups:  0
 Unsatisfactory placement block groups: 0
 Average block group size:  0.0
 Missing block groups:      0
 Corrupt block groups:      0
 Missing internal blocks:   0
FSCK ended at Wed Sep 04 14:01:43 CEST 2019 in 5 milliseconds

The filesystem under path '/' is HEALTHY
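A side note on the "Target Replicas is 10" lines in the fsck output above: the replication factor for job submission files (job.jar, job.split) is set by the MapReduce client via mapreduce.client.submit.file.replication, whose default is 10, independent of the HDFS default replication factor of 1 shown earlier. On a single-DataNode setup this produces the under-replicated warnings seen here; a hedged mapred-site.xml fragment to cap it, assuming such an override is acceptable for a test cluster:

```
<!-- Sketch: limit replication of submitted job files to 1 on a
     single-DataNode setup (default is 10, hence the fsck warnings). -->
<property>
  <name>mapreduce.client.submit.file.replication</name>
  <value>1</value>
</property>
```

This is unrelated to the log4j failure itself; it only explains the under-replicated blocks.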

Since I could not find any solution online, here I am :).
