Starting a MapReduce job on the cluster fails with exit code -1000 and "job.jar does not exist"

px9o7tmv asked on 2021-05-29 in Hadoop

I am trying to launch a MapReduce job from Java code and submit it to YARN, but it fails with the following error:

2018-08-26 00:46:26,075 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-08-26 00:46:27,526 INFO  [main] client.RMProxy (RMProxy.java:createRMProxy(92)) - Connecting to ResourceManager at hdcluster01/10.211.55.22:8032
2018-08-26 00:46:28,135 WARN  [main] mapreduce.JobSubmitter (JobSubmitter.java:copyAndConfigureFiles(150)) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2018-08-26 00:46:28,217 INFO  [main] input.FileInputFormat (FileInputFormat.java:listStatus(280)) - Total input paths to process : 1
2018-08-26 00:46:28,254 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(396)) - number of splits:1
2018-08-26 00:46:28,364 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:printTokens(479)) - Submitting tokens for job: job_1535213323614_0008
2018-08-26 00:46:28,484 INFO  [main] impl.YarnClientImpl (YarnClientImpl.java:submitApplication(204)) - Submitted application application_1535213323614_0008
2018-08-26 00:46:28,506 INFO  [main] mapreduce.Job (Job.java:submit(1289)) - The url to track the job: http://hdcluster01:8088/proxy/application_1535213323614_0008/
2018-08-26 00:46:28,506 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1334)) - Running job: job_1535213323614_0008
2018-08-26 00:46:32,536 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1355)) - Job job_1535213323614_0008 running in uber mode : false
2018-08-26 00:46:32,537 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1362)) -  map 0% reduce 0%
2018-08-26 00:46:32,547 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1375)) - Job job_1535213323614_0008 failed with state FAILED due to: Application application_1535213323614_0008 failed 2 times due to AM Container for appattempt_1535213323614_0008_000002 exited with  exitCode: -1000 due to: File file:/tmp/hadoop-yarn/staging/nasuf/.staging/job_1535213323614_0008/job.jar does not exist
.Failing this attempt.. Failing the application.
2018-08-26 00:46:32,570 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1380)) - Counters: 0

The error is:

Job job_1535213323614_0008 failed with state FAILED due to: Application application_1535213323614_0008 failed 2 times due to AM Container for appattempt_1535213323614_0008_000002 exited with  exitCode: -1000 due to: File file:/tmp/hadoop-yarn/staging/nasuf/.staging/job_1535213323614_0008/job.jar does not exist
.Failing this attempt.. Failing the application.

I don't understand why I'm getting this error. I can run the jar successfully from the command line, but it fails when the job is submitted from Java code. I checked the path, and /tmp/hadoop/ doesn't even exist. Also, the local user is nasuf while the user running Hadoop is parallels, so they are not the same. The local OS is macOS, and Hadoop runs on CentOS 7.
The mapper code is as follows:

import java.io.IOException;

import org.apache.commons.lang.StringUtils;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WCMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {

        // Split each input line into words and emit (word, 1) pairs.
        String line = value.toString();
        String[] words = StringUtils.split(line, " ");

        for (String word : words) {
            context.write(new Text(word), new LongWritable(1));
        }
    }
}

The reducer code is as follows:

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WCReducer extends Reducer<Text, LongWritable, Text, LongWritable> {

    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context)
            throws IOException, InterruptedException {

        // Sum all the counts emitted by the mapper for this word.
        long count = 0;
        for (LongWritable value : values) {
            count += value.get();
        }

        context.write(key, new LongWritable(count));
    }
}

The runner code is as follows:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WCRunner {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        // Submit the local jar and run on YARN instead of the local job runner.
        conf.set("mapreduce.job.jar", "wc.jar");
        conf.set("mapreduce.framework.name", "yarn");
        conf.set("yarn.resourcemanager.hostname", "hdcluster01");
        conf.set("yarn.nodemanager.aux-services", "mapreduce_shuffle");
        Job job = Job.getInstance(conf);

        job.setJarByClass(WCRunner.class);

        job.setMapperClass(WCMapper.class);
        job.setReducerClass(WCReducer.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);

        FileInputFormat.setInputPaths(job, new Path("hdfs://hdcluster01:9000/wc/srcdata"));
        FileOutputFormat.setOutputPath(job, new Path("hdfs://hdcluster01:9000/wc/output3"));

        job.waitForCompletion(true);
    }
}

Can anyone help? Thanks a lot!

velaa5lx (answer 1):

I have solved this problem. Just put core-site.xml on the classpath, or add the following configuration in the code:

conf.set("hadoop.tmp.dir", "/home/parallels/app/hadoop-2.4.1/data/");
