Hadoop MapReduce code fails with state FAILED due to: NA

1cklez4t · asked 5 months ago · Hadoop

I am trying to run the Hadoop MapReduce program below.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MovieAnalysis {

public static class MovieFilterMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private Text movieId = new Text();
    private IntWritable one = new IntWritable(1);

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String[] columns = value.toString().split(",");

        if (columns.length >= 8) {
            double popularity = Double.parseDouble(columns[5]);
            double voteAverage = Double.parseDouble(columns[6]);
            double voteCount = Double.parseDouble(columns[7]);

            if (popularity > 500.0 && voteAverage > 8.0 && voteCount > 10000.0) {
                movieId.set(columns[1]); // Assuming 'id' column contains movie IDs
                context.write(movieId, one);
            }
        }
    }
}

public static class MovieCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        context.write(key, new IntWritable(sum));
    }
}

public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "Movie Analysis");
    job.setJarByClass(MovieAnalysis.class);
    job.setMapperClass(MovieFilterMapper.class);
    job.setReducerClass(MovieCountReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
}

}
But when I run the code, I get an error like the one below:


Job failed with state FAILED due to: NA

dwthyt8l 1#

Your code calls parseDouble on the string "popularity".
If you are parsing a CSV file, MapReduce does not automatically skip the column header, so the header row reaches your mapper and Double.parseDouble("popularity") throws a NumberFormatException, which fails the task. Either skip the header row (or catch the exception) in the mapper, or use a tool like Hive or Spark SQL instead of MapReduce: Hive can skip headers via the skip.header.line.count table property, and Spark SQL via the header option on its CSV reader.
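Here is a minimal sketch of the first option, keeping the rest of the question's MovieAnalysis class unchanged. The only real change is the try/catch: rows whose numeric columns do not parse (the header line included) are skipped instead of throwing the NumberFormatException that kills the task.

public static class MovieFilterMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private final Text movieId = new Text();
    private final IntWritable one = new IntWritable(1);

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] columns = value.toString().split(",");
        if (columns.length >= 8) {
            try {
                double popularity = Double.parseDouble(columns[5]);
                double voteAverage = Double.parseDouble(columns[6]);
                double voteCount = Double.parseDouble(columns[7]);
                if (popularity > 500.0 && voteAverage > 8.0 && voteCount > 10000.0) {
                    movieId.set(columns[1]);
                    context.write(movieId, one);
                }
            } catch (NumberFormatException e) {
                // Non-numeric values (e.g. the header's "popularity") land
                // here and the record is skipped instead of failing the task.
            }
        }
    }
}

And, for the second option, a sketch of the same filter as a Spark SQL job in Java. The column names id, popularity, vote_average and vote_count are assumptions read off the question's column indexes; check them against your CSV header.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class MovieAnalysisSpark {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("Movie Analysis").getOrCreate();
        // header=true consumes the first line as column names, which is
        // exactly the step plain MapReduce does not do for you.
        Dataset<Row> movies = spark.read()
                .option("header", "true")
                .option("inferSchema", "true")
                .csv(args[0]);
        // Column names are assumed from the question's column indexes.
        movies.filter("popularity > 500 AND vote_average > 8 AND vote_count > 10000")
              .groupBy("id")
              .count()
              .show();
        spark.stop();
    }
}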
