Cannot initialize MapOutputCollector org.apache.hadoop.mapred.MapTask$MapOutputBuffer java.lang.ClassCastException: class java.lang.Double

wb1gzix0 · posted 2021-05-27 in Hadoop

There is a problem with my code. This is the error:
Cannot initialize MapOutputCollector org.apache.hadoop.mapred.MapTask$MapOutputBuffer java.lang.ClassCastException: class java.lang.Double
I don't know where it comes from. This is the class where I set up the whole job:

conf.set("stripped", stripped);

        /* Creating the job object for the Hadoop processing */  
        @SuppressWarnings("deprecation")
        Job job = new Job(conf, "calculate error map reduce"); 

        /* Creating Filesystem object with the configuration */  
        FileSystem fs = FileSystem.get(conf);  

        /* Check if output path (args[1])exist or not */  
        if (fs.exists(new Path(output))) {  
            /* If exist delete the output path */  
            fs.delete(new Path(output), true);  
        }
        // Setting Driver class  
        job.setJarByClass(StrippedPartition.class);  

        // Setting the Mapper class  
        job.setMapperClass(MapperCalculateError.class);  

        // Setting the Reducer class  
        job.setReducerClass(ReduceCalculateError.class);  

        // Setting the Output Key class for the mapper
        job.setOutputKeyClass(Double.class);  
        // Setting the Output Value class for the mapper
        job.setOutputValueClass(DoubleWritable.class);

This is my mapper class:

public static class MapperCalculateError extends Mapper<Object, Text, Double, DoubleWritable>{

        private final static DoubleWritable error1 = new DoubleWritable(1.0);
        private double error,max;
        private ObjectBigArrayBigList<LongBigArrayBigList> Contain = new ObjectBigArrayBigList<LongBigArrayBigList>();
        private ObjectBigArrayBigList<LongBigArrayBigList> Stripped = new ObjectBigArrayBigList<LongBigArrayBigList>();

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {

            Configuration conf = context.getConfiguration();
            String stripped = conf.get("stripped");
            Stripped = new Gson().fromJson(stripped.toString(), ObjectBigArrayBigList.class);

            StringTokenizer itr = new StringTokenizer(value.toString());
            Contain = new Gson().fromJson(value.toString(), ObjectBigArrayBigList.class);

            // other logic in the map function, omitted here because it is not important

            context.write(max, error1);
        }
}

This is my reducer class:

public static class ReduceCalculateError extends Reducer<Double, DoubleWritable, Double, Double>{

        private double massimo=0;
        private double errore=0;

        //public ReduceCalculateError() {} 

        public void reduce(double max, Iterable<DoubleWritable> error, Context context)  throws IOException, InterruptedException {
            Configuration conf = context.getConfiguration();
            double sum=0;

            // other logic omitted

            context.write(this.massimo, sum);
        }
}

I don't know where the mistake is; map and reduce never actually run, because the job only ever shows map: 0% reduce: 0%.

xesrikrc #1

Everywhere you have Double you need to use DoubleWritable instead. This is because Hadoop does not know how to serialize Double, but it does know how to serialize DoubleWritable.
Any time you call context.write(...) you need to make sure both arguments are Writable types. For example, your map output is context.write(max, error1); but max is a Double when it should be a DoubleWritable.
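Below is a minimal sketch of how the affected pieces could look with DoubleWritable used consistently for keys and values. The class and method names come from the question; the imports, the summing loop in the reducer, and the wrapping of max are assumptions that only stand in for the logic the question omits.

import java.io.IOException;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Mapper: the output key type is DoubleWritable, not Double
public static class MapperCalculateError extends Mapper<Object, Text, DoubleWritable, DoubleWritable> {

    private final static DoubleWritable error1 = new DoubleWritable(1.0);
    private double max;

    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        // ... compute max from the input value (omitted) ...
        context.write(new DoubleWritable(max), error1);   // key wrapped in a DoubleWritable
    }
}

// Reducer: all four generic parameters are Writable types
public static class ReduceCalculateError extends Reducer<DoubleWritable, DoubleWritable, DoubleWritable, DoubleWritable> {

    public void reduce(DoubleWritable max, Iterable<DoubleWritable> error, Context context) throws IOException, InterruptedException {
        double sum = 0;
        for (DoubleWritable e : error) {   // placeholder aggregation for the omitted logic
            sum += e.get();
        }
        context.write(max, new DoubleWritable(sum));
    }
}

// Job setup: declare Writable classes for both the map output and the final output
job.setMapOutputKeyClass(DoubleWritable.class);
job.setMapOutputValueClass(DoubleWritable.class);
job.setOutputKeyClass(DoubleWritable.class);
job.setOutputValueClass(DoubleWritable.class);

setMapOutputKeyClass/setMapOutputValueClass describe the map output, while setOutputKeyClass/setOutputValueClass describe the reducer output; declaring all four explicitly keeps the MapOutputBuffer from receiving a key class it cannot serialize.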
