Why can't org.apache.hadoop.io.Writable be cast to org.apache.hadoop.io.IntWritable?

cdmah0mi · asked 2021-06-01 · in Hadoop

My MapReduce application is shown below. I want to sum 3 values from each line of input:

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class StockCount {

public static class MapperClass
        extends Mapper<Object, Text, Text, IntArrayWritable> {

    public void map(Object key, Text value, Context context
    ) throws IOException, InterruptedException {
        String line[] = value.toString().split(",");

        //mgrno,rdate,cusip,shares,sole,shared,no
        //  [0],  [1],  [2],   [3], [4],   [5],[6]

        if (line.length > 6) {

                Text mgrno = new Text(line[0]);
                IntWritable[] intArray = new IntWritable[3];
                intArray[0] = new IntWritable(Integer.parseInt(line[4]));
                intArray[1] = new IntWritable(Integer.parseInt(line[5]));
                intArray[2] = new IntWritable(Integer.parseInt(line[6]));

                // note: this scratch array is never used below
                int[] pass = new int[3];
                pass[0] = Integer.parseInt(line[4]);
                pass[1] = Integer.parseInt(line[5]);
                pass[2] = Integer.parseInt(line[6]);

                IntArrayWritable array = new IntArrayWritable(intArray);

                context.write(mgrno, array);
            }
    }
}

public static class IntSumReducer
        extends Reducer<Text, int[], Text, IntArrayWritable> {

    public void reduce(Text key, Iterable<IntArrayWritable> values,
                       Context context
    ) throws IOException, InterruptedException {
        int sum1 = 0;
        int sum2 = 0;
        int sum3 = 0;
        for (IntArrayWritable val : values) {

            IntWritable[] temp = val.get();
            sum1 += temp[0].get();
            sum2 += temp[1].get();
            sum3 += temp[2].get();

        }
        IntWritable[] intArray = new IntWritable[3];
        intArray[0] = new IntWritable(sum1);
        intArray[1] = new IntWritable(sum2);
        intArray[2] = new IntWritable(sum3);
        IntArrayWritable result = new IntArrayWritable(intArray);

        context.write(key, result);
    }
}
}

Since I want to sum 3 values, I defined an IntArrayWritable class that extends ArrayWritable. ArrayWritable stores its contents as a Writable[]:

import org.apache.hadoop.io.ArrayWritable;
import org.apache.hadoop.io.IntWritable;

public class IntArrayWritable extends ArrayWritable {

    public IntArrayWritable(IntWritable[] values) {
        super(IntWritable.class, values);
    }
    public IntArrayWritable() {
        super(IntWritable.class);
    }

    @Override
    public IntWritable[] get() {
        return (IntWritable[]) super.get();
    }

    @Override
    public String toString() {
        IntWritable[] values = get();
        return values[0].toString() + ", " + values[1].toString() + ", " + values[2].toString();
    }
}

I really don't understand why the cast in "return (IntWritable[]) super.get();" fails:

17/11/21 04:00:26 WARN mapred.LocalJobRunner: job_local1623924180_0001
java.lang.Exception: java.lang.ClassCastException: [Lorg.apache.hadoop.io.Writable; cannot be cast to [Lorg.apache.hadoop.io.IntWritable;
        at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: java.lang.ClassCastException: [Lorg.apache.hadoop.io.Writable; cannot be cast to [Lorg.apache.hadoop.io.IntWritable;
        at IntArrayWritable.get(IntArrayWritable.java:15)
        at IntArrayWritable.toString(IntArrayWritable.java:22)
        at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.writeObject(TextOutputFormat.java:85)
        at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.write(TextOutputFormat.java:104)
        at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:558)
        at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
        at org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105)
        at org.apache.hadoop.mapreduce.Reducer.reduce(Reducer.java:150)
        at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
        at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
        at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

I would really appreciate your help. Thanks!

b0zn9rqh1#

First, Reducer<Text, int[], ...> should use Writable types, not int[]. That said, you could just have the mapper emit the three values as one comma-separated Text; there is no clear benefit to writing your own Writable class only to pass an array around. You can then parse and sum in the reducer, as in the sketch below.
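
A minimal sketch of that approach, assuming the same seven-column CSV input as the question (the class names StockCountText, TextMapper, and SumReducer are illustrative, not from the original post):

import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class StockCountText {

    public static class TextMapper extends Mapper<Object, Text, Text, Text> {
        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] line = value.toString().split(",");
            // mgrno,rdate,cusip,shares,sole,shared,no
            if (line.length > 6) {
                // pack the three counts into one comma-separated Text value
                context.write(new Text(line[0]),
                        new Text(line[4] + "," + line[5] + "," + line[6]));
            }
        }
    }

    public static class SumReducer extends Reducer<Text, Text, Text, Text> {
        @Override
        public void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            long sole = 0, shared = 0, no = 0;
            for (Text val : values) {
                String[] parts = val.toString().split(",");
                sole += Long.parseLong(parts[0]);
                shared += Long.parseLong(parts[1]);
                no += Long.parseLong(parts[2]);
            }
            // emit the three sums in the same comma-separated format
            context.write(key, new Text(sole + "," + shared + "," + no));
        }
    }
}

Since both the map output value and the final output value are Text, the driver would call job.setMapOutputValueClass(Text.class) and job.setOutputValueClass(Text.class); no custom Writable is needed anywhere.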

i86rm4rw2#

I was just dealing with the same thing with a TextArrayWritable class. It seems inelegant to me, but iterating over the array and casting each element did the trick.

import org.apache.hadoop.io.ArrayWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;

public class TextArrayWritable extends ArrayWritable {

    public TextArrayWritable() {
        super(Text.class);
    }

    @Override
    public Text[] get() {
        // super.get() returns a Writable[]; copy it element by element,
        // casting each entry rather than the array object itself
        Writable[] temp = super.get();
        if (temp != null) {
            Text[] items = new Text[temp.length];
            for (int i = 0; i < temp.length; i++) {
                items[i] = (Text) temp[i];
            }
            return items;
        } else {
            return null;
        }
    }
}
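
The same element-by-element copy fixes the IntArrayWritable from the question. The underlying problem is that ArrayWritable.readFields() allocates a plain Writable[] when it deserializes, and an array whose runtime type is Writable[] can never be cast to IntWritable[], even when every element in it is an IntWritable. A sketch of just the corrected get() method (the rest of the class stays unchanged, plus an extra import of org.apache.hadoop.io.Writable):

    @Override
    public IntWritable[] get() {
        // cast each element instead of the array object itself
        Writable[] temp = super.get();
        if (temp == null) {
            return null;
        }
        IntWritable[] items = new IntWritable[temp.length];
        for (int i = 0; i < temp.length; i++) {
            items[i] = (IntWritable) temp[i];
        }
        return items;
    }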
