MapReduce line frequency of words

woobm2wo · published 2021-07-13 in Hadoop

I am currently working on a Hadoop project in Java. My goal is to make a MapReduce job that counts the line frequency of every word. As in: instead of outputting the exact number of times a word occurs in the input file, it should only count the number of lines the word occurs on. If a word occurs on a line multiple times, it should only be counted once, because we only count how many lines it appears on. I have a basic MapReduce job working, which I will post below, but I am a bit lost on how to count only the line frequency of the words instead of the full word count. Any help would be greatly appreciated, thanks so much.
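For example, given the two lines "world hello world" and "goodbye world", the desired output for "world" would be 2 (it appears on two lines), not 3 (three occurrences).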
Mapper code

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MapWordCount extends Mapper<LongWritable, Text, Text, IntWritable>
{
    private Text wordToken = new Text();

    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException
    {
        // dividing the line into tokens, treating digits, punctuation, and symbols as delimiters
        StringTokenizer tokens = new StringTokenizer(value.toString(), "[_|$#0123456789<>\\^=\\[\\]\\*/\\\\,;,.\\-:()?!\"']");
        while (tokens.hasMoreTokens())
        {
            wordToken.set(tokens.nextToken());
            context.write(wordToken, new IntWritable(1)); // one (word, 1) pair per occurrence, not per line
        }
    }
}
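To see why this mapper produces a full word count rather than a line frequency: for the single line "hello world hello", it emits (hello, 1) twice and (world, 1) once, so the reducer below ends up outputting hello 2 even though hello appears on only one line.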

Reducer code

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class ReduceWordCount extends Reducer<Text, IntWritable, Text, IntWritable>
{
    private IntWritable count = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException
    {
        // sum up all of the 1s emitted for this word
        int valueSum = 0;
        for (IntWritable val : values)
        {
            valueSum += val.get();
        }
        count.set(valueSum);
        context.write(key, count);
    }
}

Driver code

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount
{
    public static void main(String[] args) throws Exception
    {
        Configuration conf = new Configuration();
        String[] pathArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (pathArgs.length < 2)
        {
            System.err.println("MR Project Usage: wordcount <input-path> [...] <output-path>");
            System.exit(2);
        }

        Job wcJob = Job.getInstance(conf, "MapReduce WordCount");
        wcJob.setJarByClass(WordCount.class);
        wcJob.setMapperClass(MapWordCount.class);
        wcJob.setCombinerClass(ReduceWordCount.class);
        wcJob.setReducerClass(ReduceWordCount.class);
        wcJob.setOutputKeyClass(Text.class);
        wcJob.setOutputValueClass(IntWritable.class);

        // every path argument except the last one is an input path
        for (int i = 0; i < pathArgs.length - 1; ++i)
        {
            FileInputFormat.addInputPath(wcJob, new Path(pathArgs[i]));
        }
        // the last path argument is the output path
        FileOutputFormat.setOutputPath(wcJob, new Path(pathArgs[pathArgs.length - 1]));
        System.exit(wcJob.waitForCompletion(true) ? 0 : 1);
    }
}

tct7dpnv · answer #1

Things are surprisingly simple in this use case of Hadoop's MapReduce, because Hadoop tends to read input documents line by line, even when FileInputFormat is explicitly specified as the input format of the MR job's data (this goes far beyond the scope of your question, but you can check out information about maps and file splits in Hadoop here and here).
Since each mapper instance will receive a single line as its input, the only things you have to worry about are:
1. splitting the text into words (after cleaning up punctuation, whitespace, converting everything to lowercase, etc.),
2. getting rid of the duplicates, so that only the unique words of each line remain, and
3. emitting every unique word as a key with 1 as its value, classic WordCount style.
For step 2 you can use a HashSet, a Java data structure that keeps only unique elements and ignores duplicates: load every token into it, then iterate over it to write the key-value pairs and send them to the reducer instances, as sketched below.
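A minimal, standalone sketch of steps 1 and 2 (plain Java, no Hadoop; the sample line is just an illustration):

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class DedupeSketch
{
    public static void main(String[] args)
    {
        String line = "hello world! hello! how are you, world?";

        // step 1: clean up the punctuation and split into lowercase words
        String[] tokens = line.toLowerCase()
                              .replaceAll("[^a-z ]", " ")
                              .trim()
                              .split("\\s+");

        // step 2: the HashSet silently drops the duplicate "hello" and "world"
        Set<String> unique = new HashSet<>(Arrays.asList(tokens));

        System.out.println(unique); // e.g. [how, world, you, hello, are]
    }
}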
An application of this type could look like the following (I changed the way words are split in the map function, because your version did not seem to split into individual words, but rather split between punctuation marks):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

import java.io.IOException;
import java.util.*;

public class LineFreq
{
    public static class MapWordCount extends Mapper <LongWritable, Text, Text, IntWritable>
    {
        private Text wordToken = new Text();
        private static final IntWritable one = new IntWritable(1);

        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException
        {
            // divide the line into clean, lowercase tokens
            String[] tokens = value.toString()
                    .replaceAll("\\d+", "")        // get rid of numbers...
                    .replaceAll("[^a-zA-Z ]", " ") // get rid of punctuation...
                    .toLowerCase()                 // turn every letter to lowercase...
                    .trim()                        // trim the spaces...
                    .replaceAll("\\s+", " ")       // collapse runs of whitespace...
                    .split(" ");

            Set<String> word_set = new HashSet<String>();   // set holding only the unique words of this line (no duplicates)

            // add words to the word set, skipping the single empty token produced by blank lines
            for (String word : tokens)
                if (!word.isEmpty())
                    word_set.add(word);

            // write each unique word to have one occurrence in this particular line
            for(String word : word_set)
            {
                wordToken.set(word);
                context.write(wordToken, one);
            }

        }
    }

    public static class ReduceWordCount extends Reducer <Text, IntWritable, Text, IntWritable>
    {
        private IntWritable count = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException
        {
            int valueSum = 0;

            for (IntWritable val : values)
              valueSum += val.get();

            count.set(valueSum);
            context.write(key, count);
        }
    }

    public static void main(String[] args) throws Exception
    {
        Configuration conf = new Configuration();
        String[] pathArgs = new GenericOptionsParser(conf, args).getRemainingArgs();

        if (pathArgs.length < 2)
        {
            System.err.println("MR Project Usage: wordcount <input-path> [...] <output-path>");
            System.exit(2);
        }

        Job wcJob = Job.getInstance(conf, "MapReduce WordCount");
        wcJob.setJarByClass(LineFreq.class);
        wcJob.setMapperClass(MapWordCount.class);
        wcJob.setCombinerClass(ReduceWordCount.class);
        wcJob.setReducerClass(ReduceWordCount.class);
        wcJob.setOutputKeyClass(Text.class);
        wcJob.setOutputValueClass(IntWritable.class);
        for (int i = 0; i < pathArgs.length - 1; ++i)
        {
            FileInputFormat.addInputPath(wcJob, new Path(pathArgs[i]));
        }
        FileOutputFormat.setOutputPath(wcJob, new Path(pathArgs[pathArgs.length - 1]));
        System.exit(wcJob.waitForCompletion(true) ? 0 : 1);
    }
}
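Assuming the classes above are packaged into a jar named linefreq.jar (the jar name and the input/output paths below are just placeholders), the job can be launched in the usual way, e.g. hadoop jar linefreq.jar LineFreq input_dir output_dir.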

So we can test it using the following document as input:

hello world! hello! how are you, world?
i am fine! world world world! hello to you too!
what a wonderful world!
amazing world i must say, indeed

and confirm that the word frequencies are indeed computed per line rather than per occurrence.
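Tracing the mapper and reducer above over those four lines by hand, each word should come out paired with the number of lines it appears on, along these lines:

a	1
am	1
amazing	1
are	1
fine	1
hello	2
how	1
i	2
indeed	1
must	1
say	1
to	1
too	1
what	1
wonderful	1
world	4
you	2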
