Hadoop word count in Python MapReduce, sorted by frequency with the YARN comparator

Asked by pdkcd3nj on 2021-05-27 in Hadoop

I want to solve the word count problem and get the results sorted in descending order by how often each word occurs in the file. Below are the four files I wrote for this (two mappers and two reducers, since a single MapReduce job cannot do it):
1) mapper1.py

import sys
import re

reload(sys)
sys.setdefaultencoding('utf-8') # required to convert to unicode

for line in sys.stdin:
    try:
        # input lines look like: article_id<TAB>article text
        article_id, text = unicode(line.strip()).split('\t', 1)
    except ValueError as e:
        continue
    # split the text into words on (punctuation-padded) whitespace
    words = re.split("\W*\s+\W*", text, flags=re.UNICODE)
    for word in words:
        # emit word<TAB>1 for every occurrence
        print "%s\t%d" % (word.lower(), 1)

2) reducer1.py

import sys

current_key = None
word_sum = 0

# the shuffle delivers records grouped by key, so we can sum
# consecutive counts and flush whenever the key changes
for line in sys.stdin:
    try:
        key, count = line.strip().split('\t', 1)
        count = int(count)
    except ValueError as e:
        continue
    if current_key != key:
        if current_key:
            print "%s\t%d" % (current_key, word_sum)
        word_sum = 0
        current_key = key
    word_sum += count

# flush the last key
if current_key:
    print "%s\t%d" % (current_key, word_sum)

3) mapper2.py

import sys
import re

reload(sys)
sys.setdefaultencoding('utf-8') # required to convert to unicode

# identity mapper: re-emit word<TAB>count unchanged;
# the actual sorting is left to the framework's comparator
for line in sys.stdin:
    try:
        word, count = line.strip().split('\t', 1)
        count = int(count)
    except ValueError as e:
        continue

    print "%s\t%d" % (word, count)

4) reducer2.py

import sys

# identity reducer: pass records through in the order
# the framework delivers them
for line in sys.stdin:
    try:
        word, count = line.strip().split('\t', 1)
        count = int(count)
    except ValueError as e:
        continue

    print "%s\t%d" % (word, count)

Below are the two yarn commands I run from bash:

OUT_DIR="wordcount_result_1"
NUM_REDUCERS=8

hdfs dfs -rm -r -skipTrash ${OUT_DIR} > /dev/null

yarn jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-streaming.jar \
    -D mapred.job.name="Streaming wordCount" \
    -D mapreduce.job.reduces=${NUM_REDUCERS} \
    -files mapper1.py,reducer1.py \
    -mapper "python mapper1.py" \
    -combiner "python reducer1.py" \
    -reducer "python reducer1.py" \
    -input /test/articles-part-short \
    -output ${OUT_DIR} > /dev/null

OUT_DIR_2="wordcount_result_2"
NUM_REDUCERS=1

hdfs dfs -rm -r -skipTrash ${OUT_DIR_2} > /dev/null

yarn jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-streaming.jar \
    -D mapred.job.name="Streaming wordCount Rating" \
    -D mapreduce.job.output.key.comparator.class=org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator \
    -D map.output.key.field.separator=\t \
    -D mapreduce.partition.keycomparator.options=-k2,2nr \
    -D mapreduce.job.reduces=${NUM_REDUCERS} \
    -files mapper2.py,reducer2.py \
    -mapper "python mapper2.py" \
    -reducer "python reducer2.py" \
    -input ${OUT_DIR} \
    -output ${OUT_DIR_2} > /dev/null

hdfs dfs -cat ${OUT_DIR_2}/part-00000 | head

This does not give me the correct answer. Can someone explain where it goes wrong?
On the other hand, if in mapper2.py I print the fields the other way around, print "%d\t%s" % (count, word), read them back in reducer2.py as count, word = line.strip().split('\t', 1), and change the option in the second yarn command to -D mapreduce.partition.keycomparator.options=-k1,1nr, I get the correct answer.
Why does it behave differently in these two cases?
Can someone help me understand the comparator options of Hadoop MapReduce?

Answer #1, from b5lpy0ml:

This will work:

yarn jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-streaming.jar \
    -D mapred.job.name="Streaming wordCount rating" \
    -D mapreduce.job.output.key.comparator.class=org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator \
    -D mapreduce.partition.keycomparator.options='-k2nr' \
    -D stream.num.map.output.key.fields=2 \
    -D mapred.map.tasks=1 \
    -D mapreduce.job.reduces=1 \
    -files mapper2.py,reducer2.py \
    -mapper "python mapper2.py" \
    -reducer "python reducer2.py" \
    -input /user/jovyan/assignment0_1563877099149160 \
    -output ${OUT_DIR} > /dev/null
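
The crucial setting is stream.num.map.output.key.fields=2. By default, Hadoop Streaming treats only the text before the first tab as the key, so in your second job the key was just the word and the count was merely the value; the comparator option -k2,2nr therefore referred to a key field that did not exist inside the key, and the job could not sort by count. With stream.num.map.output.key.fields=2, the first two fields (word and count) together form the key, and -k2nr can sort numerically, in reverse, on the count. That is also why your count<TAB>word variant worked with -k1,1nr: there the count was already the first key field. A further suspect in the original command: the unquoted \t in -D map.output.key.field.separator=\t is expanded by bash to the literal letter t; the option can simply be dropped, since tab is the default separator anyway.

Since the comparator mimics Unix sort, its effect can be previewed in the shell (a sketch with made-up counts):

printf 'foo\t3\nbar\t10\nbaz\t2\n' | sort -t$'\t' -k2,2nr
# bar   10
# foo   3
# baz   2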
