How do I count POS tags using PySpark and NLTK?

0aydgbwb asked on 2021-07-12 in Spark

I have some text (or a large file) and I need to count the number of POS tags using NLTK and PySpark. I couldn't find a way to import the text file, so I tried adding a short string instead, but it fails.
The counting step needs to use PySpark.


import nltk
import re

# textfile = sc.textFile('')   # could not get a file to load this way
# or:
textstring = """This is just a bunch of words to use for this example.  John gave ##them to me last night but Kim took them to work.  Hi Stacy.  ###'''URL:http://example.com'''"""

tstring = sc.parallelize([textstring]).collect()

TOKEN_RE = re.compile(r"\b[\w']+\b")

dropURL = tstring.filter(lambda x: "URL" not in x)

words = dropURL.flatMap(lambda dropURL: dropURL.split(" "))

nltkwords = words.flatMap(lambda words: nltk.tag.pos_tag(nltk.regexp_tokenize(words, TOKEN_RE)))

# word_counts = nltkwords.map(lambda nltkwords: (nltkwords, 1))

nltkwords.take(50)
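As an aside (not part of the original question), the URL filter and the tokenizing regex can be checked with plain Python before bringing Spark into it. The filter works per RDD element, so whether the URL line survives depends on how the text was split; the variable names below are illustrative only:

```python
import re

TOKEN_RE = re.compile(r"\b[\w']+\b")

# One space-split word per element, plus the URL fragment as its own element
words = "John gave ##them to me last night".split(" ")
url_part = "###'''URL:http://example.com'''"

# The filter drops any element that contains the substring "URL"
kept = [w for w in words + [url_part] if "URL" not in w]

# The regex then strips punctuation such as the leading '#' characters
tokens = [t for w in kept for t in TOKEN_RE.findall(w)]
```

This shows why the original attempt lost everything: with the whole text as one element, the single element contains "URL" and the filter removes it all.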

bejyjqdl1#

Here's an example with your test string. I think you were just missing one step: splitting the string on spaces. Otherwise the whole line gets dropped, because "URL" appears somewhere in that line.

import nltk
import re

textstring = """This is just a bunch of words to use for this example.  John gave ##them to me last night but Kim took them to work.  Hi Stacy.  ###'''URL:http://example.com'''"""

TOKEN_RE = re.compile(r"\b[\w']+\b")
text = sc.parallelize(textstring.split(' '))
dropURL = text.filter(lambda x: "URL" not in x)

words = dropURL.flatMap(lambda dropURL: dropURL.split(" "))

nltkwords = words.flatMap(lambda words: nltk.tag.pos_tag(nltk.regexp_tokenize(words, TOKEN_RE)))

nltkwords.collect()

# [('This', 'DT'), ('is', 'VBZ'), ('just', 'RB'), ('a', 'DT'), ('bunch', 'NN'), ('of', 'IN'), ('words', 'NNS'), ('to', 'TO'), ('use', 'NN'), ('for', 'IN'), ('this', 'DT'), ('example', 'NN'), ('John', 'NNP'), ('gave', 'VBD'), ('them', 'PRP'), ('to', 'TO'), ('me', 'PRP'), ('last', 'JJ'), ('night', 'NN'), ('but', 'CC'), ('Kim', 'NNP'), ('took', 'VBD'), ('them', 'PRP'), ('to', 'TO'), ('work', 'NN'), ('Hi', 'NN'), ('Stacy', 'NN')]

To count the occurrences of each POS tag, you can do a reduceByKey:

word_counts = nltkwords.map(lambda x: (x[1], 1)).reduceByKey(lambda x, y: x + y)

word_counts.collect()

# [('NNS', 1), ('TO', 3), ('CC', 1), ('DT', 3), ('JJ', 1), ('VBZ', 1), ('RB', 1), ('NN', 7), ('VBD', 2), ('PRP', 3), ('IN', 2), ('NNP', 2)]
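As a local sanity check (not from the original answer), the same totals can be reproduced on the driver with `collections.Counter`, the single-machine analogue of the `map`/`reduceByKey` pair above:

```python
from collections import Counter

# (word, POS-tag) pairs as collected from the RDD above
tagged = [('This', 'DT'), ('is', 'VBZ'), ('just', 'RB'), ('a', 'DT'),
          ('bunch', 'NN'), ('of', 'IN'), ('words', 'NNS'), ('to', 'TO'),
          ('use', 'NN'), ('for', 'IN'), ('this', 'DT'), ('example', 'NN'),
          ('John', 'NNP'), ('gave', 'VBD'), ('them', 'PRP'), ('to', 'TO'),
          ('me', 'PRP'), ('last', 'JJ'), ('night', 'NN'), ('but', 'CC'),
          ('Kim', 'NNP'), ('took', 'VBD'), ('them', 'PRP'), ('to', 'TO'),
          ('work', 'NN'), ('Hi', 'NN'), ('Stacy', 'NN')]

# Count occurrences of the tag (second element of each pair)
tag_counts = Counter(tag for _, tag in tagged)
# tag_counts['NN'] == 7, matching the reduceByKey output
```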
