python - removing stop words from a dataset

x8goxv8g · asked 2021-07-13

I am trying to remove stop words from a pandas dataset in which every row holds a tokenized list of words. The word lists look like this:

['Uno', ',', 'dos', 'One', ',', 'two', ',', 'tres', ',', 'quatro', 'Yes', ',', 'Wooly', 'Bully', 'Watch', 'it', 'now', ',', 'watch', 'it', 'Here', 'he', 'come', ',', 'here', 'he', 'come', 'Watch', 'it', 'now', ',', 'he', 'git', 'ya', 'Matty', 'told', 'Hattie', 'about', 'a', 'thing', 'she', 'saw', 'Had', 'two', 'big', 'horns', 'and', 'a', 'wooly', 'jaw', 'Wooly', 'Bully', ',', 'Wooly', 'Bully', ',', 'yes', 'drive', 'Wooly', 'Bully', ',', 'Wooly', 'Bully', ',', 'Wooly', 'Bully', 'Hattie', 'told', 'Matty', '``', 'Let', "'s", 'do', "n't", 'take', 'no', 'chance', 'Let', "'s", 'not', 'be', 'L-seven', ',', 'come', 'and', 'learn', 'to', 'dance', "''", 'Wooly', 'Bully', ',', 'Wooly', 'Bully', 'Wooly', 'Bully', ',', 'Wooly', 'Bully', ',', 'Wooly', 'Bully', 'Watch', 'it', 'now', ',', 'watch', 'it', ',', 'watch', 'it', ',', 'watch', 'it', 'Yeah', 'Yeah', ',', 'drive', ',', 'drive', ',', 'drive', 'Matty', 'told', 'Hattie', '``', 'That', "'s", 'the', 'thing', 'to', 'do', 'Get', 'you', 'someone', 'really', 'pull', 'the', 'wool', 'with', 'you', "''", 'Wooly', 'Bully', ',', 'Wooly', 'Bully', 'Wooly', 'Bully', ',', 'Wooly', 'Bully', ',', 'Wooly', 'Bully', 'Watch', 'it', 'now', ',', 'watch', 'it', ',', 'here', 'he', 'come', 'You', 'got', 'it', ',', 'you', 'got', 'it']

To do that I am using the following code:

```python
ret = df['tokenized_lyric'].apply(lambda x: [item for item in x if item.lower() not in stops])
print(ret)
```

This gives me a list like the following:

```
0       [n, , , , n, e, , , w, , , r, e, , , ...
2165    [ , n, r, , , r, , r, , l, , p, r, , , ...
```

It seems to remove almost every character. How can I strip out just the stop words I have defined?

wqnecbli · Answer 1

You are iterating over the characters of a string with your list comprehension. Instead, before calling lower(), use split() and then iterate over the word tokens, like this -

```python
print([i for i in 'hi there'])          # iterating over the characters
print([i for i in 'hi there'.split()])  # iterating over the words
```

```
['h', 'i', ' ', 't', 'h', 'e', 'r', 'e']
['hi', 'there']
```
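To connect this back to the question: if the tokenized_lyric column actually stores each lyric as one plain string, the original lambda compares single characters against the stop list, which is exactly the symptom shown above. A minimal sketch (the sample row and stop list here are made up for illustration):

```python
import pandas as pd

# Hypothetical data: the lyric is stored as one plain string, not a list of tokens.
df = pd.DataFrame({'tokenized_lyric': ["Watch it now, watch it"]})
stops = {'a', 'i', 't', 'it', 'now'}   # toy stop list that includes single letters

# Iterating over a string yields its characters, so single characters are
# filtered against the stop list instead of whole words.
ret = df['tokenized_lyric'].apply(
    lambda x: [item for item in x if item.lower() not in stops]
)
print(ret[0])
# ['W', 'c', 'h', ' ', ' ', 'n', 'o', 'w', ',', ' ', 'w', 'c', 'h', ' ']
```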

Try this lambda function -

```python
s = 'Hello World And Underworld'

stops = ['and', 'or', 'the']

f = lambda x: [item for item in x.split() if item.lower() not in stops]
f(s)
```

```
['Hello', 'World', 'Underworld']
```

With respect to your code, it should be -

```python
df['tokenized_lyric'].apply(lambda x: [item for item in x.split() if item.lower() not in stops])
```
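For example, on a toy frame (the sample rows and stop list below are made up for illustration):

```python
import pandas as pd

# Hypothetical sample data: each row holds the lyric as one plain string.
df = pd.DataFrame({'tokenized_lyric': [
    "Watch it now , watch it",
    "Here he come , here he come",
]})
stops = {'it', 'he', 'now', 'here'}   # toy stop list

# split() produces word tokens, so whole words are filtered, not characters.
ret = df['tokenized_lyric'].apply(
    lambda x: [item for item in x.split() if item.lower() not in stops]
)
print(ret.tolist())
# [['Watch', ',', 'watch'], ['come', ',', 'come']]
```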

cvxl0en2 · Answer 2

```python
from nltk.corpus import stopwords

# stop words from the nltk library
stopwords = stopwords.words('english')

# user-defined stop words
custom_stopwords = ['hey', 'hello']

# complete list of stop words
complete_stopwords = stopwords + custom_stopwords

# remove the stop words from each lyric
df['lyrics_clean'] = df['lyrics'].apply(lambda x: [word for word in x.split() if word not in complete_stopwords])
```
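A runnable sketch of this approach on a toy frame; the download step, the sample lyrics column, and the variable renamed to avoid shadowing the nltk module are assumptions added for illustration:

```python
import nltk
import pandas as pd
from nltk.corpus import stopwords

nltk.download('stopwords')   # one-time download of the stop-word corpus

# nltk's English stop words plus user-defined extras
nltk_stopwords = stopwords.words('english')
custom_stopwords = ['hey', 'hello']
complete_stopwords = set(nltk_stopwords + custom_stopwords)

# Hypothetical sample data: each lyric stored as one plain string.
df = pd.DataFrame({'lyrics': ["hey you watch it now", "hello here he come"]})

df['lyrics_clean'] = df['lyrics'].apply(
    lambda x: [word for word in x.split() if word not in complete_stopwords]
)
print(df['lyrics_clean'].tolist())
# [['watch'], ['come']]
```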
