Suppose I have a list of tuples, top_n, containing the top n most common bigrams found in a text corpus:
import nltk
from nltk import bigrams
from nltk import FreqDist
bi_grams = bigrams(text) # text is a list of strings (tokens)
fdistBigram = FreqDist(bi_grams)
n = 300
top_n = [list(t) for t in zip(*fdistBigram.most_common(n))][0]; top_n
>>> [('let', 'us'),
('us', 'know'),
('as', 'possible')
....
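For reference, the zip(*...) expression above just drops the counts returned by most_common; an equivalent, arguably more readable sketch:
# keep only the bigram tuples, discarding their frequency counts
top_n = [pair for pair, count in fdistBigram.most_common(n)]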
Now I would like to replace occurrences of the bigrams in top_n with their concatenation. For example, suppose we have a new variable query, which is a list of strings:
query = ['please','let','us','know','as','soon','as','possible']
would become
['please','letus', 'usknow', 'as', 'soon', 'aspossible']
after the intended operation. More explicitly, I want to walk through query and check whether the i-th and (i+1)-th elements form a pair that is in top_n; if they do, replace the two words with their concatenation, i.e. (query[i], query[i+1]) -> query[i] + query[i+1].
Is there a way to do this using NLTK, or, if looping over each word in query is unavoidable, what is the best way to do it?
Given your code and query, the following greedily replaces consecutive words with their concatenated bigram whenever the pair occurs in top_n:
lookup = set(top_n)  # e.g. {('let', 'us'), ('as', 'soon')}
query = ['please', 'let', 'us', 'know', 'as', 'soon', 'as', 'possible']
answer = []
q_iter = iter(range(len(query)))
for idx in q_iter:
    answer.append(query[idx])
    if idx < (len(query) - 1) and (query[idx], query[idx+1]) in lookup:
        answer[-1] += query[idx+1]
        next(q_iter)
        # if you don't want to skip over the consumed second bigram element
        # and want to keep len(query) == len(answer), don't advance the
        # iterator here, which also means you don't need to create the
        # iterator in the outer scope (see the sketch after the result below)
print(answer)
Result (for example):
>>> ['please', 'letus', 'know', 'assoon', 'as', 'possible']
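As noted in the comment inside the loop, here is a minimal sketch of the non-skipping variant that keeps len(answer) == len(query), assuming lookup contains the three example bigrams shown for top_n in the question:
lookup = {('let', 'us'), ('us', 'know'), ('as', 'possible')}
query = ['please', 'let', 'us', 'know', 'as', 'soon', 'as', 'possible']
answer = []
for idx, word in enumerate(query):
    answer.append(word)
    # concatenate with the next word when the pair is a known bigram,
    # but still emit that next word on its own in the following iteration
    if idx < len(query) - 1 and (word, query[idx + 1]) in lookup:
        answer[-1] += query[idx + 1]
print(answer)
# ['please', 'letus', 'usknow', 'know', 'as', 'soon', 'aspossible', 'possible']
This handles overlapping bigrams such as ('let', 'us') and ('us', 'know'), but keeps the consumed second words ('know', 'possible') as standalone tokens, which is slightly different from the exact output requested in the question.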
Alternative answer:
from gensim.models.phrases import Phraser
from gensim.models import Phrases
# note: Phrases expects an iterable of tokenized sentences (a list of token
# lists), not a single flat list of tokens
phrases = Phrases(text, min_count=1500, threshold=0.01)
bigram = Phraser(phrases)
bigram[query]
>>> ['please', 'let_us', 'know', 'as', 'soon', 'as', 'possible']
Not exactly the desired output from the question, but it can serve as an alternative. The min_count and threshold parameters will strongly affect the output. Thanks to this question here.
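If the underscore-joined phrases are close enough, one simple way to get the concatenated form from the question is to strip the delimiter afterwards. A sketch, assuming bigram and query are defined as above and that the original tokens contain no underscores:
# remove the delimiter gensim inserts between the words of a detected phrase,
# so 'let_us' becomes 'letus'
merged = [token.replace('_', '') for token in bigram[query]]
print(merged)
# ['please', 'letus', 'know', 'as', 'soon', 'as', 'possible']
Depending on the gensim version, the delimiter argument of Phrases may also let you change the joining character when the model is built.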