Does gensim.corpora.Dictionary save term frequencies?

From a gensim.corpora.Dictionary, one can get the document frequency of a word (i.e. in how many documents a particular word appears):
from nltk.corpus import brown
from gensim.corpora import Dictionary
documents = brown.sents()
brown_dict = Dictionary(documents)
# The 100th word in the dictionary: 'these'
print('The word "' + brown_dict[100] + '" appears in', brown_dict.dfs[100],'documents')
[out]: The word "these" appears in 1213 documents

There is also the filter_n_most_frequent(remove_n) function, which removes the n most frequent tokens. Its documentation says:

filter_n_most_frequent(remove_n)
Filter out the 'remove_n' most frequent tokens that appear in the documents. After the pruning, shrink the resulting gaps in word ids.
Note: Due to the gap shrinking, the same word may have a different word id before and after calling this function!

Does filter_n_most_frequent(remove_n) remove the n most frequent tokens based on document frequency or on term frequency? If the latter, is there some way to access the term frequencies of the words in the gensim.corpora.Dictionary object?
No, gensim.corpora.Dictionary does not save term frequencies. You can see the source code here. This means that everything in the class defines frequency as document frequency, never term frequency, since the latter is never stored globally; this applies to filter_n_most_frequent(remove_n) as well as to all other methods. The class only stores the following member variables:
self.token2id = {} # token -> tokenId
self.id2token = {} # reverse mapping for token2id; only formed on request, to save memory
self.dfs = {} # document frequencies: tokenId -> in how many documents this token appeared
self.num_docs = 0 # number of documents processed
self.num_pos = 0 # total number of corpus positions
self.num_nnz = 0 # total number of non-zeroes in the BOW matrix
The dictionary doesn't have it, but the corpus does. Could you do something like this? It's an efficient way of calculating the term frequencies from the bow representation, instead of creating dense vectors:
import pandas as pd
from gensim import corpora

dictionary = corpora.Dictionary(documents)
corpus = [dictionary.doc2bow(sent) for sent in documents]
vocab = list(dictionary.values())  # list of terms in the dictionary
vocab_tf = [dict(i) for i in corpus]  # one {token_id: count} dict per document
vocab_tf = list(pd.DataFrame(vocab_tf).sum(axis=0))  # list of term frequencies
I have a simple question. It seems the frequencies of the words are hidden and not accessible in the object; I don't know why, and it makes testing and validation painful. What I did was export the dictionary as text:

dictionary.save_as_text('c:\\research\\gensimDictionary.txt')

In that text file there are three columns: key, word and frequency. For example, the words "summit", "summon" and "sumo":

key word frequency
10 summit 1227
3658 summon 118
8477 sumo 40

You can also compute the term frequencies from a saved corpus:

# Term frequency
from numpy import array
from gensim import corpora
# load dictionary
dictionary = corpora.Dictionary.load('YourDict.dict')
# load corpus
corpus = corpora.MmCorpus('YourCorpus.mm')
CorpusTermFrequency = array([[(dictionary[id], freq) for id, freq in cp] for cp in corpus])

Or accumulate the counts from the bow representation yourself:

corpus = [dictionary.doc2bow(sent) for sent in documents]
vocab_tf = {}
for i in corpus:
    for item, count in dict(i).items():
        if item in vocab_tf:
            vocab_tf[item] += count
        else:
            vocab_tf[item] = count

I found a solution: .cfs is the word frequency.