sklearn TfidfVectorizer: custom ngrams that exclude characters from a regex pattern


I want to perform custom ngram vectorization with sklearn's TfidfVectorizer. The generated ngrams should not contain any character from a given regex pattern. Unfortunately, when analyzer='char' (ngram mode), the custom tokenization function is completely ignored. See the following example:

import re
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

pattern = re.compile(r'[\.-]')  # split on '.' and on '-'

def tokenize(text):
    return pattern.split(text)

corpus = np.array(['abc.xyz', 'zzz-m.j'])

# word vectorization
tfidf_vectorizer = TfidfVectorizer(tokenizer=tokenize, analyzer='word', stop_words='english')
tfidf_vectorizer.fit_transform(corpus)
print(tfidf_vectorizer.vocabulary_)
# Output -> {'abc': 0, 'xyz': 3, 'zzz': 4, 'm': 2, 'j': 1}
# This is ok!

# ngram vectorization
tfidf_vectorizer = TfidfVectorizer(tokenizer=tokenize, analyzer='char', ngram_range=(2, 2))
tfidf_vectorizer.fit_transform(corpus)
print(tfidf_vectorizer.vocabulary_)
# Output -> {'ab': 3, 'bc': 4, 'c.': 5, '.x': 2, 'xy': 7, 'yz': 8, 'zz': 10, 'z-': 9, '-m': 0, 'm.': 6, '.j': 1}
# This is not ok! I don't want ngrams to include the '.' and '-' chars used for tokenization

What is the best approach?

python scikit-learn nlp tf-idf
1 Answer

According to the documentation, tokenizer is only used when analyzer='word'. Here is the exact wording:

tokenizer (default=None): Override the string tokenization step while preserving the preprocessing and n-grams generation steps. Only applies if analyzer == 'word'.
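One workaround, not stated in the original answer, is to pass a callable as analyzer: per the same documentation, a callable analyzer replaces the whole feature-extraction step, so you can split on the pattern first and only then build character ngrams within each token. This is a sketch; the helper name char_ngram_analyzer is my own.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer

pattern = re.compile(r'[\.-]')  # split on '.' and on '-'

def char_ngram_analyzer(text, n=2):
    # Split first, then build character n-grams inside each token,
    # so no n-gram ever spans a '.' or '-' boundary.
    ngrams = []
    for token in pattern.split(text):
        ngrams.extend(token[i:i + n] for i in range(len(token) - n + 1))
    return ngrams

tfidf_vectorizer = TfidfVectorizer(analyzer=char_ngram_analyzer)
tfidf_vectorizer.fit_transform(['abc.xyz', 'zzz-m.j'])
print(tfidf_vectorizer.vocabulary_)
# Output -> {'ab': 0, 'bc': 1, 'xy': 2, 'yz': 3, 'zz': 4}
```

Note that a callable analyzer also bypasses sklearn's preprocessing (e.g. lowercasing), so apply any preprocessing you need inside the callable itself.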
