Lemmatize in a list comprehension only if the word is longer than X

Question — votes: 1, answers: 1

I have the following function, which takes a list of word tokens, converts each part-of-speech tag into a WordNet-readable format, and uses that to lemmatize each token. I then apply it to a list of lists of word tokens:

from nltk import pos_tag
from nltk.stem import WordNetLemmatizer
from nltk.corpus import wordnet as wn

def getWordNetPOS(POStag):
    def is_noun(POStag):
        return POStag in ['NN', 'NNS', 'NNP', 'NNPS']
    def is_verb(POStag):
        return POStag in ['VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ']
    def is_adverb(POStag):
        return POStag in ['RB', 'RBR', 'RBS']
    def is_adjective(POStag):
        return POStag in ['JJ', 'JJR', 'JJS']

    if is_noun(POStag):
        return wn.NOUN
    elif is_verb(POStag):
        return wn.VERB
    elif is_adverb(POStag):
        return wn.ADV
    elif is_adjective(POStag):
        return wn.ADJ
    else:
        # if not noun, verb, adverb or adjective, return noun
        return wn.NOUN
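The branching above maps Penn Treebank tags onto WordNet's part-of-speech constants, which are just one-letter strings (`wn.NOUN == 'n'`, `wn.VERB == 'v'`, `wn.ADV == 'r'`, `wn.ADJ == 'a'`). A minimal stand-alone sketch of the same mapping, written without importing NLTK so it can run anywhere (the `startswith` checks cover exactly the same tag lists as the original function):

```python
# WordNet POS constants are plain one-letter strings:
# wn.NOUN == 'n', wn.VERB == 'v', wn.ADV == 'r', wn.ADJ == 'a'
def get_wordnet_pos(pos_tag):
    """Map a Penn Treebank tag to a WordNet POS letter (noun by default)."""
    if pos_tag.startswith('NN'):   # NN, NNS, NNP, NNPS
        return 'n'                 # wn.NOUN
    if pos_tag.startswith('VB'):   # VB, VBD, VBG, VBN, VBP, VBZ
        return 'v'                 # wn.VERB
    if pos_tag.startswith('RB'):   # RB, RBR, RBS
        return 'r'                 # wn.ADV
    if pos_tag.startswith('JJ'):   # JJ, JJR, JJS
        return 'a'                 # wn.ADJ
    return 'n'                     # fall back to noun, as in the original

print(get_wordnet_pos('VBD'))  # 'v'
print(get_wordnet_pos('XYZ'))  # 'n' (the noun fallback)
```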

# lemmatize word tokens
def lemmas(wordtokens):
    lemmatizer = WordNetLemmatizer()
    POStag = pos_tag(wordtokens)
    wordtokens = [lemmatizer.lemmatize(token[0], getWordNetPOS(token[1]))
                  for token in POStag]

    return wordtokens

lemmatizedList = []
mylist = [['this','is','my','first','sublist'],['this','is','my','second','sublist']]

for ls in mylist:
    x = lemmas(ls)
    lemmatizedList.append(x)

I would like to find a way to restrict lemmatization to tokens above a set length (i.e. 2), but, crucially, I also want to keep the original form of any word below that threshold. The closest I have come is adding `if len(token[0])>2` at the end of the `wordtokens` list comprehension inside the `lemmas` function, but that returns only the lemmatized tokens. Similarly, I tried adding something like `else token for token in POStag` after the `if` clause, but I get a syntax error. For clarity, this is what I mean:

wordtokens = [lemmatizer.lemmatize(token[0], getWordNetPOS(token[1]))
              for token in POStag if len(token[0])>2
              else token for token in POStag]

I hope it is a simple mistake and that I am just uncovering a Python blind spot of my own.

python list-comprehension wordnet

1 Answer

1 vote

It really is just a "blind spot".
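The underlying rule is Python's comprehension grammar: a trailing `if` in a comprehension is a *filter* and cannot take an `else`, whereas a conditional expression (`x if cond else y`) can return either value but must come *before* the `for` clause. A minimal sketch of the corrected shape, using a hypothetical stand-in for the lemmatizer (upper-casing a token) so it runs without NLTK:

```python
# Stand-in for lemmatizer.lemmatize, purely for illustration
def fake_lemmatize(word):
    return word.upper()

pos_tagged = [('this', 'DT'), ('is', 'VBZ'), ('my', 'PRP$'),
              ('first', 'JJ'), ('sublist', 'NN')]

# Broken:  [... for token in pos_tagged if cond else ...]  -> SyntaxError
# Correct: put the `if ... else ...` before the `for` clause.
wordtokens = [fake_lemmatize(token[0]) if len(token[0]) > 2 else token[0]
              for token in pos_tagged]

print(wordtokens)  # ['THIS', 'is', 'my', 'FIRST', 'SUBLIST']
```

In the original `lemmas` function the same shape would presumably read `lemmatizer.lemmatize(token[0], getWordNetPOS(token[1])) if len(token[0]) > 2 else token[0] for token in POStag`. Note `else token[0]`, not `else token`: the latter would keep the whole `(word, tag)` tuple rather than the word.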
