AttributeError: 'spacy.tokens.doc.Doc' object has no attribute 'lower'

Question · votes: 1 · answers: 1

I am appending texts to a list, converting them to word embeddings, and then running machine learning on them. The "insts" in "articles" were collected with spaCy, but I then hit the error below. Can someone tell me how to fix it? Can I convert the type 'spacy.tokens.doc.Doc' to 'str'?

# Imports assumed from the rest of the script (not shown in the original
# snippet); `reader` and MAX_NUM_WORDS are defined elsewhere in it.
import random
import numpy as np
from keras.preprocessing.text import Tokenizer

def main(annotations_file, max_insts=-1):
    articles = reader.read_corpus(annotations_file, max_insts=max_insts)
    texts = []
    random.seed(5)
    random.shuffle(articles)
    # arti = list()
    sect = list()
    label_bef = list()
    label_dur = list()
    label_aft = list()

    for insts in articles:
        for inst in insts:
            # article_title_doc is a spacy.tokens.doc.Doc, not a str --
            # this is what later trips up the Keras Tokenizer
            texts.append(inst.possessor.doc._.article_title_doc)
            # sect.append(inst.possessor.doc._.section_title_doc)
            label_bef.append(inst.labels['BEF'])
            label_dur.append(inst.labels['DUR'])
            label_aft.append(inst.labels['AFT'])

    # Load pre-trained GloVe vectors into a word -> vector dict
    embeddings_index = {}
    with open('glove.6B.100d.txt') as f:
        for line in f:
            word, coefs = line.split(maxsplit=1)
            coefs = np.fromstring(coefs, 'f', sep=' ')
            embeddings_index[word] = coefs

    tokenizer = Tokenizer(num_words=MAX_NUM_WORDS)
    tokenizer.fit_on_texts(texts)  # fails: entries are Doc objects, not strings
    word_index = tokenizer.word_index
Traceback (most recent call last):
  File "sample.py", line 117, in <module>
    main(args.ANNOTATIONS_FILE, args.max_articles)
  File "sample.py", line 51, in main
    tokenizer.fit_on_texts(texts)
  File "/home/huweilong/miniconda3/envs/nre/lib/python3.6/site-packages/keras_preprocessing/text.py", line 223, in fit_on_texts
    self.split)
  File "/home/huweilong/miniconda3/envs/nre/lib/python3.6/site-packages/keras_preprocessing/text.py", line 43, in text_to_word_sequence
    text = text.lower()
AttributeError: 'spacy.tokens.doc.Doc' object has no attribute 'lower'
Tags: python, tokenize, spacy, doc
1 Answer

0 votes

You can get the string representation of a spaCy Doc by calling doc.text. So in your loop, append inst.possessor.doc._.article_title_doc.text instead of the Doc object itself, and fit_on_texts will receive plain strings.
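To illustrate why the error happens and how `.text` fixes it, here is a minimal sketch. The `Doc` class below is a hypothetical stand-in mimicking `spacy.tokens.doc.Doc` (it stores a string and exposes it via `.text`, and like the real class it has no `.lower()` method), so the example runs without spaCy or Keras installed:

```python
class Doc:
    """Hypothetical stand-in for spacy.tokens.doc.Doc: holds text,
    exposes it via .text, and (like the real class) has no .lower()."""
    def __init__(self, text):
        self.text = text

docs = [Doc("First Article Title"), Doc("Second Article Title")]

# This is what Keras' Tokenizer does internally to each entry,
# and why the traceback ends in text.lower():
try:
    docs[0].lower()
except AttributeError as e:
    print(e)  # the same kind of error as in the question

# Fix: convert each Doc to a plain str before building the list.
texts = [doc.text for doc in docs]
print(texts[0].lower())  # a plain str supports .lower()
```

In the question's code the one-line fix is `texts.append(inst.possessor.doc._.article_title_doc.text)`.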
