My spaCy machine learning model is only partially capturing longer entities. How can I fix this?

Question · Votes: 0 · Answers: 1

I have trained a spaCy model on my own data, starting from the pre-existing en_core_web_sm-2.2.0 model. Some entities in my data are only partially captured by the trained model.

for text in ['KOYA MOTORS PRIVATE LTD.', 'KOYAL MOTORS PRIVATE LTD.', 'PUTTAR MOTORS LIMITED', 'BRENSON MOTORS LIMITED', 'MITASHI LIMITED', 'FEDERATION OF KARNATAKA CHAMBERS OF COMMERCE & INDUSTRY']:
    print("#####################")
    print(text, nlp_trained(text).ents)
    print("##")
    for i in nlp_trained(text):
        print(i, i.ent_iob_, i.ent_type_, i.pos_, i.tag_, i.head, i.lang_, i.lemma_)

Output:

#####################
KOYA MOTORS PRIVATE LTD. (MOTORS PRIVATE LTD.,)
##
KOYA O  PROPN NNP LTD en KOYA
MOTORS B ORG PROPN NNP LTD en MOTORS
PRIVATE I ORG PROPN NNP LTD en PRIVATE
LTD I ORG PROPN NNP LTD en LTD
. I ORG PUNCT . LTD en .
#####################
KOYAL MOTORS PRIVATE LTD. (KOYAL MOTORS PRIVATE LTD.,)
##
KOYAL B ORG PROPN NNP LTD en KOYAL
MOTORS I ORG PROPN NNP LTD en MOTORS
PRIVATE I ORG PROPN NNP LTD en PRIVATE
LTD I ORG PROPN NNP LTD en LTD
. I ORG PUNCT . LTD en .
#####################
PUTTAR MOTORS LIMITED (MOTORS LIMITED,)
##
PUTTAR O  NOUN NN LIMITED en puttar
MOTORS B ORG PROPN NNP LIMITED en MOTORS
LIMITED I ORG PROPN NNP LIMITED en LIMITED
#####################
BRENSON MOTORS LIMITED (BRENSON MOTORS LIMITED,)
##
BRENSON B ORG PROPN NNP LIMITED en BRENSON
MOTORS I ORG PROPN NNP LIMITED en MOTORS
LIMITED I ORG PROPN NNP LIMITED en LIMITED
#####################
MITASHI LIMITED ()
##
MITASHI O  PROPN NNP MITASHI en MITASHI
LIMITED O  PROPN NNP MITASHI en LIMITED
#####################
FEDERATION OF KARNATAKA CHAMBERS OF COMMERCE & INDUSTRY (KARNATAKA CHAMBERS OF COMMERCE & INDUSTRY,)
##
FEDERATION O  NOUN NN FEDERATION en federation
OF O  ADP IN FEDERATION en of
KARNATAKA B ORG PROPN NNP CHAMBERS en KARNATAKA
CHAMBERS I ORG NOUN NNS OF en chamber
OF I ORG ADP IN CHAMBERS en of
COMMERCE I ORG PROPN NNP OF en COMMERCE
& I ORG CCONJ CC COMMERCE en &
INDUSTRY I ORG PROPN NNP COMMERCE en INDUSTRY

What are the possible causes of this issue, and how can I correct it?

python nlp spacy ner
1 Answer

0 votes
The code should look like this:

import random
from spacy.gold import GoldParse
from cytoolz import partition_all

# training data
TRAIN_DATA = [
    ("Where is ICICI bank located", {"entities": [(9, 18, "ORG")]}),
    ("I like Thodupuzha and Pala", {"entities": [(7, 16, "LOC"), (22, 25, "LOC")]}),
    ("Thodupuzha is a tourist place", {"entities": [(0, 9, "LOC")]}),
    ("Pala is famous for mangoes", {"entities": [(0, 3, "LOC")]}),
    ("ICICI bank is one of the largest bank in the world", {"entities": [(0, 9, "ORG")]}),
    ("ICICI bank has a branch in Thodupuzha", {"entities": [(0, 9, "ORG"), (27, 36, "LOC")]}),
]

# preparing the revision data: the current model's own predictions,
# mixed into training so the existing entities are not forgotten
revision_data = []
for doc in nlp.pipe(list(zip(*TRAIN_DATA))[0]):
    tags = [w.tag_ for w in doc]
    heads = [w.head.i for w in doc]
    deps = [w.dep_ for w in doc]
    entities = [(e.start_char, e.end_char, e.label_) for e in doc.ents]
    revision_data.append((doc, GoldParse(doc, tags=tags, heads=heads, deps=deps, entities=entities)))

# preparing the fine_tune_data from the new annotations
fine_tune_data = []
for raw_text, entity_offsets in TRAIN_DATA:
    doc = nlp.make_doc(raw_text)
    gold = GoldParse(doc, entities=entity_offsets['entities'])
    fine_tune_data.append((doc, gold))

# training the model
n_epoch = 10
batch_size = 2
for i in range(n_epoch):
    examples = revision_data + fine_tune_data
    losses = {}
    random.shuffle(examples)
    for batch in partition_all(batch_size, examples):
        docs, golds = zip(*batch)
        nlp.update(docs, golds, drop=0.0, losses=losses)

# finding ner with the updated model
nytimes = nlp(sentence)
entities = [(i, i.label_, i.label) for i in nytimes.ents]
print(entities)
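To apply this recipe to the company names from the question, you would add examples covering the full span of each name to TRAIN_DATA. A sketch of how the character offsets can be computed rather than counted by hand (make_example is a hypothetical helper, not part of spaCy; the sentences around the names are made up for illustration):

```python
def make_example(text, span_text, label):
    # Locate the entity span inside the sentence and return a
    # spaCy v2-style training tuple: (text, {"entities": [(start, end, label)]})
    start = text.find(span_text)
    if start == -1:
        raise ValueError(f"{span_text!r} not found in {text!r}")
    return (text, {"entities": [(start, start + len(span_text), label)]})

extra_examples = [
    make_example("KOYA MOTORS PRIVATE LTD. is based in India",
                 "KOYA MOTORS PRIVATE LTD.", "ORG"),
    make_example("MITASHI LIMITED sells electronics",
                 "MITASHI LIMITED", "ORG"),
]
print(extra_examples)
```

Entries built this way can simply be appended to TRAIN_DATA before the training loop, so the model sees the full company names annotated end to end.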
