Difference between IOB accuracy and precision


I'm doing some work on named entity recognition and chunkers in NLTK. I retrained a classifier using nltk/chunk/named_entity.py and got the following measures:

ChunkParse score:
    IOB Accuracy:  96.5%
    Precision:     78.0%
    Recall:        91.9%
    F-Measure:     84.4%

But I can't understand what the exact difference between IOB accuracy and precision is in this case. In fact, I found the following specific example in the documentation (here):

The IOB tag accuracy indicates that more than a third of the words are tagged with O, i.e. not in an NP chunk. However, since our tagger did not find any chunks, its precision, recall, and f-measure are all zero.
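For reference, the documentation example that this quote refers to evaluates an empty chunker against the CoNLL-2000 chunked corpus. A minimal sketch of it, assuming the conll2000 corpus data is installed, looks like this:

import nltk
from nltk.corpus import conll2000

# A chunker with no rules: it never proposes any chunk,
# so every word ends up tagged O.
cp = nltk.RegexpParser("")
test_sents = conll2000.chunked_sents('test.txt', chunk_types=['NP'])
print(cp.evaluate(test_sents))
# IOB accuracy comes out well above zero (every gold O tag is
# "guessed" correctly), yet precision, recall and f-measure
# are all 0.0% because no chunk was ever proposed.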

So, if the IOB accuracy just reflects the number of O tags, why, in that example, does the chunker find no chunks and yet the IOB accuracy is still not 100%?

Thanks in advance.

python nlp nltk precision named-entity-recognition
1 Answer

Wikipedia has a very detailed explanation of the difference between accuracy and precision (see https://en.wikipedia.org/wiki/Accuracy_and_precision); in short:

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
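As a quick illustration, here is a minimal sketch of both measures computed from the four confusion counts (the counts themselves are made up):

def accuracy(tp, tn, fp, fn):
    # Fraction of all decisions that were right.
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    # Fraction of positive guesses that were right.
    return tp / (tp + fp)

# Hypothetical counts: 8 true positives, 80 true negatives,
# 2 false positives, 10 false negatives.
print(accuracy(8, 80, 2, 10))  # 0.88
print(precision(8, 2))         # 0.8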

Coming back to NLTK, there is a class called ChunkScore that computes the accuracy, precision and recall of your system. And here is the funny part about how NLTK computes tp, fp, tn, fn for accuracy and precision: it does so at different granularities.

For accuracy, NLTK counts the total number of tokens (NOT CHUNKS!!) whose POS tag and IOB tag were guessed correctly, then divides by the total number of tokens in the gold sentence:

accuracy = num_tokens_correct / total_num_tokens_from_gold
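A minimal sketch of that token-level comparison, using a hypothetical pair of tagged sentences rather than NLTK's actual internals:

# Each token is a (word, pos, iob) triple; made-up example data.
gold    = [('The', 'DT', 'B-NP'), ('cat', 'NN', 'I-NP'), ('sat', 'VBD', 'O')]
guessed = [('The', 'DT', 'B-NP'), ('cat', 'NN', 'I-NP'), ('sat', 'VBD', 'B-NP')]

# A token only counts as correct if all of its tags match the gold.
correct = sum(1 for g, h in zip(gold, guessed) if g == h)
print(correct / len(gold))  # 2 of 3 tokens match -> ~0.67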

For precision and recall, NLTK counts:

  • True Positives by counting the number of chunks (NOT TOKENS!!!) that were guessed correctly
  • False Positives by counting the number of chunks (NOT TOKENS!!!) that were guessed but are wrong
  • False Negatives by counting the number of chunks (NOT TOKENS!!!) that the system missed

Then the precision and recall are computed as:

precision = tp / (tp + fp)
recall = tp / (tp + fn)
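For example, if the chunker proposes 3 chunks of which 2 exactly match gold chunks (tp = 2, fp = 1), and one gold chunk is never found (fn = 1), then precision = 2 / (2 + 1) ≈ 0.67 and recall = 2 / (2 + 1) ≈ 0.67. Note that a chunk only counts as a true positive when its span matches exactly; a partially overlapping guess contributes both a false positive and a false negative.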

To prove the points above, try the following script:

from nltk.chunk.regexp import ChunkRule, RegexpChunkParser
from nltk.chunk.util import tagstr2tree, ChunkScore
from nltk.tag import pos_tag

# Let's say we give it a rule that says anything with a [DT NN] is an NP.
chunk_rule = ChunkRule("<DT>?<NN.*>", "DT+NN* or NN* chunk")
chunk_parser = RegexpChunkParser([chunk_rule], chunk_label='NP')

# Let's say our test sentence is:
# "The cat sat on the mat the big dog chewed."
gold = tagstr2tree("[ The/DT cat/NN ] sat/VBD on/IN [ the/DT mat/NN ] [ the/DT big/JJ dog/NN ] chewed/VBD ./.")

# We POS tag the sentence and then chunk with our rule-based chunker.
test = pos_tag('The cat sat on the mat the big dog chewed .'.split())
chunked = chunk_parser.parse(test)

# Then we calculate the score.
chunkscore = ChunkScore()
chunkscore.score(gold, chunked)
chunkscore._updateMeasures()  # force the internal tp/fp/fn sets to be computed now

# Our rule-based chunker says these are chunks.
print(chunkscore.guessed())

# Total number of tokens from the test sentence, i.e.
# The/DT , cat/NN , sat/VBD , on/IN , the/DT , mat/NN ,
# the/DT , big/JJ , dog/NN , chewed/VBD , ./.
total = chunkscore._tags_total
# Number of tokens that are guessed correctly, i.e.
# The/DT , cat/NN , on/IN , the/DT , mat/NN , chewed/VBD , ./.
correct = chunkscore._tags_correct
print("Is correct/total == accuracy ?", chunkscore.accuracy() == (correct / total))
print(correct, '/', total, '=', chunkscore.accuracy())
print("##############")

print "Correct chunk(s):" # i.e. True Positive.
correct_chunks = set(chunkscore.correct()).intersection(set(chunkscore.guessed()))
##print correct_chunks
print "Number of correct chunks = tp = ", len(correct_chunks)
assert len(correct_chunks) == chunkscore._tp_num
print

print "Missed chunk(s):" # i.e. False Negative.
##print chunkscore.missed()
print "Number of missed chunks = fn = ", len(chunkscore.missed())
assert len(chunkscore.missed()) == chunkscore._fn_num
print 

print "Wrongly guessed chunk(s):" # i.e. False positive.
wrong_chunks = set(chunkscore.guessed()).difference(set(chunkscore.correct()))
##print wrong_chunks
print "Number of wrong chunks = fp =", len(wrong_chunks)
print chunkscore._fp_num
assert len(wrong_chunks) == chunkscore._fp_num
print 

print "Recall = ", "tp/fn+tp =", len(correct_chunks), '/', len(correct_chunks)+len(chunkscore.missed()),'=', chunkscore.recall()

print "Precision =", "tp/fp+tp =", len(correct_chunks), '/', len(correct_chunks)+len(wrong_chunks), '=', chunkscore.precision()