RoBERTa transformer for NER gives an index out of range error

Problem description

I have a function below that tokenizes my inputs and aligns my labels, but it gives me an error:

def tokenize_and_align_labels(examples, label_all_tokens=True):
    tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
    labels = []
    for i, label in enumerate(examples["ner_tags"]):
        word_ids = tokenized_inputs.word_ids(batch_index=i)
        # word_ids() returns a list mapping each token to the word it
        # belongs to in the initial sentence.
        previous_word_idx = None
        label_ids = []
        # Special tokens like `<s>` and `</s>` are mapped to None.
        # We set their label to -100 so they are automatically ignored
        # by the loss function.
        for word_idx in word_ids:
            if word_idx is None:
                # set -100 as the label for these special tokens
                label_ids.append(-100)
            elif word_idx != previous_word_idx:
                # first token of a new word: the regular case,
                # append the corresponding label
                label_ids.append(label[word_idx])
            else:
                # sub-word tokens that share the same word_idx: label them
                # too, or mask them with -100 if label_all_tokens is False
                label_ids.append(label[word_idx] if label_all_tokens else -100)
            previous_word_idx = word_idx
        labels.append(label_ids)
    tokenized_inputs["labels"] = labels
    return tokenized_inputs
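
For context, here is a minimal sketch of what word_ids() maps tokens to (assuming the transformers library and a fast tokenizer checkpoint such as roberta-base; the checkpoint name and sample sentence are illustrative assumptions, not taken from the question):

from transformers import AutoTokenizer

# RoBERTa's fast tokenizer needs add_prefix_space=True for pre-tokenized input
tokenizer = AutoTokenizer.from_pretrained("roberta-base", add_prefix_space=True)

batch = {"tokens": [["Hugging", "Face", "is", "awesome"]]}
encoded = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)

# Special tokens map to None; sub-word pieces repeat their word index,
# e.g. [None, 0, 0, 1, 2, 3, None] (the exact split depends on the tokenizer)
print(encoded.word_ids(batch_index=0))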

I found the line that causes the error:

word_ids = tokenized_inputs.word_ids(batch_index=1)

This is the error it produces (an index out of range error):

My tokenized inputs work fine when run on their own, without calling the function.

Can anyone help me fix this error? I've spent 3 hours on it with no luck. Thanks!

For a better explanation, here is the Colab file as well: https://colab.research.google.com/drive/1UJtc8TcuyCyFURKM1txYsqF1WKG_H6jZ#scrollTo=wc6AA6FMqDNq&uniqifier=1

python named-entity-recognition huggingface roberta
1 Answer

0 votes

`tokenize_and_align_labels` expects to receive multiple examples, i.e. your `examples["tokens"]` should be a list of sentences, whereas right now it is a single sentence. If you look at the documentation, their `example = wnut["train"][0]` is one sentence, which means their `examples` is a list of sentences.
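
Here is a minimal sketch of the fix, assuming the WNUT setup from that tutorial (the `wnut_17` dataset name and the `datasets` library usage are assumptions based on the docs, not on the asker's notebook):

from datasets import load_dataset

wnut = load_dataset("wnut_17")

# Wrong: a single example. examples["tokens"] is then ONE sentence, so the
# tokenizer encodes a batch of size 1, while the loop asks for
# word_ids(batch_index=i) for every tag -- hence the index out of range.
# tokenize_and_align_labels(wnut["train"][0])

# Right: let map() pass batches, so examples["tokens"] is a list of sentences.
tokenized_wnut = wnut.map(tokenize_and_align_labels, batched=True)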
