The bert-base-uncased tokenizer loses words from the sentence


Here is my code. I want to get an embedding for every word in the sentence. If a word is split into several subwords, I use the embedding of the first subword, so the number of embeddings should equal the length of the sentence. But the tokenizer loses some words.

        tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

        ..........

        words = inst.ori_words # list of str: the sentence already split into words
        orig_to_tok_index = []
        res = tokenizer.encode_plus(words, is_split_into_words=True)
        subword_idx2word_idx = res.word_ids(batch_index=0)
        prev_word_idx = -1
        for i, mapped_word_idx in enumerate(subword_idx2word_idx):
            """
            Note: by default, we use the first wordpiece/subword token to represent the word
            If you want to do something else (e.g., use last wordpiece to represent), modify them here.
            """
            if mapped_word_idx is None:## cls and sep token
                continue
            if mapped_word_idx != prev_word_idx: 
                ## because we take the first subword to represent the whole word
                orig_to_tok_index.append(i)
                prev_word_idx = mapped_word_idx
        print(words)
        print(subword_idx2word_idx)
        print(orig_to_tok_index)
        assert len(orig_to_tok_index) == len(words)

Here is a failing case:

words: ['mossbauer', 'spectroscopy', 'has', 'been', 'used', 'to', 'study', 'the', 'R3(Fe,Ti)29', ',', '(', 'r', '\ue5fb', 'Nd', ',', 'Sm', ')', 'compounds', '.']
subword_idx2word_idx: [None, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 8, 8, 8, 8, 8, 8, 8, 9, 10, 11, 13, 13, 14, 15, 16, 17, 18, None]
orig_to_tok_index: [1, 4, 5, 6, 7, 8, 9, 10, 11, 19, 20, 21, 22, 24, 25, 26, 27, 28]

This code is used for a sequence labeling task, so I need to preserve the sentence length. There are some unusual Unicode characters in my dataset.

In the output subword_idx2word_idx I cannot find word index 12: the tokenizer dropped the word '\ue5fb' from the original sentence. How can I solve this, for example by having the tokenizer automatically replace unknown words instead of dropping them?

If I use

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')

instead, it works fine. But now I am required to use 'bert-base-uncased'.
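
For reference, here is a minimal sketch (assuming the transformers library is installed) that shows the difference between the two tokenizers on the problematic character:

from transformers import BertTokenizer, RobertaTokenizer

bert_tok = BertTokenizer.from_pretrained('bert-base-uncased')
roberta_tok = RobertaTokenizer.from_pretrained('roberta-base')

# With bert-base-uncased this character yields no tokens (as observed in the
# question, its word index is missing from word_ids()), likely because the
# basic tokenizer cleans it away before WordPiece runs
print(bert_tok.tokenize('\ue5fb'))

# roberta-base uses byte-level BPE, so every character maps to at least one
# token and no word index is lost
print(roberta_tok.tokenize('\ue5fb'))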

I want the code to work correctly, and I have the following ideas:

  1. Does bert-base-uncased discard all non-ASCII characters? If so, can I replace them with another special token during preprocessing? Which token would that be: '[unk]', '[UNK]', etc.? (See the sketch after this list.)

  2. Can I make the tokenizer do this automatically?
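
Regarding the first idea, a quick sketch (again assuming the transformers library) shows which placeholder token the model actually uses; it is exposed as tokenizer.unk_token, so there is no need to hard-code '[unk]' or '[UNK]':

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# The out-of-vocabulary placeholder and its vocabulary id
print(tokenizer.unk_token)     # '[UNK]'
print(tokenizer.unk_token_id)  # 100 for bert-base-uncased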

Tags: pytorch, tokenize
1 Answer

You can use the tokenizer's unk_token attribute to handle unknown tokens automatically. You can try this code:

from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Your code
words = ['mossbauer', 'spectroscopy', 'has', 'been', 'used', 'to', 'study', 'the', 'R3(Fe,Ti)29', ',', '(', 'r', '\ue5fb', 'Nd', ',', 'Sm', ')', 'compounds', '.']

# Replace unknown tokens with the 'unk_token' attribute
def handle_unknown_tokens(tokenizer, words):
    new_words = []
    for word in words:
        if tokenizer.tokenize(word) == []:
            new_words.append(tokenizer.unk_token)
        else:
            new_words.append(word)
    return new_words

words = handle_unknown_tokens(tokenizer, words)
orig_to_tok_index = []
res = tokenizer.encode_plus(words, is_split_into_words=True)
subword_idx2word_idx = res.word_ids(batch_index=0)
prev_word_idx = -1
for i, mapped_word_idx in enumerate(subword_idx2word_idx):
    if mapped_word_idx is None:## cls and sep token
        continue
    if mapped_word_idx != prev_word_idx: 
        ## because we take the first subword to represent the whold word
        orig_to_tok_index.append(i)
        prev_word_idx = mapped_word_idx
print(words)
print(subword_idx2word_idx)
print(orig_to_tok_index)
assert len(orig_to_tok_index) == len(words)

The handle_unknown_tokens function checks whether the tokenizer returns an empty list when tokenizing a word. If it does, the word is replaced with unk_token. This handles unknown tokens automatically and keeps the sentence length unchanged.
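
As a quick sanity check on the example sentence above (a sketch that continues the code in this answer), the dropped character should now be mapped to the unk_token and every word index should appear in word_ids():

# The word that previously produced no tokens is now the unk_token,
# so no word index is missing from the mapping
assert words[12] == tokenizer.unk_token
covered = {idx for idx in subword_idx2word_idx if idx is not None}
assert covered == set(range(len(words)))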
