Google Colab: error when importing TFBertModel


I'm getting an error when importing TFBertModel in Google Colab, although everything worked fine two months ago.

from transformers import TFBertModel

I get:

    AttributeError                            Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py in _get_module(self, module_name)
   1389         try:
-> 1390             return importlib.import_module("." + module_name, self.__name__)
   1391         except Exception as e:

25 frames
AttributeError: module 'tensorflow._api.v2.compat.v2.__internal__' has no attribute 'register_load_context_function'

The above exception was the direct cause of the following exception:

RuntimeError                              Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py in _get_module(self, module_name)
   1390             return importlib.import_module("." + module_name, self.__name__)
   1391         except Exception as e:
-> 1392             raise RuntimeError(
   1393                 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
   1394                 f" traceback):\n{e}"

RuntimeError: Failed to import transformers.models.bert.modeling_tf_bert because of the following error (look up to see its traceback):
module 'tensorflow._api.v2.compat.v2.__internal__' has no attribute 'register_load_context_function'


Installed versions: Keras 3.1.1, TensorFlow 2.16.1, Transformers 4.38.2.
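For reference, a quick way to confirm these versions inside the Colab runtime (this snippet is an addition here, not from the original question, and uses only the standard library):

import importlib.metadata as md

# Print the installed version of each relevant package
for pkg in ("keras", "tensorflow", "transformers"):
    print(pkg, md.version(pkg))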
Tags: tensorflow, keras, google-colaboratory, huggingface-transformers
1 Answer

You may want to add more code or explain your environment in more detail. With a Python 3 + CPU runtime, I was able to run the following TFBertModel code in Google Colab today without any problems:

import tensorflow as tf
from transformers import BertTokenizer, TFBertModel

# Instantiate the tokenizer and the model
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained('bert-base-uncased')

# For fun, let's encode some text
input_texts = ["Hello World, I'm using Bert!", "I am also using Bert in a sentence."]
encoding = tokenizer(input_texts, return_tensors='tf', padding=True, truncation=True)

# Get the BERT representations
outputs = model(encoding['input_ids'], attention_mask=encoding['attention_mask'])
last_hidden_state = outputs.last_hidden_state

print(last_hidden_state)

Output:

Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFBertModel: ['cls.seq_relationship.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.seq_relationship.bias']
- This IS expected if you are initializing TFBertModel from a PyTorch model trained on another task or with another architecture (e.g. initializing a TFBertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFBertModel from a PyTorch model that you expect to be exactly identical (e.g. initializing a TFBertForSequenceClassification model from a BertForSequenceClassification model).
All the weights of TFBertModel were initialized from the PyTorch model.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFBertModel for predictions without further training.
tf.Tensor(
[[[ 0.05163623  0.25609216  0.05970613 ... -0.20756713  0.06182499
    0.7532056 ]
...
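As a follow-up (not part of the original answer): for bert-base-uncased, last_hidden_state has shape (batch_size, sequence_length, 768), and a common way to get one vector per input sentence is to take the representation of the [CLS] token, which BertTokenizer places at position 0:

# Sentence-level embedding: slice out the [CLS] token at position 0;
# shape goes from (batch, seq_len, 768) to (batch, 768)
cls_embeddings = last_hidden_state[:, 0, :]
print(cls_embeddings.shape)  # (2, 768) for the two example sentences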
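As for the original error itself: the missing register_load_context_function attribute is a known symptom of the transformers TF code running against Keras 3, which became the default bundled Keras in TensorFlow 2.16. A commonly suggested workaround at the time (an addition here, not part of the original answer) was to install the backwards-compatible tf-keras package and point TensorFlow at it before importing transformers:

# First install the legacy Keras 2 compatibility package in a Colab cell:
#   !pip install tf-keras
import os
os.environ["TF_USE_LEGACY_KERAS"] = "1"  # must be set before transformers is imported

from transformers import TFBertModel  # the AttributeError should no longer occur

Alternatively, pinning TensorFlow back to 2.15, which still ships Keras 2, was another commonly reported fix.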