ValueError: Exception encountered when calling PositionalEmbedding.call()


I am running neural machine translation with a Transformer in Keras, following https://www.tensorflow.org/text/tutorials/transformer

pt is a batch of tokenized vectors with shape (64, 79). For the class below, PositionalEmbedding.call() throws an error.

class PositionalEmbedding(tf.keras.layers.Layer):
  def __init__(self, vocab_size, d_model):
    super().__init__()
    self.d_model = d_model
    self.embedding = tf.keras.layers.Embedding(vocab_size, d_model, mask_zero=True) 
    self.pos_encoding = positional_encoding(length=2048, depth=d_model)

  def compute_mask(self, *args, **kwargs):
    return self.embedding.compute_mask(*args, **kwargs)

  def call(self, x):
    length = tf.shape(x)[1]
    x = self.embedding(x)
    # This factor sets the relative scale of the embedding and positional_encoding.
    x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
    x = x + self.pos_encoding[tf.newaxis, :length, :]
    return x
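For context, the `positional_encoding` helper referenced in `__init__` comes from the tutorial and computes the standard sinusoidal encodings. A minimal pure-Python sketch of that formula (assuming an even `depth`, and using the tutorial's layout of sines in the first half of each row and cosines in the second half; the real helper returns a TensorFlow tensor instead of lists):

```python
import math

def positional_encoding(length, depth):
    """Sinusoidal positional encoding as `length` rows of `depth` floats.

    Assumes `depth` is even: the first depth/2 columns of each row are
    sines, the last depth/2 are cosines, matching the tutorial's layout.
    """
    half = depth // 2
    # One frequency per sine/cosine pair, decaying geometrically with index.
    rates = [1.0 / (10000 ** (i / half)) for i in range(half)]
    encoding = []
    for pos in range(length):
        sines = [math.sin(pos * r) for r in rates]
        cosines = [math.cos(pos * r) for r in rates]
        encoding.append(sines + cosines)
    return encoding
```

Row 0 is all zeros in the sine half and all ones in the cosine half, since sin(0) = 0 and cos(0) = 1.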

This call raises the error:

    embed_pt = PositionalEmbedding(vocab_size=tokenizers.pt.get_vocab_size(), d_model=512)
    pt_emb = embed_pt(pt)

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-167-44987cede320> in <cell line: 4>()
      2 embed_en = PositionalEmbedding(vocab_size=tokenizers.en.get_vocab_size(), d_model=512)
      3 
----> 4 pt_emb = embed_pt(pt)
      5 en_emb = embed_en(en)

1 frames
<ipython-input-166-e9ab4e283481> in call(self, x)
     11   def call(self, x):
     12     length = tf.shape(x)[1]
---> 13     x = self.embedding(x)
     14     # This factor sets the relative scale of the embedding and positonal_encoding.
     15     x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))

ValueError: Exception encountered when calling PositionalEmbedding.call().

Invalid dtype: <property object at 0x7a6808eb53f0>

Arguments received by PositionalEmbedding.call():
  • x=tf.Tensor(shape=(64, 70), dtype=int64)

I tried changing the dtype to int32, but that did not work.

Tags: python, types, embedding, transformer-model
1 Answer

It looks like the d_model parameter in your PositionalEmbedding layer is hitting a type problem. The error suggests that d_model is being treated as a property rather than an integer. Check that d_model is initialized as an integer, not a property or a function. You can debug this by adding a print statement, e.g. print("d_model type:", type(self.d_model)), in the call method before d_model is used. This will confirm whether d_model is an integer and rule it out as the cause.

import tensorflow as tf
   
class PositionalEmbedding(tf.keras.layers.Layer):
    def __init__(self, vocab_size, d_model):
        super(PositionalEmbedding, self).__init__()
        self.d_model = d_model  # Ensure this is always an integer
        self.embedding = tf.keras.layers.Embedding(vocab_size, d_model, mask_zero=True)
        self.pos_encoding = positional_encoding(length=2048, depth=d_model)

    def compute_mask(self, *args, **kwargs):
        return self.embedding.compute_mask(*args, **kwargs)

    def call(self, x):
        if not isinstance(self.d_model, int):
            raise ValueError(f"Expected d_model to be an integer, got {type(self.d_model)} instead")
        
        length = tf.shape(x)[1]
        x = self.embedding(x)
        
        # Scale the embeddings by the square root of d_model
        x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
        
        # Add positional encoding
        x += self.pos_encoding[:, :length, :]
        return x

def positional_encoding(length, depth):
    """Placeholder that generates positional encodings.

    Replace this with your actual positional-encoding function.
    """
    return tf.random.normal([1, length, depth])

# Example usage:
try:
    layer = PositionalEmbedding(vocab_size=1000, d_model=512)
    dummy_input = tf.random.uniform((1, 64), dtype=tf.int32, maxval=1000)  # Example input
    output = layer(dummy_input)
    print(output)
except ValueError as e:
    print(e)