ValueError: Dimension must be 2 but is 3 for '{{node lambda/transpose}}'

Problem description

I am trying to include a multi-head attention communication layer in a DQN model. Here is the code:

from tensorflow.keras.layers import Input, Dense, Lambda, Softmax, Concatenate, concatenate
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import backend as K

def _create_model(self, lr):
    num_heads = 2
    input1 = Input(shape=(self.input_dims,))   # observation input
    input2 = Input(shape=(msg_dim,))           # message input

    dense_layer1 = Dense(64, activation="relu")(input2)

    # Multi-Head Attention
    heads = []
    for i in range(num_heads):
        query = Dense(32)(input2)   # shape (batch, 32)
        key = Dense(32)(input2)     # shape (batch, 32)
        value = Dense(32)(input2)   # shape (batch, 32)
        # the error below is raised here: the perm (0, 2, 1) expects a
        # rank-3 tensor, but key has shape (batch, 32)
        attention_logits = Lambda(lambda x: K.batch_dot(x[0], K.permute_dimensions(x[1], (0, 2, 1))))([query, key])
        attention_output = Softmax(axis=2)(attention_logits)
        head = Lambda(lambda x: K.batch_dot(x[0], x[1]))([attention_output, value])
        heads.append(head)

    multi_head = Concatenate()(heads)
    concatenated_inputs = concatenate([input1, multi_head])
    dense_layer3 = Dense(32, activation="relu")(concatenated_inputs)
    output_layer = Dense(self.n_actions)(dense_layer3)
    model = Model(inputs=concatenated_inputs, outputs=output_layer)
    model.compile(loss="mse", optimizer=Adam(lr=lr))
    return model
ValueError: Dimension must be 2 but is 3 for '{{node lambda/transpose}} = Transpose[T=DT_FLOAT, Tperm=DT_INT32](Placeholder_1, lambda/transpose/perm)' with input shapes: [?,32], [3].

The error occurs in the Lambda layer, but I cannot understand its exact cause. Any help would be greatly appreciated.

The message should be fed into the model together with the observation, so the model should have an attention layer.
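For context, query, key and value above are each Dense(32) applied to a rank-2 tensor of shape (batch, msg_dim), so they come out as (batch, 32); K.permute_dimensions(x, (0, 2, 1)) (which compiles to the lambda/transpose node in the traceback) needs a rank-3 tensor, hence "Dimension must be 2 but is 3". Below is a minimal sketch of one possible fix, not a verified answer: it treats the message as a length-1 sequence so every tensor in the attention block is rank-3, and it passes the actual Input layers to Model. The names msg_dim, self.input_dims and self.n_actions are taken from the question; everything else is an assumption.

from tensorflow.keras.layers import Input, Dense, Lambda, Softmax, Concatenate, Flatten
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import backend as K

def _create_model(self, lr):
    num_heads = 2
    input1 = Input(shape=(self.input_dims,))            # observation
    input2 = Input(shape=(msg_dim,))                     # message

    # Give the message a length-1 sequence axis: (batch, 1, msg_dim)
    msg_seq = Lambda(lambda t: K.expand_dims(t, axis=1))(input2)

    heads = []
    for _ in range(num_heads):
        query = Dense(32)(msg_seq)                       # (batch, 1, 32)
        key = Dense(32)(msg_seq)                         # (batch, 1, 32)
        value = Dense(32)(msg_seq)                       # (batch, 1, 32)
        # (batch, 1, 32) x (batch, 32, 1) -> (batch, 1, 1)
        logits = Lambda(lambda x: K.batch_dot(
            x[0], K.permute_dimensions(x[1], (0, 2, 1))))([query, key])
        weights = Softmax(axis=-1)(logits)
        # (batch, 1, 1) x (batch, 1, 32) -> (batch, 1, 32)
        head = Lambda(lambda x: K.batch_dot(x[0], x[1]))([weights, value])
        heads.append(Flatten()(head))                    # back to (batch, 32)

    multi_head = Concatenate()(heads)                    # (batch, 32 * num_heads)
    merged = Concatenate()([input1, multi_head])
    dense_layer3 = Dense(32, activation="relu")(merged)
    output_layer = Dense(self.n_actions)(dense_layer3)

    # Model inputs must be the Input layers, not an intermediate tensor
    model = Model(inputs=[input1, input2], outputs=output_layer)
    model.compile(loss="mse", optimizer=Adam(learning_rate=lr))
    return model

Note that with a single message vector the sequence axis has length 1, so the softmax weight is trivially 1 and each head reduces to a linear projection of the message; the attention only becomes meaningful if several messages (one per agent, say) are stacked along that axis, or if a built-in layer such as tf.keras.layers.MultiHeadAttention is applied to such a stacked tensor.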

reinforcement-learning attention-model self-attention multi-agent-reinforcement-learning multihead-attention