I have 3 parallel MLPs and I want to obtain the following in Keras:

Out = W1 * Out_MLP1 + W2 * Out_MLP2 + W3 * Out_MLP3

where Out_MLPi is the output layer of each MLP, of shape (10,), and W1, W2 and W3 are three trainable scalar weights (floats) that satisfy:

W1 + W2 + W3 = 1

What is the best way to implement this with the Keras functional API? And what if we had N parallel layers?
You need to apply a softmax over a set of learnable weights so that they sum to 1.
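As a minimal illustration (not part of the original answer), tf.nn.softmax turns arbitrary raw values into positive weights that sum to one:

import tensorflow as tf

raw = tf.constant([0.2, -1.3, 0.8])
w = tf.nn.softmax(raw)      # positive weights, approx. [0.33, 0.07, 0.60]
print(w.numpy().sum())      # 1.0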
We initialize the learnable weights in a custom layer. This layer receives the outputs of the MLPs and combines them following the logic W1 * Out_MLP1 + W2 * Out_MLP2 + W3 * Out_MLP3. The output is a tensor of shape (10,).
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Layer, Input, Dense
from tensorflow.keras.models import Model

class W_ADD(Layer):

    def __init__(self, n_output):
        super(W_ADD, self).__init__()
        # one raw (unnormalized) weight per parallel input: (1, 1, n_inputs)
        self.W = tf.Variable(
            initial_value=tf.random.uniform(shape=[1, 1, n_output], minval=0, maxval=1),
            trainable=True)

    def call(self, inputs):
        # inputs is a list of tensors of shape [(n_batch, n_feat), ..., (n_batch, n_feat)]
        # expand the last dim of each input: [(n_batch, n_feat, 1), ..., (n_batch, n_feat, 1)]
        inputs = [tf.expand_dims(i, -1) for i in inputs]
        inputs = tf.concat(inputs, axis=-1)        # (n_batch, n_feat, n_inputs)
        weights = tf.nn.softmax(self.W, axis=-1)   # (1, 1, n_inputs)
        # the weights sum to one on the last dim
        return tf.reduce_sum(weights * inputs, axis=-1)  # (n_batch, n_feat)
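A quick sanity check of the layer in eager mode (the batch size of 4 here is arbitrary): calling it on a list of (n_batch, n_feat) tensors returns a single (n_batch, n_feat) tensor:

layer = W_ADD(n_output=3)
dummy = [tf.random.normal((4, 10)) for _ in range(3)]
print(layer(dummy).shape)  # (4, 10)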
In this dummy example, I create a network with 3 parallel MLPs:
inp1 = Input((100,))
inp2 = Input((100,))
inp3 = Input((100,))

x1 = Dense(32, activation='relu')(inp1)
x2 = Dense(32, activation='relu')(inp2)
x3 = Dense(32, activation='relu')(inp3)
x1 = Dense(10, activation='linear')(x1)
x2 = Dense(10, activation='linear')(x2)
x3 = Dense(10, activation='linear')(x3)

mlp_outputs = [x1, x2, x3]
out = W_ADD(n_output=len(mlp_outputs))(mlp_outputs)

m = Model([inp1, inp2, inp3], out)
m.compile('adam', 'mse')

X1 = np.random.uniform(0, 1, (1000, 100))
X2 = np.random.uniform(0, 1, (1000, 100))
X3 = np.random.uniform(0, 1, (1000, 100))
y = np.random.uniform(0, 1, (1000, 10))

m.fit([X1, X2, X3], y, epochs=10)
As you can see, this generalizes easily to the case of N parallel layers, as sketched below.
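A minimal sketch of the N-branch version, assuming the same W_ADD class (N and the layer sizes below are illustrative, not from the original answer):

N = 5  # illustrative number of parallel MLPs
inputs = [Input((100,)) for _ in range(N)]
branches = [Dense(10, activation='linear')(Dense(32, activation='relu')(inp))
            for inp in inputs]

out = W_ADD(n_output=N)(branches)
m = Model(inputs, out)
m.compile('adam', 'mse')

Xs = [np.random.uniform(0, 1, (1000, 100)) for _ in range(N)]
y = np.random.uniform(0, 1, (1000, 10))
m.fit(Xs, y, epochs=10)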