TensorFlow: changing the activation used during backpropagation

Question

Is there a way in TensorFlow to change a hidden node's activation to a different value during backpropagation? That is, suppose a node in some layer outputs the value 'a1' during the forward pass. Then, during backpropagation, when the gradient update reaches this node, I want it to use a different value as the activation (say 'a2'), so that the entire backward pass proceeds as if 'a2' had been output during the forward pass.

I know we can create/modify custom gradients, but here I need to replace the value of the hidden node's activation itself during backprop.
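For reference, the custom-gradient route mentioned above looks roughly like the sketch below (assuming TF 1.x; swap_activation is a made-up name). Note that @tf.custom_gradient only replaces the local derivative at the node; downstream layers still receive the forward value, which is why it does not by itself do what is asked here:

import tensorflow as tf

# Sketch: the forward pass outputs relu(x), but the local gradient
# is computed as if the node had output sigmoid(x).
@tf.custom_gradient
def swap_activation(x):
    a1 = tf.nn.relu(x)               # value 'a1' used in the forward pass
    def grad(dy):
        a2 = tf.sigmoid(x)           # value 'a2' backprop should see
        return dy * a2 * (1.0 - a2)  # derivative of sigmoid, not relu
    return a1, grad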

tensorflow backpropagation
1 Answer

This can be achieved in several ways. The simplest is to use tf.cond() with a boolean placeholder that you feed different values (True/False) during the forward and backward passes. The following example uses tf.nn.relu() during the forward pass and tf.nn.sigmoid() during backpropagation.

import tensorflow as tf
import numpy as np

x_train = np.array([[-1.551, -1.469], [1.022, 1.664]], dtype=np.float32)
y_train = np.array([1, 0], dtype=int)

x = tf.placeholder(tf.float32, shape=[None, 2])
y = tf.placeholder(tf.int32, shape=[None])

with tf.name_scope('network'):
    fc1 = tf.layers.dense(x, units=2)

    # `isforward` is a placeholder that selects which activation
    # to use: if `True`, `tf.nn.relu` is applied;
    # otherwise, `tf.nn.sigmoid` is applied.
    isforward = tf.placeholder_with_default(True, shape=())

    activation = tf.cond(isforward,
                         true_fn=lambda: tf.nn.relu(fc1), # forward pass
                         false_fn=lambda: tf.nn.sigmoid(fc1)) # backprop
    logits = tf.layers.dense(activation, units=2)

with tf.name_scope('loss'):
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
    loss_fn = tf.reduce_mean(xentropy)

with tf.name_scope('optimizer'):
    # GradientDescentOptimizer takes a learning rate, not the loss;
    # 0.01 here is an arbitrary choice.
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
    train_op = optimizer.minimize(loss_fn)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    loss_val = sess.run(loss_fn, feed_dict={x:x_train,
                                            y:y_train,
                                            isforward:True}) # forward
    _ = sess.run(train_op, feed_dict={x:x_train,
                                      y:y_train,
                                      isforward:False}) # backward
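Another of the "several ways" is a stop_gradient construction that avoids the placeholder entirely: the forward value is relu, but the gradient flows through sigmoid. A minimal sketch (the placeholder below just stands in for the dense-layer output fc1); note that, unlike the tf.cond() version, only the local gradient changes, since downstream layers still see the relu value:

import tensorflow as tf

fc1 = tf.placeholder(tf.float32, shape=[None, 2])  # stand-in for the layer output

a_forward = tf.nn.relu(fc1)      # value produced in the forward pass
a_backward = tf.nn.sigmoid(fc1)  # value backprop differentiates through

# Forward: a_backward + (a_forward - a_backward) == a_forward.
# Backward: the stop_gradient term has zero gradient, so the
# gradient is that of a_backward (i.e., sigmoid's gradient).
activation = a_backward + tf.stop_gradient(a_forward - a_backward)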