Reinforcement learning cost function


Newbie question: I am writing an OpenAI Gym Pong player with TensorFlow, and so far it can create the network from a random initialization, so that it randomly returns a move to push the player's paddle up or down.

After an epoch ends (the computer has won 21 games), I have collected a set of observations, moves and scores. The final observation of a game receives a score, and each previous observation can then be scored via the Bellman equation.

Now to my question, the part I don't understand yet: how do I compute the cost function so that it is propagated as a starting gradient for backpropagation? I fully get this with supervised learning, but here we have no labels to score against.

How would I start optimizing the network?

Maybe a pointer to existing code or to some literature would help.

This is where I compute the rewards:

# (assumes `import numpy as np` at module level)
def compute_observation_rewards(self, gamma, up_score_probabilities):
        """
        Applies the Bellman equation to determine a discounted reward
        for each stored observation
        :param gamma: Discount factor
        :param up_score_probabilities: Probabilities for up score
        :returns: List of discounted scores, one per stored move
        """

        score_sum = 0
        discounted_rewards = []
        # walk backwards through all observations
        for p in reversed(self._states_score_action):
            score = p[1]

            # a non-zero score marks the end of a rally; reset the running
            # sum so rewards do not leak across game boundaries
            if score != 0:
                score_sum = 0

            score_sum = score_sum * gamma + score
            discounted_rewards.append(score_sum)

        # the list was built back-to-front; restore chronological order
        # so each reward lines up with its observation
        discounted_rewards.reverse()

        # normalize scores to zero mean and unit variance
        discounted_rewards = np.array(discounted_rewards)
        discounted_rewards -= np.mean(discounted_rewards)
        discounted_rewards /= np.std(discounted_rewards)

        return discounted_rewards
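
As a sanity check of the discounting, here is a minimal standalone sketch of the same logic (the function name and the toy reward sequence are made up for this example). With gamma = 0.99, the per-step scores [0, 0, 1, 0, -1] should discount, before normalization, to [0.9801, 0.99, 1.0, -0.99, -1.0]:

import numpy as np

def discount(scores, gamma=0.99):
    """Illustrative re-implementation of the discounting above."""
    running = 0.0
    out = []
    for s in reversed(scores):
        if s != 0:           # non-zero score = end of a rally, reset
            running = 0.0
        running = running * gamma + s
        out.append(running)
    out.reverse()            # restore chronological order
    return np.array(out)

print(discount([0, 0, 1, 0, -1]))  # -> [ 0.9801  0.99  1.  -0.99  -1. ]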

And below is my network:

with tf.variable_scope('NN_Model', reuse=tf.AUTO_REUSE):

        layer1 = tf.layers.conv2d(inputs,
                                filters=3,
                                kernel_size=3,
                                strides=(1, 1),
                                padding='valid',
                                data_format='channels_last',
                                dilation_rate=(1, 1),
                                activation=tf.nn.relu,
                                use_bias=True,
                                bias_initializer=tf.zeros_initializer(),
                                trainable=True,
                                name='layer1'
                            )
        # (N - F + 1) x (N - F + 1)
        # => layer1 should be
        # (80 - 3 + 1) x (80 - 3 + 1) = 78 x 78

        pool1 = tf.layers.max_pooling2d(layer1,
                                        pool_size=5,
                                        strides=2,
                                        name='pool1')

        # int((N - F) / s) + 1
        # int((78 - 5) / 2) + 1 = 36 + 1 = 37

        layer2 = tf.layers.conv2d(pool1,
                                filters=5,
                                kernel_size=5,
                                strides=(2, 2),
                                padding='valid',
                                data_format='channels_last',
                                dilation_rate=(1, 1),
                                activation=tf.nn.relu,
                                use_bias=True,
                                kernel_initializer=tf.random_normal_initializer(),
                                bias_initializer=tf.zeros_initializer(),
                                trainable=True,
                                name='layer2'
                            )

        # int((N + 2 x padding - F) / stride) + 1
        # => layer2 should be
        # int((37 + 0 - 5) / 2) + 1
        # = 16 + 1 = 17

        pool2 = tf.layers.max_pooling2d(layer2,
                                        pool_size=3,
                                        strides=2,
                                        name='pool2')

        # int((N - F) / s) + 1
        # int((17 - 3) / 2) + 1 = 7 + 1 = 8

        flat1 = tf.layers.flatten(pool2, 'flat1')

        # N x (8 * 8 * 5) = N x 320

        full1 = tf.contrib.layers.fully_connected(flat1,
                                            num_outputs=1,
                                            activation_fn=tf.nn.sigmoid,
                                            weights_initializer=tf.contrib.layers.xavier_initializer(),
                                            biases_initializer=tf.zeros_initializer(),
                                            trainable=True,
                                            scope=None
                                        )
reinforcement-learning tensorflow backpropagation gradient-descent
1 Answer

The algorithm you are looking for is called REINFORCE. I would suggest reading chapter 13 of Sutton and Barto's RL book.

Here is the pseudocode from the book. [Image: the REINFORCE algorithm box from the book]
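
A paraphrase of that algorithm box (from §13.3 of the 2nd edition; reconstructed here, not a verbatim copy of the image):

REINFORCE: Monte-Carlo Policy-Gradient Control (episodic)

    Input: a differentiable policy parameterization pi(a | s, theta)
    Algorithm parameter: step size alpha > 0
    Initialize policy parameter theta

    Loop forever (for each episode):
        Generate an episode S_0, A_0, R_1, ..., S_{T-1}, A_{T-1}, R_T, following pi(. | ., theta)
        Loop for each step of the episode, t = 0, 1, ..., T - 1:
            G <- sum over k = t+1 .. T of gamma^(k-t-1) * R_k
            theta <- theta + alpha * gamma^t * G * grad_theta ln pi(A_t | S_t, theta)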

Here, θ is the set of weights of the neural network. If some of the remaining notation is unfamiliar, I suggest reading chapter 3 of the above book, which covers the basic problem formulation.
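
To connect that to your network (a single sigmoid output interpreted as the probability of moving up), the usual trick is a cross-entropy loss weighted by the discounted rewards, so that minimizing it follows the REINFORCE gradient. A minimal TF1 graph-mode sketch; the placeholder names here are illustrative, and `full1` is the sigmoid output from your code:

# minimal sketch, assuming TF1 graph mode; placeholder names are
# illustrative, `full1` is the question's sigmoid "up" probability
sampled_actions = tf.placeholder(tf.float32, [None, 1])     # 1 = moved up, 0 = moved down
discounted_rewards = tf.placeholder(tf.float32, [None, 1])  # from compute_observation_rewards

# log-probability of the actions that were actually taken
log_prob = sampled_actions * tf.log(full1 + 1e-8) + \
           (1 - sampled_actions) * tf.log(1 - full1 + 1e-8)

# weighting each sample by its discounted reward makes the gradient of
# this loss the REINFORCE policy gradient
loss = -tf.reduce_mean(discounted_rewards * log_prob)
train_op = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)

Feeding the normalized rewards from compute_observation_rewards in as the weights gives you exactly the "starting gradient" you asked about: actions followed by positive reward become more likely, actions followed by negative reward less likely.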
