DQN algorithm does not converge on CartPole-v0

Problem description

A brief description of my model

I am trying to write my own DQN algorithm in Python with TensorFlow, following the paper (Mnih et al., 2015). The train_DQN function defines the training procedure, and DQN_CartPole defines the function approximator (a simple 3-layer neural network). For the loss, either Huber loss or MSE is computed, followed by gradient clipping (between -1 and 1). For the target network I implemented a soft update, which partially copies the weights of the main network, instead of a hard update.
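(For concreteness, a minimal NumPy sketch of the soft-update rule with made-up weights; tau corresponds to the soft_update_tau parameter further down:)

import numpy as np

tau = 1e-2                          # soft_update_tau
main_w = np.array([1.0, 2.0])       # made-up main-network weights
target_w = np.array([0.0, 0.0])     # made-up target-network weights

# soft update: move the target network a small step towards the main network
target_w = tau * main_w + (1.0 - tau) * target_w
print(target_w)                     # [0.01 0.02]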

I tried it on the CartPole environment (OpenAI Gym), but the reward does not improve the way it does with other people's algorithms, for example keras-rl. Any help would be appreciated.

[figure: reward over timestep]

Could you take a look at the source code, if possible?

class Parameters:
    def __init__(self, mode=None):
        assert mode is not None
        print("Loading Params for {} Environment".format(mode))
        if mode == "Atari":
            self.state_reshape = (1, 84, 84, 1)
            self.num_frames = 1000000
            self.memory_size = 10000
            self.learning_start = 10000
            self.sync_freq = 1000
            self.batch_size = 32
            self.gamma = 0.99
            self.update_hard_or_soft = "soft"
            self.soft_update_tau = 1e-2
            self.epsilon_start = 1.0
            self.epsilon_end = 0.01
            self.decay_steps = 1000
            self.prioritized_replay_alpha = 0.6
            self.prioritized_replay_beta_start = 0.4
            self.prioritized_replay_beta_end = 1.0
            self.prioritized_replay_noise = 1e-6
        elif mode == "CartPole":
            self.state_reshape = (1, 4)
            self.num_frames = 10000
            self.memory_size = 20000
            self.learning_start = 100
            self.sync_freq = 100
            self.batch_size = 32
            self.gamma = 0.99
            self.update_hard_or_soft = "soft"
            self.soft_update_tau = 1e-2
            self.epsilon_start = 1.0
            self.epsilon_end = 0.01
            self.decay_steps = 500
            self.prioritized_replay_alpha = 0.6
            self.prioritized_replay_beta_start = 0.4
            self.prioritized_replay_beta_end = 1.0
            self.prioritized_replay_noise = 1e-6


class _DQN:
    """
    Boilerplate for DQN Agent
    """

    def __init__(self):
        """
        define the deep learning model here!

        """
        pass

    def predict(self, sess, state):
        """
        predict q-values given a state

        :param sess:
        :param state:
        :return:
        """
        return sess.run(self.pred, feed_dict={self.state: state})

    def update(self, sess, state, action, Y):
        feed_dict = {self.state: state, self.action: action, self.Y: Y}
        _, loss = sess.run([self.train_op, self.loss], feed_dict=feed_dict)
        # print(action, Y, sess.run(self.idx_flattened, feed_dict=feed_dict))
        return loss


class DQN_CartPole(_DQN):
    """
    DQN Agent for CartPole game
    """

    def __init__(self, scope, env, loss_fn ="MSE"):
        self.scope = scope
        self.num_action = env.action_space.n
        with tf.variable_scope(scope):
            self.state = tf.placeholder(shape=[None, 4], dtype=tf.float32, name="X")
            self.Y = tf.placeholder(shape=[None], dtype=tf.float32, name="Y")
            self.action = tf.placeholder(shape=[None], dtype=tf.int32, name="action")

            fc1 = tf.keras.layers.Dense(16, activation=tf.nn.relu)(self.state)
            fc2 = tf.keras.layers.Dense(16, activation=tf.nn.relu)(fc1)
            fc3 = tf.keras.layers.Dense(16, activation=tf.nn.relu)(fc2)
            self.pred = tf.keras.layers.Dense(self.num_action, activation=tf.nn.relu)(fc3)

            # indices of the executed actions
            self.idx_flattened = tf.range(0, tf.shape(self.pred)[0]) * tf.shape(self.pred)[1] + self.action

            # passing [-1] to tf.reshape means flatten the array
            # using tf.gather, associate Q-values with the executed actions
            self.action_probs = tf.gather(tf.reshape(self.pred, [-1]), self.idx_flattened)

            if loss_fn == "huber_loss":
                # use huber loss
                self.losses = tf.subtract(self.Y, self.action_probs)
                self.loss = huber_loss(self.losses)
            elif loss_fn == "MSE":
                # use MSE
                self.losses = tf.squared_difference(self.Y, self.action_probs)
                self.loss = tf.reduce_mean(self.losses)
            else:
                assert False

            # you can choose whatever you want for the optimiser
            # self.optimizer = tf.train.RMSPropOptimizer(0.00025, 0.99, 0.0, 1e-6)
            self.optimizer = tf.train.AdamOptimizer()

            # to apply Gradient Clipping, we have to directly operate on the optimiser
            # check this: https://www.tensorflow.org/api_docs/python/tf/train/Optimizer#processing_gradients_before_applying_them
            self.grads_and_vars = self.optimizer.compute_gradients(self.loss)
            self.clipped_grads_and_vars = [(ClipIfNotNone(grad, -1., 1.), var) for grad, var in self.grads_and_vars]
            self.train_op = self.optimizer.apply_gradients(self.clipped_grads_and_vars)



def train_DQN(main_model, target_model, env, replay_buffer, policy, params):
    """
    Train DQN agent which defined above

    :param main_model:
    :param target_model:
    :param env:
    :param params:
    :return:
    """

    # log purpose
    losses, all_rewards, cnt_action = [], [], []
    episode_reward, index_episode = 0, 0

    with tf.Session() as sess:
        # initialise all variables used in the model
        sess.run(tf.global_variables_initializer())
        state = env.reset()
        start = time.time()
        for frame_idx in range(1, params.num_frames + 1):
            action = policy.select_action(sess, target_model, state.reshape(params.state_reshape))
            cnt_action.append(action)
            next_state, reward, done, _ = env.step(action)
            replay_buffer.add(state, action, reward, next_state, done)

            state = next_state
            episode_reward += reward

            if done:
                index_episode += 1
                state = env.reset()
                all_rewards.append(episode_reward)

                if frame_idx > params.learning_start and len(replay_buffer) > params.batch_size:
                    states, actions, rewards, next_states, dones = replay_buffer.sample(params.batch_size)
                    next_Q = target_model.predict(sess, next_states)
                    Y = rewards + params.gamma * np.max(next_Q, axis=1) * np.logical_not(dones)
                    loss = main_model.update(sess, states, actions, Y)

                    # Logging and refreshing log purpose values
                    losses.append(np.mean(loss))

                    logging(frame_idx, params.num_frames, index_episode, time.time()-start, episode_reward, np.mean(loss), cnt_action)

                episode_reward = 0
                cnt_action = []
                start = time.time()

            if frame_idx > params.learning_start and frame_idx % params.sync_freq == 0:
                # soft update means we partially add the original weights of target model instead of completely
                # sharing the weights among main and target models
                if params.update_hard_or_soft == "hard":
                    sync_main_target(sess, main_model, target_model)
                elif params.update_hard_or_soft == "soft":
                    soft_target_model_update(sess, main_model, target_model, tau=params.soft_update_tau)


    return all_rewards, losses
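The helpers referenced in the listing (huber_loss, ClipIfNotNone, sync_main_target, soft_target_model_update) are not shown in the excerpt. Below is a minimal sketch of what they might look like under TF 1.x, assuming variables of the two scopes are paired by name via a small _paired_vars helper introduced here only for brevity:

import tensorflow as tf  # same TF 1.x import assumed by the listing above


def huber_loss(x, delta=1.0):
    # element-wise Huber loss: quadratic for small errors, linear for large ones
    # (callers reduce it, e.g. with tf.reduce_mean)
    return tf.where(tf.abs(x) < delta,
                    0.5 * tf.square(x),
                    delta * (tf.abs(x) - 0.5 * delta))


def ClipIfNotNone(grad, _min, _max):
    # tf.clip_by_value cannot handle None gradients, so pass them through untouched
    if grad is None:
        return grad
    return tf.clip_by_value(grad, _min, _max)


def _paired_vars(main, target):
    # pair the trainable variables of the two scopes by name order
    main_vars = sorted(tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope=main.scope),
                       key=lambda v: v.name)
    target_vars = sorted(tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope=target.scope),
                         key=lambda v: v.name)
    return zip(main_vars, target_vars)


def sync_main_target(sess, main, target):
    # hard update: copy every main-network weight into the target network
    sess.run([t.assign(m) for m, t in _paired_vars(main, target)])


def soft_target_model_update(sess, main, target, tau=1e-2):
    # soft update: target <- tau * main + (1 - tau) * target
    # (the assign ops are rebuilt on every call for simplicity; in practice build them once)
    sess.run([t.assign(tau * m + (1.0 - tau) * t) for m, t in _paired_vars(main, target)])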

Edits

  • dones -> np.logical_not(dones)
  • np.argmax -> np.max
  • separated MSE from huber_loss
python tensorflow reinforcement-learning
1 Answer

At a quick glance, the dones variable appears to be a binary vector where 1 denotes done and 0 denotes not done.

You then use dones here:

Y = rewards + params.gamma * np.argmax(next_Q, axis=1) * dones

So for all terminating transitions you add the expected cumulative reward of following the policy for the rest of the episode (which is zero), and for all non-terminating transitions you do not add the expected cumulative reward at all.

I think you mean to do this the other way around: swap dones for np.logical_not(dones) in the line of code above.
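A tiny NumPy illustration of the intended behaviour (all numbers made up): with np.logical_not(dones), the terminal transition keeps only its immediate reward, while the non-terminal one adds the discounted bootstrap term.

import numpy as np

rewards    = np.array([1.0, 1.0])     # made-up rewards
max_next_Q = np.array([5.0, 5.0])     # made-up max_a Q(s', a)
dones      = np.array([True, False])  # first transition is terminal
gamma      = 0.99

Y = rewards + gamma * max_next_Q * np.logical_not(dones)
print(Y)   # [1.   5.95] -> the terminal target keeps only the immediate reward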

Also, now that I look at it, there seems to be another major problem with this line: np.argmax(next_Q, axis=1) returns the indices of the maximum values in next_Q, not the maximum values themselves. You need np.max(next_Q, axis=1) (IIRC) to get the maximum expected reward over the next state's actions.
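For example, with a made-up batch of Q-values:

import numpy as np

next_Q = np.array([[0.1, 2.0],
                   [3.0, 0.5]])      # made-up Q-values, shape (batch_size, num_actions)

print(np.argmax(next_Q, axis=1))     # [1 0]   -> indices of the best actions
print(np.max(next_Q, axis=1))        # [2. 3.] -> the values you actually want to bootstrap with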

Edit: The definition of the loss function is also rather odd. You are mixing Huber loss with mean squared error. If you want to use either huber_loss or MSE, just compute it on the difference between the expected and predicted values. You appear to be doing both at once, which is certainly not a commonly used loss function. For example, your model's loss when using Huber loss should simply be:

self.loss = tf.reduce_mean(huber_loss(abs(self.Y - self.action_probs)))
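As a side note, assuming TF 1.x as in the question, the built-in tf.losses.huber_loss (which already reduces to a scalar) could be used instead of a custom helper:

# inside DQN_CartPole.__init__, replacing the custom huber_loss branch
self.loss = tf.losses.huber_loss(labels=self.Y, predictions=self.action_probs, delta=1.0)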