Loading a pre-trained DQN model results in lower performance


Background: I have been using a Deep Q-Network (DQN) model for a reinforcement learning project. After training the model, I save it and load it later for further evaluation. However, I noticed that the loaded model performs significantly worse than the original trained model did.

Problem: The main issue appears when I load the trained model: performance drops sharply, and the model behaves as if it has learned nothing from the training process. This is puzzling, because the model performed well before it was saved.

import random
from collections import deque

import gym
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam


class DQLAgent():
    def __init__(self, env, model_path=None):
        self.env = env
        self.state_size = env.observation_space.shape[0]
        self.action_size = env.action_space.n
        self.gamma = 0.95
        self.learning_rate = 0.001
        self.epsilon = 1
        self.epsilon_decay = 0.995
        self.epsilon_min = 0.01
        self.memory = deque(maxlen=2000)
        
        if model_path:
            self.model = load_model(model_path)  # Load model if path provided
        else:
            self.model = self.build_model()  # Build new model otherwise

    def build_model(self):
        model = Sequential()
        model.add(Dense(48, input_dim=self.state_size, activation='tanh'))
        model.add(Dense(self.action_size, activation='linear'))
        model.compile(loss='mean_squared_error', optimizer=Adam(learning_rate=self.learning_rate))
        return model

    def remember(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def act(self, state):
        if np.random.rand() <= self.epsilon:
            return self.env.action_space.sample()
        else:
            act_values = self.model.predict(state, verbose=0)
            return np.argmax(act_values[0])

    def replay(self, batch_size):
        if len(self.memory) < batch_size:
            return
        minibatch = random.sample(self.memory, batch_size)
        for state, action, reward, next_state, done in minibatch:
            # Bellman target: reward plus discounted best Q-value of the next state
            target = reward if done else reward + self.gamma * np.amax(self.model.predict(next_state, verbose=0)[0])
            train_target = self.model.predict(state, verbose=0)
            train_target[0][action] = target
            self.model.fit(state, train_target, verbose=0)

    def adaptiveEGreedy(self):
        if self.epsilon > self.epsilon_min:
            self.epsilon *= self.epsilon_decay

# Initialize gym environment and the agent
env = gym.make('CartPole-v1')
agent = DQLAgent(env)

episodes = 50
batch_size = 32
round_results = []

for e in range(episodes):
    state = env.reset()
    state = np.reshape(state, [1, 4])
    total_reward = 0

    while True:
        action = agent.act(state)
        next_state, reward, done, _ = env.step(action)
        next_state = np.reshape(next_state, [1, 4])
        agent.remember(state, action, reward, next_state, done)
        state = next_state
        agent.replay(batch_size)
        agent.adaptiveEGreedy()
        total_reward += reward

        if done:
            print(f'Episode: {e+1}, Total reward: {total_reward}')
            round_results.append(total_reward)
            break

agent.model.save('dql_cartpole_model.keras')

Demonstrating the effect:

model_path = 'dql_cartpole_model.keras'  # Update this path

env = gym.make('CartPole-v1')
agent = DQLAgent(env, model_path=model_path)  

episodes = 100
round_results = []

for e in range(episodes):
    state = env.reset()
    state = np.reshape(state, [1, 4])
    total_reward = 0

    while True:
        action = agent.act(state)
        next_state, reward, done, _ = env.step(action)
        next_state = np.reshape(next_state, [1, 4])
        state = next_state
        total_reward += reward

        if done:
            print(f'Episode: {e+1}, Total reward: {total_reward}')
            round_results.append(total_reward)
            break

# Plot the rewards
plt.plot(round_results)
plt.title('Rewards per Episode')
plt.xlabel('Episode')
plt.ylabel('Total Reward')
plt.show()
1 Answer

If you load the model with agent = DQLAgent(env, model_path=model_path), self.epsilon is set back to 1, which means the agent always chooses a random action, and will do so almost all of the time during the first episodes of the decay. You should either save epsilon together with the model, or set an evaluation flag for a trained model, so that you always let the model choose the action and drop the chance of random exploration.
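
A minimal sketch of that suggestion, assuming a hypothetical evaluation parameter added to the constructor (this flag is not part of the original code):

class DQLAgent():
    def __init__(self, env, model_path=None, evaluation=False):
        self.env = env
        self.state_size = env.observation_space.shape[0]
        self.action_size = env.action_space.n
        self.gamma = 0.95
        self.learning_rate = 0.001
        # When evaluating a trained model, skip random exploration entirely
        self.epsilon = 0.0 if evaluation else 1.0
        self.epsilon_decay = 0.995
        self.epsilon_min = 0.01
        self.memory = deque(maxlen=2000)

        if model_path:
            self.model = load_model(model_path)  # Load model if path provided
        else:
            self.model = self.build_model()  # Build new model otherwise

    # ... rest of the class unchanged ...

The evaluation script would then build the agent with agent = DQLAgent(env, model_path=model_path, evaluation=True), so act() always falls through to np.argmax over the model's predictions.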
In your current implementation the model also loses its memory, so if you save and load the model and want to train it further, it essentially starts again from zero, with a fresh epsilon and an empty memory.
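
If the goal is to keep training after loading rather than just evaluate, one way to do what is described above (a sketch; the .pkl file name is made up) is to persist epsilon and the replay memory next to the Keras file, e.g. with pickle:

import pickle
from collections import deque

# After training: save the Keras model plus the agent's Python-side state
agent.model.save('dql_cartpole_model.keras')
with open('dql_cartpole_state.pkl', 'wb') as f:
    pickle.dump({'epsilon': agent.epsilon, 'memory': list(agent.memory)}, f)

# Later: restore both before continuing training
agent = DQLAgent(env, model_path='dql_cartpole_model.keras')
with open('dql_cartpole_state.pkl', 'rb') as f:
    saved = pickle.load(f)
agent.epsilon = saved['epsilon']
agent.memory = deque(saved['memory'], maxlen=2000)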
