Trained RL CartPole model produces poor rewards with Stable Baselines

Problem description

I am trying to run the A2C algorithm on the CartPole environment using stable-baselines3. Training appears to succeed and reaches the desired reward, but when I try to use the model afterwards, the rewards are low. Here is my code. What am I doing wrong?

import os
import gymnasium as gym
import numpy as np

from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize
from stable_baselines3.common.env_util import make_vec_env

env_id = "CartPole-v1"
env = gym.make(env_id)

s_size = env.observation_space.shape
a_size = env.action_space

print("_____OBSERVATION SPACE_____ \n")
print("The State Space is: ", s_size)
print("Sample observation", env.observation_space.sample()) # Get a random observation

_____OBSERVATION SPACE_____

The State Space is: (4,)
Sample observation [-2.3014314e+00  4.4097112e+37 -4.1089469e-01  2.7118910e+38]

# 4 parallel envs, with online normalization of observations and rewards
envs = make_vec_env(env_id, seed=1, n_envs=4)
envs = VecNormalize(envs, norm_obs=True, norm_reward=True, clip_obs=10.)
model = A2C(policy="MlpPolicy", env=envs, verbose=1)
model.learn(15_000)

After this step, I save the model and the VecNormalize statistics, then reload them for evaluation.

model.save("a2c-"+env_id)
envs.save("vec_normalize.pkl")

When I load the saved model and evaluate it, I get a mean reward of 500:

# Load the saved statistics
eval_env = DummyVecEnv([lambda: gym.make(env_id)])
eval_env = VecNormalize.load("vec_normalize.pkl", eval_env)

# We need to override the render_mode
eval_env.render_mode = "rgb_array"

# do not update the normalization statistics at test time
eval_env.training = False
# reward normalization is not needed at test time
eval_env.norm_reward = False

# Load the agent
model = A2C.load("a2c-"+env_id)
mean_reward, std_reward = evaluate_policy(model, eval_env)

print(f"Mean reward = {mean_reward:.2f} +/- {std_reward:.2f}")

Mean reward = 500.00 +/- 0.00
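Note that evaluate_policy runs on the normalized eval_env and, per the SB3 defaults, uses 10 episodes and deterministic actions. The same call with those defaults written out (a sketch, nothing beyond the defaults):

# same evaluation with SB3's default arguments made explicit
mean_reward, std_reward = evaluate_policy(
    model, eval_env, n_eval_episodes=10, deterministic=True
)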

However, when I test the model myself with the loop below, the returns are dismal:

for epi in range(10):
    score = 0
    state = env.reset()[0]
    done = False
    while not done:
        a, _ = model.predict(state)
        # an episode ends on termination (pole fell) or truncation (time limit)
        state_, r, terminated, truncated, _ = env.step(a)
        done = terminated or truncated
        score += r
        state = np.copy(state_)
        env.render()
    print(f"Episode {epi} score {score}")

env.close()

Episode 0 score 71.0
Episode 1 score 70.0
Episode 2 score 83.0
Episode 3 score 62.0
Episode 4 score 63.0
Episode 5 score 59.0
Episode 6 score 52.0
Episode 7 score 54.0
Episode 8 score 60.0
Episode 9 score 69.0

python-3.x reinforcement-learning stable-baselines
1 Answer

I was finally able to solve this. The model was trained on observations scaled by VecNormalize, so feeding it raw, unscaled observations at test time is what hurt performance. The test rollout has to run in eval_env itself (the VecNormalize wrapper reloaded from vec_normalize.pkl):

images = []  # frames for an optional replay

for epi in range(10):
    obs = eval_env.reset()
    done = False
    score = 0
    img = eval_env.render(mode="rgb_array")
    images.append(img)
    while not done:
        action, _states = model.predict(obs)
        obs, rewards, dones, info = eval_env.step(action)
        done = dones[0]  # single env inside the DummyVecEnv
        score += rewards[0]
    print(score)

494.0
297.0
359.0
402.0
406.0
500.0
500.0
500.0
371.0
371.0
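For completeness, the raw-env loop from the question can also be fixed by applying the saved normalization statistics to each observation before querying the policy. A minimal sketch, assuming SB3's VecNormalize.normalize_obs and the file names saved above; deterministic=True is passed because model.predict samples stochastically by default, while evaluate_policy acts deterministically:

import gymnasium as gym

from stable_baselines3 import A2C
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize

env_id = "CartPole-v1"
raw_env = gym.make(env_id)

# reload the trained policy and the saved normalization statistics
model = A2C.load("a2c-" + env_id)
vec_stats = VecNormalize.load("vec_normalize.pkl", DummyVecEnv([lambda: gym.make(env_id)]))
vec_stats.training = False  # freeze the running mean/std

for epi in range(10):
    obs, _ = raw_env.reset()
    done = False
    score = 0.0
    while not done:
        # scale the raw observation exactly as the model saw in training
        norm_obs = vec_stats.normalize_obs(obs)
        action, _ = model.predict(norm_obs, deterministic=True)
        obs, r, terminated, truncated, _ = raw_env.step(int(action))
        done = terminated or truncated
        score += r  # raw, unnormalized reward
    print(f"Episode {epi} score {score}")

raw_env.close()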
