n_state, reward, done, info = env.step(action) returns the wrong number of values

Problem description
episodes = 10
for episode in range(1, episodes+1):
    state = env.reset()
    done = False
    score = 0 
    
    while not done:
        env.render()
        action = random.choice([0,1])
        n_state, reward, done, info = env.step(action)
        score+=reward
    print('Episode:{} Score:{}'.format(episode, score))

The line n_state, reward, done, info = env.step(action) raises this error:

ValueError                                Traceback (most recent call last)
Cell In[51], line 10
      8     env.render()
      9     action = random.choice([0,1])
---> 10     n_state, reward, done, info = env.step(action)
     11     score+=reward
     12 print('Episode:{} Score:{}'.format(episode, score))

ValueError: too many values to unpack (expected 4)

This code comes from a tutorial video, where it appears to work, but for me it always raises this error.

import os
import random  # needed for random.choice below
import gym
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import DummyVecEnv
from stable_baselines3.common.evaluation import evaluate_policy
environment_name = "CartPole-v0"
env = gym.make(environment_name)
episodes = 10
for episode in range(1, episodes+1):
    state = env.reset()
    done = False
    score = 0 
    
    while not done:
        env.render()
        action = random.choice([0,1])
        n_state, reward, done, info = env.step(action)
        score+=reward
    print('Episode:{} Score:{}'.format(episode, score))
python machine-learning reinforcement-learning openai-gym stable-baselines
1 Answer

Newer gym versions (0.26 and later) use a 5-tuple for the output of env.step(action), namely state, reward, terminated, truncated, info. truncated is a boolean indicating an unexpected end of the episode, such as a time limit or a nonexistent state. The consequence is the same in either case: the agent-environment loop should end.
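
If you want to confirm that this is the cause, check which version is installed; from gym 0.26.0 on, the 5-tuple step API is the default:

import gym
print(gym.__version__)  # 0.26.0 or later means step() returns 5 values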

So what you actually want to do is:

truncated = False  # initialize before the first check, otherwise NameError
while not (done or truncated):
    env.render()
    action = random.choice([0, 1])
    n_state, reward, done, truncated, info = env.step(action)
    score += reward
print('Episode:{} Score:{}'.format(episode, score))
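
Note that env.reset() changed in the same release and now returns an (observation, info) pair, and the render mode is passed to gym.make() instead of calling env.render() every step. A minimal sketch of the full episode loop under the new API (assuming gym 0.26+; with render_mode="human" the window updates automatically inside step(), so the explicit render call can be dropped):

import random
import gym

# gym 0.26+: the render mode is chosen at construction time
env = gym.make("CartPole-v0", render_mode="human")

episodes = 10
for episode in range(1, episodes + 1):
    state, info = env.reset()  # reset() now returns (observation, info)
    done = False
    truncated = False
    score = 0

    while not (done or truncated):
        action = random.choice([0, 1])
        n_state, reward, done, truncated, info = env.step(action)
        score += reward
    print('Episode:{} Score:{}'.format(episode, score))

env.close()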