Install/run error when trying to train an AI model in Unity - NumPy dtype error


Version information:
Python: 3.10.4
ml-agents: 0.30.0,
ml-agents-envs: 0.30.0,
Communicator API: 1.5.0,
PyTorch: 2.0.0+cu118

Environment (pip) packages

    absl-py==1.4.0
    attrs==23.1.0
    cachetools==5.3.0
    cattrs==1.5.0
    certifi==2022.12.7
    charset-normalizer==3.1.0  
    cloudpickle==2.2.1
    filelock==3.12.0
    google-auth==2.17.3        
    google-auth-oauthlib==1.0.0
    grpcio==1.54.0
    gym==0.26.2
    gym-notices==0.0.8
    h5py==3.8.0
    idna==3.4
    Jinja2==3.1.2
    Markdown==3.4.3
    MarkupSafe==2.1.2
    mlagents==0.30.0
    mlagents-envs==0.30.0      
    mpmath==1.2.1
    networkx==3.0
    numpy==1.21.2
    oauthlib==3.2.2
    onnx==1.13.1
    PettingZoo==1.15.0
    Pillow==9.5.0
    protobuf==3.20.3
    pyasn1==0.5.0
    pyasn1-modules==0.3.0      
    pypiwin32==223
    pywin32==306
    PyYAML==6.0
    requests==2.29.0
    requests-oauthlib==1.3.1
    rsa==4.9
    six==1.16.0
    sympy==1.11.1
    tensorboard==2.12.2
    tensorboard-data-server==0.7.0
    tensorboard-plugin-wit==1.8.1
    torch==2.0.0+cu118
    torchaudio==2.0.1+cu118
    torchvision==0.15.1+cu118
    typing_extensions==4.4.0
    urllib3==1.26.15
    Werkzeug==2.3.0

I have been having issues trying to train the ml-agents examples, specifically related to NumPy. I am running Python inside a virtual environment. I am running the 3DBall example from the ml-agents samples and have not changed anything in the code. I cannot seem to get training to work; it keeps ending with this error (full console output below, followed by a minimal reproduction sketch of the failing call):

    [W ..\torch\csrc\utils\tensor_numpy.cpp:84] Warning: Failed to initialize NumPy: module compiled against API version 0x10 but this version of numpy is 0xe . Check the section C-API incompatibility at the Troubleshooting ImportError section at https://numpy.org/devdocs/user/troubleshooting-importerror.html#c-api-incompatibility for indications on how to solve this problem . (function operator ())

                ┐  ╖
            ╓╖╬│╡  ││╬╖╖
        ╓╖╬│││││┘  ╬│││││╬╖
     ╖╬│││││╬╜        ╙╬│││││╖╖                               ╗╗╗
     ╬╬╬╬╖││╦╖        ╖╬││╗╣╣╣╬      ╟╣╣╬    ╟╣╣╣             ╜╜╜  ╟╣╣
     ╬╬╬╬╬╬╬╬╖│╬╖╖╓╬╪│╓╣╣╣╣╣╣╣╬      ╟╣╣╬    ╟╣╣╣ ╒╣╣╖╗╣╣╣╗   ╣╣╣ ╣╣╣╣╣╣ ╟╣╣╖   ╣╣╣
     ╬╬╬╬┐  ╙╬╬╬╬│╓╣╣╣╝╜  ╫╣╣╣╬      ╟╣╣╬    ╟╣╣╣ ╟╣╣╣╙ ╙╣╣╣  ╣╣╣ ╙╟╣╣╜╙  ╫╣╣  ╟╣╣
     ╬╬╬╬┐     ╙╬╬╣╣      ╫╣╣╣╬      ╟╣╣╬    ╟╣╣╣ ╟╣╣╬   ╣╣╣  ╣╣╣  ╟╣╣     ╣╣╣┌╣╣╜
     ╬╬╬╜       ╬╬╣╣      ╙╝╣╣╬      ╙╣╣╣╗╖╓╗╣╣╣╜ ╟╣╣╬   ╣╣╣  ╣╣╣  ╟╣╣╦╓    ╣╣╣╣╣
     ╙   ╓╦╖    ╬╬╣╣   ╓╗╗╖            ╙╝╣╣╣╣╝╜   ╘╝╝╜   ╝╝╝  ╝╝╝   ╙╣╣╣    ╟╣╣╣
       ╩╬╬╬╬╬╬╦╦╬╬╣╣╗╣╣╣╣╣╣╣╝                                             ╫╣╣╣╣
          ╙╬╬╬╬╬╬╬╣╣╣╣╣╣╝╜
              ╙╬╬╬╣╣╣╜
                 ╙

     Version information:
      ml-agents: 0.30.0,
      ml-agents-envs: 0.30.0,
      Communicator API: 1.5.0,
      PyTorch: 2.0.0+cu118
    [W ..\torch\csrc\utils\tensor_numpy.cpp:84] Warning: Failed to initialize NumPy: module compiled against API version 0x10 but this version of numpy is 0xe . Check the section C-API incompatibility at the Troubleshooting ImportError section at https://numpy.org/devdocs/user/troubleshooting-importerror.html#c-api-incompatibility for indications on how to solve this problem . (function operator ())
    [INFO] Listening on port 5004. Start training by pressing the Play button in the Unity Editor.
    [INFO] Connected to Unity environment with package version 2.2.1-exp.1 and communication version 1.5.0
    [INFO] Connected new brain: 3DBall?team=0
    [WARNING] Deleting TensorBoard data events.out.tfevents.1682557176.DESKTOP-6DGKKSC.240.0 that was left over from a previous run.
    [INFO] Hyperparameters for behavior name 3DBall:
            trainer_type:   ppo
            hyperparameters:
              batch_size:   64
              buffer_size:  12000
              learning_rate:        0.0003
              beta: 0.001
              epsilon:      0.2
              lambd:        0.99
              num_epoch:    3
              shared_critic:        False
              learning_rate_schedule:       linear
              beta_schedule:        linear
              epsilon_schedule:     linear
            network_settings:
              normalize:    True
              hidden_units: 128
              num_layers:   2
              vis_encode_type:      simple
              memory:       None
              goal_conditioning_type:       hyper
              deterministic:        False
            reward_signals:
              extrinsic:
                gamma:      0.99
                strength:   1.0
                network_settings:
                  normalize:        False
                  hidden_units:     128
                  num_layers:       2
                  vis_encode_type:  simple
                  memory:   None
                  goal_conditioning_type:   hyper
                  deterministic:    False
            init_path:      None
            keep_checkpoints:       5
            checkpoint_interval:    500000
            max_steps:      500000
            time_horizon:   1000
            summary_freq:   12000
            threaded:       False
            self_play:      None
            behavioral_cloning:     None
    ============= Diagnostic Run torch.onnx.export version 2.0.0+cu118 =============
    verbose: False, log level: Level.ERROR
    ======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================

    [INFO] Exported results\yes\3DBall\3DBall-0.onnx
    [INFO] Copied results\yes\3DBall\3DBall-0.onnx to results\yes\3DBall.onnx.
    Traceback (most recent call last):
      File "J:\Code\CS4100\finalproject\ml-agents\my_env\Scripts\mlagents-learn-script.py", line 33, in <module>
        sys.exit(load_entry_point('mlagents==0.30.0', 'console_scripts', 'mlagents-learn')())
      File "J:\Code\CS4100\finalproject\ml-agents\my_env\lib\site-packages\mlagents\trainers\learn.py", line 264, in main
        run_cli(parse_command_line())
      File "J:\Code\CS4100\finalproject\ml-agents\my_env\lib\site-packages\mlagents\trainers\learn.py", line 260, in run_cli
        run_training(run_seed, options, num_areas)
      File "J:\Code\CS4100\finalproject\ml-agents\my_env\lib\site-packages\mlagents\trainers\learn.py", line 136, in run_training
        tc.start_learning(env_manager)
      File "J:\Code\CS4100\finalproject\ml-agents\my_env\lib\site-packages\mlagents_envs\timers.py", line 305, in wrapped
        return func(*args, **kwargs)
      File "J:\Code\CS4100\finalproject\ml-agents\my_env\lib\site-packages\mlagents\trainers\trainer_controller.py", line 175, in start_learning     
        n_steps = self.advance(env_manager)
      File "J:\Code\CS4100\finalproject\ml-agents\my_env\lib\site-packages\mlagents_envs\timers.py", line 305, in wrapped
        return func(*args, **kwargs)
      File "J:\Code\CS4100\finalproject\ml-agents\my_env\lib\site-packages\mlagents\trainers\trainer_controller.py", line 233, in advance
        new_step_infos = env_manager.get_steps()
      File "J:\Code\CS4100\finalproject\ml-agents\my_env\lib\site-packages\mlagents\trainers\env_manager.py", line 124, in get_steps
        new_step_infos = self._step()
      File "J:\Code\CS4100\finalproject\ml-agents\my_env\lib\site-packages\mlagents\trainers\subprocess_env_manager.py", line 408, in _step
        self._queue_steps()
      File "J:\Code\CS4100\finalproject\ml-agents\my_env\lib\site-packages\mlagents\trainers\subprocess_env_manager.py", line 302, in _queue_steps
        env_action_info = self._take_step(env_worker.previous_step)
      File "J:\Code\CS4100\finalproject\ml-agents\my_env\lib\site-packages\mlagents_envs\timers.py", line 305, in wrapped
        return func(*args, **kwargs)
      File "J:\Code\CS4100\finalproject\ml-agents\my_env\lib\site-packages\mlagents\trainers\subprocess_env_manager.py", line 543, in _take_step
        all_action_info[brain_name] = self.policies[brain_name].get_action(
      File "J:\Code\CS4100\finalproject\ml-agents\my_env\lib\site-packages\mlagents\trainers\policy\torch_policy.py", line 130, in get_action
        run_out = self.evaluate(decision_requests, global_agent_ids)
      File "J:\Code\CS4100\finalproject\ml-agents\my_env\lib\site-packages\mlagents_envs\timers.py", line 305, in wrapped
        return func(*args, **kwargs)
      File "J:\Code\CS4100\finalproject\ml-agents\my_env\lib\site-packages\mlagents\trainers\policy\torch_policy.py", line 94, in evaluate
        tensor_obs = [torch.as_tensor(np_ob) for np_ob in obs]
      File "J:\Code\CS4100\finalproject\ml-agents\my_env\lib\site-packages\mlagents\trainers\policy\torch_policy.py", line 94, in <listcomp>
        tensor_obs = [torch.as_tensor(np_ob) for np_ob in obs]
    RuntimeError: Could not infer dtype of numpy.float32
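
For context, the final frame in the traceback is a plain torch.as_tensor(np_ob) call on a float32 observation array, so my assumption is that the failure comes from the installed NumPy/PyTorch pairing itself rather than anything in the 3DBall scene. Below is a minimal sketch that should exercise the same call outside ML-Agents (the shape and the expected outcome are my assumptions, not from the original log):

    # Minimal reproduction sketch (assumption, not part of the original post):
    # with a compatible NumPy this just prints a tensor, but pairing
    # numpy==1.21.2 with torch==2.0.0+cu118 should reproduce both the
    # "Failed to initialize NumPy" warning and the RuntimeError above.
    import numpy as np
    import torch  # the C-API mismatch warning is typically emitted on this import

    # Mimic one ML-Agents observation batch: a single agent with 8 float32 values.
    obs = np.zeros((1, 8), dtype=np.float32)

    # Same call as torch_policy.py line 94: torch.as_tensor(np_ob)
    tensor_obs = torch.as_tensor(obs)
    print(tensor_obs.dtype, tensor_obs.shape)  # expected: torch.float32 torch.Size([1, 8])

If this snippet fails the same way inside the same virtual environment, the problem is purely the numpy/torch combination installed there and not the ML-Agents example.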

Tags: python, unity3d, pytorch, ml-agent