Assertion failed: inputs.at(0).isInt32() && "For range operator with dynamic inputs, this version of TensorRT only supports INT32!"

Problem description

I am trying to export an `.engine` from the `onnx` of a pretrained `yolov8m` model, but I am running into a problem with `trtexec`. Note that my goal is a model that supports a `dynamic` batch size.

I obtained the onnx by following the official ultralytics instructions:

from ultralytics import YOLO

# Load a model
model = YOLO('yolov8m.pt')  # load an official model
model = YOLO('path/to/best.pt')  # load a custom trained model

# Export the model
model.export(format='onnx', dynamic=True)  # note the dynamic arg

I got the corresponding onnx. Now, when I try to run `trtexec`:

trtexec --onnx=yolov8m.onnx --workspace=8144 --fp16 --minShapes=input:1x3x640x640 --optShapes=input:2x3x640x640 --maxShapes=input:10x3x640x640 --saveEngine=my.engine

I get:

[08/10/2023-23:53:10] [I] TensorRT version: 8.2.5
[08/10/2023-23:53:11] [I] [TRT] [MemUsageChange] Init CUDA: CPU +336, GPU +0, now: CPU 348, GPU 4361 (MiB)
[08/10/2023-23:53:11] [I] [TRT] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 348 MiB, GPU 4361 MiB
[08/10/2023-23:53:12] [I] [TRT] [MemUsageSnapshot] End constructing builder kernel library: CPU 483 MiB, GPU 4393 MiB
[08/10/2023-23:53:12] [I] Start parsing network model
[08/10/2023-23:53:12] [I] [TRT] ----------------------------------------------------------------
[08/10/2023-23:53:12] [I] [TRT] Input filename:   yolov8m.onnx
[08/10/2023-23:53:12] [I] [TRT] ONNX IR version:  0.0.8
[08/10/2023-23:53:12] [I] [TRT] Opset version:    17
[08/10/2023-23:53:12] [I] [TRT] Producer name:    pytorch
[08/10/2023-23:53:12] [I] [TRT] Producer version: 2.0.1
[08/10/2023-23:53:12] [I] [TRT] Domain:           
[08/10/2023-23:53:12] [I] [TRT] Model version:    0
[08/10/2023-23:53:12] [I] [TRT] Doc string:       
[08/10/2023-23:53:12] [I] [TRT] ----------------------------------------------------------------
[08/10/2023-23:53:12] [W] [TRT] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/10/2023-23:53:12] [E] [TRT] ModelImporter.cpp:773: While parsing node number 305 [Range -> "/model.22/Range_output_0"]:
[08/10/2023-23:53:12] [E] [TRT] ModelImporter.cpp:774: --- Begin node ---
[08/10/2023-23:53:12] [E] [TRT] ModelImporter.cpp:775: input: "/model.22/Constant_8_output_0"
input: "/model.22/Cast_output_0"
input: "/model.22/Constant_9_output_0"
output: "/model.22/Range_output_0"
name: "/model.22/Range"
op_type: "Range"

 

[08/10/2023-23:53:12] [E] [TRT] ModelImporter.cpp:776: --- End node ---
[08/10/2023-23:53:12] [E] [TRT] ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:3353 In function importRange:
[8] Assertion failed: inputs.at(0).isInt32() && "For range operator with dynamic inputs, this version of TensorRT only supports INT32!"
[08/10/2023-23:53:12] [E] Failed to parse onnx file
[08/10/2023-23:53:12] [I] Finish parsing network model
[08/10/2023-23:53:12] [E] Parsing model failed
[08/10/2023-23:53:12] [E] Failed to create engine from model.

I know some people suggest upgrading to the latest TRT version, but I am looking for an alternative solution.

1 Answer

An alternative is to export a static model with the maximum batch size my application needs (here, 32).

$ python3 export_yoloV8.py -w yolov8m.pt --batch 32 --simplify # note not using dynamic flag
Then export the engine:
$ trtexec --onnx=dyolov8m-simple.onnx --workspace=8144 --fp16 --minShapes=input:1x3x640x640 --optShapes=input:2x3x640x640 --maxShapes=input:32x3x640x640 --saveEngine=yolov8m.engine
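One consequence of the static export worth noting: the resulting engine expects exactly a batch of 32, so at inference time smaller batches have to be padded up to that size (and the outputs for the padding discarded). A minimal NumPy sketch, assuming NCHW float32 input; the `pad_batch` helper name is my own, not from any library:

```python
import numpy as np

def pad_batch(batch: np.ndarray, engine_batch: int = 32) -> np.ndarray:
    """Zero-pad an (N, 3, 640, 640) batch up to the engine's fixed batch size."""
    n = batch.shape[0]
    if n > engine_batch:
        raise ValueError(f"batch of {n} exceeds engine batch {engine_batch}")
    pad = np.zeros((engine_batch - n, *batch.shape[1:]), dtype=batch.dtype)
    return np.concatenate([batch, pad], axis=0)

# Example: 5 images padded to 32; ignore the outputs for the padded 27 slots.
imgs = np.random.rand(5, 3, 640, 640).astype(np.float32)
padded = pad_batch(imgs)
print(padded.shape)  # (32, 3, 640, 640)
```

The trade-off versus a true dynamic engine is wasted compute on the padding, but it sidesteps the INT32 `Range` assertion entirely because no dynamic shapes are involved.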