Extracting output tensors from arbitrary layers of an ONNX model

Question (votes: 0, answers: 2)

I want to extract the outputs of different layers of an ONNX model (e.g. squeezenet.onnx) during image inference. I am trying to use the code from [How to extract output tensor from any layer of models][1]:

    # add all intermediate outputs to onnx net
    ort_session = ort.InferenceSession('<your path>/model.onnx')
    org_outputs = [x.name for x in ort_session.get_outputs()]

    model = onnx.load('<your path>/model.onnx')
    for node in model.graph.node:
        for output in node.output:
            if output not in org_outputs:
                model.graph.output.extend([onnx.ValueInfoProto(name=output)])

    # execute onnx
    ort_session = ort.InferenceSession(model.SerializeToString())
    outputs = [x.name for x in ort_session.get_outputs()]
    img_path = '<your path>/input_img.raw'
    img = get_image(img_path, show=True)
    transform_fn = transforms.Compose([
        transforms.Resize(224),
        transforms.ToTensor(),
    ])
    img = transform_fn(img)
    img = img.expand_dims(axis=0)
    ort_outs = ort_session.run(outputs, {'data': img})
    ort_outs = OrderedDict(zip(outputs, ort_outs))

Although I managed to get the required input size, I still receive the following error:

---> 40 ort_outs = ort_session.run(outputs, {'data': img} )
     41 ort_outs = OrderedDict(zip(outputs, ort_outs))

/usr/local/lib/python3.7/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py in run(self, output_names, input_feed, run_options)
    198             output_names = [output.name for output in self._outputs_meta]
    199         try:
--> 200             return self._sess.run(output_names, input_feed, run_options)
    201         except C.EPFail as err:
    202             if self._enable_fallback:

RuntimeError: Input must be a list of dictionaries or a single numpy array for input 'data'.

How can I solve this problem? Any help is appreciated, thanks! [1]: https://github.com/microsoft/onnxruntime/issues/1455

python tensorflow image-processing conv-neural-network onnx
2 Answers

0 votes

Unfortunately, this is not possible. However, you can re-export the original model from PyTorch to ONNX and add the desired layer's output to the return statement of the model's forward method. (You may have to pass it up through several methods until it reaches the model's outermost forward method.)


-2 votes

I found a good implementation at the following link: https://github.com/microsoft/onnxruntime/issues/1455#issuecomment-979901463
I adapted it for my purposes, and it worked.

    import numpy as np
    import onnx
    import onnxruntime as ort
    from collections import OrderedDict

    # add all intermediate outputs to onnx net
    ort_session = ort.InferenceSession('<your path>/model.onnx')
    org_outputs = [x.name for x in ort_session.get_outputs()]

    model = onnx.load('<your path>/model.onnx')
    for node in model.graph.node:
        for output in node.output:
            if output not in org_outputs:
                model.graph.output.extend([onnx.ValueInfoProto(name=output)])

    # execute onnx
    ort_session = ort.InferenceSession(model.SerializeToString())
    outputs = [x.name for x in ort_session.get_outputs()]
    in_img = np.fromfile('<your path>/input_img.raw', dtype=np.float32).reshape(1, 3, 511, 511)
    ort_outs = ort_session.run(outputs, {'actual_input_1': in_img})
    ort_outs = OrderedDict(zip(outputs, ort_outs))
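The RuntimeError in the question comes from the input type: onnxruntime's run() only accepts numpy arrays (or lists/dicts of them), not framework tensors, which is why feeding the raw file through np.fromfile above works. If the image is preprocessed with a tensor pipeline instead, it has to be converted first; a minimal sketch, assuming a PyTorch tensor as a stand-in for the question's transform_fn output:

```python
import numpy as np
import torch

# stand-in for the (C, H, W) tensor produced by the question's transform_fn
img = torch.rand(3, 224, 224)

# add the batch dimension and convert to numpy before ort_session.run(...)
img_np = img.unsqueeze(0).numpy()
```

The resulting array is then fed as {'data': img_np}, using whatever input name the model actually declares.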
© www.soinside.com 2019 - 2024. All rights reserved.