TensorFlow Lite model on Coral Dev Board not running on the TPU

Question · Votes: 1 · Answers: 1

I have a TensorFlow Lite model and a Coral Dev Board, and I want to perform inference on the Dev Board's TPU.

When initialising the TensorFlow Lite interpreter in my Python inference script, I add "libedgetpu.so.1" as an experimental delegate, following the Google Coral TFLite Python example (linked from the getting started guide for the Coral Dev Board). However, inference runs at exactly the same speed whether or not I specify the TPU experimental delegate, so I assume inference is still running on the Dev Board's CPU. Inference time on the Dev Board (with and without the experimental delegate) is 32 s; on my desktop PC, inference on the same test set takes 10 s if I run the TFLite model on the CPU, and 1.3 s if I run the same model in Keras before converting to TFLite (I assume this is faster than TFLite because it makes use of multiple cores).
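One quick diagnostic (a minimal sketch, separate from the full script below): load_delegate() raises a ValueError when the Edge TPU runtime cannot be loaded, so a successful call at least confirms that libedgetpu itself is found. Note that even with the delegate loaded, any op that has not been compiled for the Edge TPU still runs on the CPU.

from tflite_runtime.interpreter import load_delegate

try:
    # Raises ValueError if the Edge TPU runtime cannot be loaded
    delegate = load_delegate('libedgetpu.so.1.0')
    print("Edge TPU delegate loaded OK")
except ValueError as err:
    print("Could not load Edge TPU delegate:", err)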

My question: how can I make inference run on the Dev Board's TPU instead of its CPU?

I wonder whether this is something I need to specify when building the Keras model on my PC before converting it to TFLite format (e.g. using a with tf.device context manager, or something that makes the resulting TFLite model use the TPU), but I can't see anything about this in the TensorFlow Lite Converter Python API documentation.
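(For reference, per the Coral docs the converter has no TPU "target" as such; the Edge TPU instead requires full integer quantization at conversion time, followed by the separate edgetpu_compiler tool. Below is a rough sketch of the quantization step, assuming keras_model and x_train come from the training script:)

import numpy as np
import tensorflow as tf

# keras_model and x_train are assumed to come from the training script
def representative_dataset():
    # Yield a few training samples so the converter can calibrate ranges
    for sample in x_train[:100]:
        yield [np.expand_dims(sample, axis=0).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict the model to int8 ops, which is what the Edge TPU executes
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("lstm_mnist_model_quant.tflite", "wb") as f:
    f.write(converter.convert())

The resulting file then still has to be run through edgetpu_compiler (an x86-64 Linux tool); any ops the compiler cannot map remain on the CPU.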

The Dev Board is running Mendel version 2.0, Python version 3.5.3, and tflite-runtime version 2.1.0.post1 (I know I should update the Mendel version, but I'm currently on a Windows PC, and it's going to be a pain to get access to a Linux machine, or to try to update the Dev Board from Windows using PuTTY, VirtualBox, or WSL).

Below is my inference script (I can also upload the training script and model if necessary; the dataset is MNIST, converted to NumPy float data as described in this Gist):

import numpy as np
from time import perf_counter
try:
    # Try importing the small tflite_runtime module (this runs on the Dev Board)
    print("Trying to import tensorflow lite runtime...")
    from tflite_runtime.interpreter import Interpreter, load_delegate
    experimental_delegates = [load_delegate('libedgetpu.so.1.0')]
except ModuleNotFoundError:
    # Try importing the full tensorflow module (this runs on PC)
    try:
        print("TFLite runtime not found; trying to import full tensorflow...")
        import tensorflow as tf
        Interpreter = tf.lite.Interpreter
        experimental_delegates = None
    except ModuleNotFoundError:
        # Couldn't import either module
        raise RuntimeError("Could not import TensorFlow or TensorFlow Lite")

# Load data
mnist_file = np.load("data/mnist.npz")
x_test = mnist_file["x_test"]
y_test = mnist_file["y_test"]
x_test = x_test.astype(np.float32)

# Initialise the interpreter
tfl_filename = "lstm_mnist_model_b10000.tflite"
interpreter = Interpreter(model_path=tfl_filename,
    experimental_delegates=experimental_delegates)
interpreter.allocate_tensors()

print("Starting evaluation...")
for _ in range(3):
    input_index = interpreter.get_input_details()[0]['index']
    output_index = interpreter.get_output_details()[0]['index']
    # Perform inference
    t0 = perf_counter()
    interpreter.set_tensor(input_index, x_test)
    interpreter.invoke()
    result = interpreter.get_tensor(output_index)
    t1 = perf_counter()
    # Print accuracy and speed
    num_correct = (result.argmax(axis=1) == y_test).sum()
    print("Time taken (TFLite) = {:.4f} s".format(t1 - t0))
    print('TensorFlow Lite Evaluation accuracy = {} %'.format(
        100 * num_correct / len(x_test)))
    # Reset interpreter state (I don't know why this should be necessary, but
    # accuracy suffers without it)
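    # (Most likely because the model is an LSTM: its state lives in variable
    # tensors that persist across invoke() calls, and reset_all_variables()
    # clears them between evaluation runs.)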
    interpreter.reset_all_variables()


machine-learning tensorflow2.0 tensorflow-lite tpu google-coral
1 Answer

0 votes

It looks like you have already asked this question on our GitHub page, and it was answered here. Just sharing it for others' reference.
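For later readers, the gist of the usual fix (per the Coral documentation, not a verbatim copy of the linked answer): a plain .tflite model runs entirely on the CPU. It must be fully int8-quantized and then compiled with edgetpu_compiler, which writes a *_edgetpu.tflite file that the delegate can actually map onto the TPU; any ops the compiler does not support (LSTM kernels among them at the time) fall back to the CPU. Loading the compiled model would look something like this (the filename is hypothetical):

from tflite_runtime.interpreter import Interpreter, load_delegate

# "_edgetpu" is the suffix edgetpu_compiler appends to its output file;
# the exact filename here is hypothetical
interpreter = Interpreter(
    model_path="lstm_mnist_model_quant_edgetpu.tflite",
    experimental_delegates=[load_delegate('libedgetpu.so.1.0')])
interpreter.allocate_tensors()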
