Converting a TensorFlow model to TensorFlow Lite: Xcode error


I'm new to both Xcode and TensorFlow, but I have a TensorFlow model (no documentation, just the model itself) that I'm supposed to convert to TensorFlow Lite and use in a sample iOS app (built with Xcode).

I followed a few tutorials and used this approach:

import tensorflow as tf

# Paths for the original SavedModel and the converted TFLite files
_PATH_TO_TF_ORIG_MODEL = "models/yolov4-tiny_tensorflow_saved"
_MODEL_OUTPUT_PATH = "yolov4-tiny_tensorflow.tflite"
_MODEL_OUTPUT_METADATA_PATH = "yolov4-tiny_tensorflow_metadata.tflite"

# Load the SavedModel into the converter
converter = tf.lite.TFLiteConverter.from_saved_model(_PATH_TO_TF_ORIG_MODEL)

# Set the optimization strategy (optional)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Enable Flex Ops
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # Enable TensorFlow Lite ops.
    tf.lite.OpsSet.SELECT_TF_OPS     # Enable TensorFlow ops (Flex Ops).
]


# Run the conversion
tflite_model = converter.convert()

# Save the model
with open(_MODEL_OUTPUT_PATH, 'wb') as f:
    f.write(tflite_model)
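Since SELECT_TF_OPS is enabled, the converted file may contain Flex ops, which the standard TensorFlow Lite runtime on iOS cannot execute on its own. As a quick check of which ops actually ended up in the file, recent TensorFlow releases ship a model analyzer (a minimal sketch; it assumes tf.lite.experimental.Analyzer is available in the installed TF version):

# Print the ops contained in the converted model; any entries prefixed
# with "Flex" require the Select TF Ops (Flex) delegate at runtime.
tf.lite.experimental.Analyzer.analyze(model_path=_MODEL_OUTPUT_PATH)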

Then, when I test the Lite model:

import cv2
import numpy as np
import matplotlib.pyplot as plt

# Load the converted model and allocate its tensors
interpreter = tf.lite.Interpreter(model_path=_MODEL_OUTPUT_PATH)
interpreter.allocate_tensors()

# Get model input and output details
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Read and preprocess the test image
image = cv2.imread(_PATH_TO_TEST_IMAGE)
input_shape = input_details[0]['shape']  # [1, height, width, channels]
image_resized = cv2.resize(image, (input_shape[2], input_shape[1]))  # cv2.resize expects (width, height)
input_data = np.expand_dims(image_resized, axis=0).astype(np.float32)

# Normalize the input if your model expects normalization
input_data = input_data / 255

# Set the tensor to point to the input data to be inferred
interpreter.set_tensor(input_details[0]['index'], input_data)

interpreter.invoke()

# Retrieve the model output
output_data = interpreter.get_tensor(output_details[0]['index'])

# Original image dimensions, used to scale normalized boxes back to pixels
height, width, _ = image.shape

confidence_threshold = 0.0

# Class labels mapping (COCO-trained YOLOv4-tiny has 80 classes)
class_labels = {i: f"Class {i}" for i in range(80)}

# Drawing detections on the image
for i in range(output_data.shape[1]):  # Iterate over all detections
    detection = output_data[0, i, :]
    bbox = detection[:4]                 # box corners, normalized: [y_min, x_min, y_max, x_max]
    score = detection[4]                 # objectness / confidence score
    class_probabilities = detection[5:]  # per-class probabilities
    
    if score < confidence_threshold:
        continue  # Skip low-confidence detections

    class_id = np.argmax(class_probabilities)
    class_probability = class_probabilities[class_id]
    
    # Convert from normalized to pixel coordinates
    x_min = int(bbox[1] * width)
    y_min = int(bbox[0] * height)
    x_max = int(bbox[3] * width)
    y_max = int(bbox[2] * height)

    # Draw bounding box and label
    cv2.rectangle(image, (x_min, y_min), (x_max, y_max), (0, 255, 0), 2)
    label = f"{class_labels.get(class_id, 'Unknown')}: {class_probability:.2f}"
    cv2.putText(image, label, (x_min, y_min - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 2)

# Display image with matplotlib to avoid issues in Jupyter Notebook
plt.figure(figsize=(10, 10))
plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))  # Convert BGR to RGB for correct display
plt.show()

Everything works, and the output looks like this (I don't care about performance; this isn't my model, I'm only doing the conversion):

[image: test image with detection boxes drawn on it]

But when I add the model to my Xcode project, I get:

Failed to create the interpreter with error: NOT_FOUND: Input tensor has type kTfLiteFloat32: it requires specifying NormalizationOptions metadata to preprocess input images.

I even tried normalizing the data manually before passing it to the model in Xcode, following the same normalization I use locally in my notebook (the code above), but I always get the same error.

I also tried adding the metadata manually and saving it, then using that model instead, but the result is the same:

import tensorflow as tf
from tflite_support.metadata_writers import image_classifier
from tflite_support.metadata_writers import writer_utils

ImageClassifierWriter = image_classifier.MetadataWriter

INPUT_IMAGE_SHAPE = [1, 416, 416, 3]  # model input shape (not used below)
_INPUT_NORM_MEAN = 127.5
_INPUT_NORM_STD = 127.5

# Load the TFLite model
tflite_model_path = _MODEL_OUTPUT_PATH
tflite_model = tf.io.gfile.GFile(tflite_model_path, 'rb').read()

# Create the metadata writer and populate
model_meta = ImageClassifierWriter.create_for_inference(
    tflite_model,
    input_norm_mean=[_INPUT_NORM_MEAN],
    input_norm_std=[_INPUT_NORM_STD],
    label_file_paths=["classes.names"]
)

writer_utils.save_file(model_meta.populate(), _MODEL_OUTPUT_METADATA_PATH)
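(As a sanity check, the metadata written this way can be inspected with the MetadataDisplayer from the same tflite_support package; a minimal sketch:)

from tflite_support import metadata

# Dump the saved model's metadata as JSON; if the writer worked,
# NormalizationOptions should appear under the input tensor's
# process_units, and classes.names should be listed as a packed file.
displayer = metadata.MetadataDisplayer.with_model_file(_MODEL_OUTPUT_METADATA_PATH)
print(displayer.get_metadata_json())
print(displayer.get_packed_associated_file_list())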

Any idea what I'm missing?

python xcode tensorflow tensorflow-lite
1 Answer

Basic suggestion: don't forget "import TensorFlowLite", since you are using TensorFlowLite.
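Beyond that, since the model is an object detector rather than an image classifier, the image_classifier metadata writer used in the question may not attach the kind of metadata the iOS library checks for. The same tflite_support package also ships an object_detector writer; the following is a minimal sketch under that assumption (note that this writer expects the standard four-output detector signature of locations, classes, scores and detection count, which a raw YOLO output tensor may not match):

from tflite_support.metadata_writers import object_detector, writer_utils

ObjectDetectorWriter = object_detector.MetadataWriter

# Same normalization parameters as in the question; adjust to whatever
# the model was actually trained with.
writer = ObjectDetectorWriter.create_for_inference(
    writer_utils.load_file("yolov4-tiny_tensorflow.tflite"),
    input_norm_mean=[127.5],
    input_norm_std=[127.5],
    label_file_paths=["classes.names"],
)
writer_utils.save_file(writer.populate(), "yolov4-tiny_tensorflow_metadata.tflite")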
