I trained a TensorFlow model outside of SageMaker.
I'm trying to focus on deployment/inference, but I'm running into an inference problem.
For deployment, I did this:
from sagemaker.tensorflow.serving import TensorFlowModel

instance_type = 'ml.c5.xlarge'
model = TensorFlowModel(
    model_data=model_data,
    name='tfmodel1',
    framework_version="2.2",
    role=role,
    source_dir='code',
)
predictor = model.deploy(
    endpoint_name='test',
    initial_instance_count=1,
    tags=tags,
    instance_type=instance_type,
)
When I try to run inference against the model, I do this:
import json

import boto3
import numpy as np
from PIL import Image

image = Image.open('img_test.jpg')
client = boto3.client('sagemaker-runtime')

batch_size = 1
image = np.asarray(image.resize((512, 512)))
image = np.concatenate([image[np.newaxis, :, :]] * batch_size)
body = json.dumps({"instances": image.tolist()})

ioc_predictor_endpoint_name = "test"
content_type = 'application/x-image'
ioc_response = client.invoke_endpoint(
    EndpointName=ioc_predictor_endpoint_name,
    Body=body,
    ContentType=content_type,
)
But I get this error:
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (415) from primary with message "{"error": "Unsupported Media Type: application/x-image"}".
I also tried:
from sagemaker.predictor import Predictor

predictor = Predictor(ioc_predictor_endpoint_name)
inference_response = predictor.predict(data=body)
print(inference_response)
and got this error:
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (415) from primary with message "{"error": "Unsupported Media Type: application/octet-stream"}".
What can I do? I don't know if I'm missing something.
Have you tested this model locally? How does inference work locally with your TF model? That should show you how the input needs to be formatted to run inference against this particular model. The application/x-image data format should be fine. Do you have a custom inference script? Check out this link — adding an inference script gives you control over pre/post-processing, and you can log each line to track down errors: https://github.com/aws/sagemaker-tensorflow-serving-container.
Did you solve the error? I have the same one. Can you help me?