SageMaker: specifying a custom entry point causes a FileNotFoundError

Question

I am trying to deploy an object detection model trained with TensorFlow to SageMaker. I was able to deploy it without specifying any entry point during model creation, but it turns out that this only works for small images (SageMaker's limit is 5MB). The code I used for this is:

from sagemaker.tensorflow.serving import Model

# Initialize model ...
model = Model(
    model_data= s3_path_for_model,
    role=sagemaker_role,
    framework_version="1.14",
    env=env)

# Deploy model ...
predictor = model.deploy(initial_instance_count=1,
                         instance_type='ml.t2.medium')


# Test using an image ...
import cv2
import numpy as np

image_content = cv2.imread("PATH_TO_IMAGE",
                           1).astype('uint8').tolist()
body = {"instances": [{"inputs": image_content}]}

# Works fine for small images ...
# I get predictions perfectly with this ...
results = predictor.predict(body)

So I searched around and found that I need to pass an entry_point to Model() in order to get predictions for larger images. Something like:

model = Model(
        entry_point="inference.py",
        dependencies=["requirements.txt"],
        model_data=s3_path_for_model,
        role=sagemaker_role,
        framework_version="1.14",
        env=env
)

But doing so results in FileNotFoundError: [Errno 2] No such file or directory: 'inference.py'. Some help here, please. I am using the sagemaker-python-sdk. My folder structure is:

model
    |__ 001
          |__saved_model.pb
          |__variables
                |__<contents here>

    |__ code
          |__inference.py
          |__requirements.txt

Note: I have also tried ../code/inference.py and /code/inference.py.
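For what it's worth, the SDK resolves entry_point against the local working directory (or the directory given via source_dir), not against the contents of the model archive on S3, so a relative path like code/inference.py must exist on the machine running the deployment code. Another common approach is to bundle the code/ directory into the model.tar.gz itself, matching the layout shown above. A minimal sketch of that repackaging (paths and function name are assumptions, not from the question):

```python
import tarfile

def package_model(model_dir="model", output="model.tar.gz"):
    """Bundle a local model directory into the model.tar.gz layout
    the TensorFlow Serving container expects:

        model.tar.gz
        |-- 001/            (saved_model.pb, variables/)
        |-- code/           (inference.py, requirements.txt)
    """
    with tarfile.open(output, "w:gz") as tar:
        # arcname="." places 001/ and code/ at the archive root
        # instead of nesting them under the model_dir name.
        tar.add(model_dir, arcname=".")
    return output
```

The resulting archive can then be uploaded to S3 and passed as model_data, at which point the container picks up code/inference.py on its own.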

python amazon-web-services tensorflow tensorflow-serving amazon-sagemaker
1 Answer

5MB is a hard limit for real-time endpoints.
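Given that limit, one workaround is to keep the request body small (for example, sending an S3 URI instead of raw pixels) and do the heavy lifting in inference.py. The SageMaker TensorFlow Serving container looks for input_handler and output_handler hooks in that file; the sketch below shows their shape (the pass-through logic is an assumption, not the asker's actual preprocessing):

```python
# inference.py -- pre/post-processing hooks for the SageMaker
# TensorFlow Serving container.
import json

def input_handler(data, context):
    """Deserialize the request body before it is sent to TF Serving."""
    if context.request_content_type == "application/json":
        payload = json.loads(data.read().decode("utf-8"))
        # Real code could resolve an S3 URI in the payload into image
        # pixels here; this sketch just forwards the instances as-is.
        return json.dumps({"instances": payload["instances"]})
    raise ValueError(
        "Unsupported content type: {}".format(context.request_content_type))

def output_handler(data, context):
    """Return TF Serving's response to the client unchanged."""
    return data.content, context.accept_header
```

requirements.txt in the same code/ directory lists any extra packages the handlers need; the container installs them at startup.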
