TF - how do I correctly set up a model signature for serving with Docker?


I am trying to understand how to set up a TF model for serving with Docker. I managed to install Docker, and I know how to export a trained model as a .pb. What I do not understand is how to correctly define the model signature for serving. I just want to call the trained model locally via Docker. Could you explain, using the example below, what I need to change?

I am following these steps:

1) Create the directory /tmp/serving_minimal and cd into it in the terminal: $ cd /tmp/serving_minimal

2) Save the following code in /tmp/serving_minimal as generate_model.py:

import numpy as np
import tensorflow as tf
import os, shutil

#%% Data

# Input (2D)
x = np.array([[x1,x2] for x1 in np.linspace(10,20,4) for x2 in np.linspace(-7,-3,3)])

# Output (3D)
f = np.array([[np.sin(np.sum(xx)),np.cos(np.sum(xx)),np.cos(np.sum(xx))**2] for xx in x])

#%% Model

print('**********************************************')
print('TF - save')

# Dimension of input x and output f
d_x = x.shape[-1]
d_f = f.shape[-1]

# Placeholders
x_p = tf.placeholder(tf.float64,[None,d_x],'my_x_p')
f_p = tf.placeholder(tf.float64,[None,d_f],'my_f_p')

# Model
model = x_p
model = tf.layers.dense(model,7,tf.tanh)
model = tf.layers.dense(model,5,tf.tanh)
model = tf.layers.dense(model,d_f,None)
model = tf.identity(model,'my_model')

# Session
sess = tf.Session()
sess.run(tf.global_variables_initializer())

# Evaluate for later check of serving
f_model = sess.run(model,{x_p:x})
folder = 'data'
if not os.path.exists(folder):
    os.mkdir(folder)
np.savetxt('data/x.dat',x)
np.savetxt('data/f_model.dat',f_model)

# Save model
folder = 'saved/model/001'
if os.path.exists(folder):
    shutil.rmtree(folder)
    print('Old model deleted')
saver = tf.saved_model.builder.SavedModelBuilder(folder)
############################################
# HOW DO I SET UP THE SIGNATURE CORRECTLY?
############################################
info_input = tf.saved_model.utils.build_tensor_info(x_p)
info_output = tf.saved_model.utils.build_tensor_info(model)
signature = tf.saved_model.signature_def_utils.build_signature_def(
        inputs={'x':info_input}
        ,outputs={'f':info_output}
        ,method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME
        )
saver.add_meta_graph_and_variables(
        sess
        ,[tf.saved_model.tag_constants.SERVING]
        ,signature_def_map={'predict':signature}
        ####################################################################
        ### WHAT DO I NEED TO PUT HERE IN ORDER TO CALL THE MODEL LATER ON 
        ### WHILE SERVING WITH DOCKER AND HOW DO I CALL IT IN DOCKER??
        ####################################################################
        )
saver.save()

# Close and clean up
sess.close()
tf.reset_default_graph()

#%% Load in Python and check

print('**********************************************')
print('TF - load in Python')

# Session
sess = tf.Session()

# Load
tf.saved_model.loader.load(
        sess
        ,[tf.saved_model.tag_constants.SERVING]
        ,folder
        )

# Extract operations from graph
graph = tf.get_default_graph()
x_p = graph.get_tensor_by_name('my_x_p:0')
f_p = graph.get_tensor_by_name('my_f_p:0')
model = graph.get_tensor_by_name('my_model:0')

# Evaluate model
f_model2 = sess.run(model,{x_p:x})
print(f_model - f_model2)

# Close and clean up
sess.close()
tf.reset_default_graph()

4) Run the script in the terminal: $ python generate_model.py (this exports the model and loads it back in Python as a check)

5) Start Docker and check that it is running in the terminal: $ sudo docker ps

6) Run the model in Docker:

$ sudo docker run \
    -p 8501:8501 \
    --name my_container \
    --mount type=bind,source=/tmp/serving_minimal/saved/model,target=/models/model1 \
    -e MODEL_NAME=model1 \
    -t tensorflow/serving &
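For reference, TensorFlow Serving expects the mounted model base path to contain one numeric version subdirectory per export, which is what the script above produces. The layout below simply mirrors the paths used in this question and the files SavedModelBuilder writes:

/tmp/serving_minimal/saved/model      <- mounted as /models/model1 inside the container
└── 001
    ├── saved_model.pb
    └── variables
        ├── variables.data-00000-of-00001
        └── variables.index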

7) Check that the model is live:

$ sudo docker ps
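Besides docker ps, the model status can also be queried over the same REST port; assuming the container from step 6 is up, a quick check is:

$ curl http://localhost:8501/v1/models/model1

which should list the loaded version with state AVAILABLE once serving has started.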

8) [ERROR] Try to evaluate the live model:

$ curl -d '{"x": [[1.0,2.0],[10.0,20.0]]}' -X POST http://localhost:8501/v1/models/model1:predict

The error at step 8 is:

{ "error": "Serving signature name: \"serving_default\" not found in signature def" }

But I do not understand the TF signature-definition calls well enough to know what to do about it. Can you tell me what needs to be corrected? Thanks!


python docker tensorflow tensorflow-serving
1 Answer

0 votes

See this line in your training code:

signature_def_map={'predict':signature}
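The error means TensorFlow Serving looked up the default signature name "serving_default", while the exported model only defines a signature under the key 'predict'. A minimal sketch of the two usual fixes, assuming the TF 1.x export code and the standard TensorFlow Serving REST API used above. Either register the signature under the default key when exporting:

saver.add_meta_graph_and_variables(
        sess
        ,[tf.saved_model.tag_constants.SERVING]
        # DEFAULT_SERVING_SIGNATURE_DEF_KEY is the string 'serving_default',
        # which is what the plain :predict endpoint looks for by default
        ,signature_def_map={
            tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature
            }
        )

or keep the 'predict' key and name it explicitly in the request; note that the REST predict API expects the batch under "instances" (or a named dict under "inputs"), not under a bare "x" key:

$ curl -d '{"signature_name": "predict", "instances": [[1.0,2.0],[10.0,20.0]]}' \
    -X POST http://localhost:8501/v1/models/model1:predict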