I'm trying to use the MultiInference method to check two different versions of a model in a single request (A/B testing). In one case I get the error "Duplicate evaluation of signature: classification", and in the other case I get very strange results.

Example:
Input request:
tasks {
  model_spec {
    name: "stpeter"
    version {
      value: 7
    }
    signature_name: "classification"
  }
  method_name: "tensorflow/serving/classify"
}
tasks {
  model_spec {
    name: "stpeter"
    version {
      value: 8
    }
    signature_name: "classification"
  }
  method_name: "tensorflow/serving/classify"
}
input {
  example_list {
    examples {
      features {
        feature {
          key: "inputs"
          value {
            bytes_list {
              value: "ala.kowalska"
            }
          }
        }
      }
    }
  }
}
Traceback (most recent call last):
  File "ab_test.py", line 146, in <module>
    do_inference(args)
  File "ab_test.py", line 123, in do_inference
    results = stub.MultiInference(request, 10)
  File "/anaconda3/envs/ents/lib/python3.6/site-packages/grpc/_channel.py", line 533, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/anaconda3/envs/ents/lib/python3.6/site-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking
    raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
  status = StatusCode.INVALID_ARGUMENT
  details = "Duplicate evaluation of signature: classification"
  debug_error_string = "{"created":"@1549359403.703597000","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"Duplicate evaluation of signature: classification","grpc_status":3}"
>
In the second case I removed signature_name from the second task. Input request:
tasks {
  model_spec {
    name: "stpeter"
    version {
      value: 7
    }
    signature_name: "classification"
  }
  method_name: "tensorflow/serving/classify"
}
tasks {
  model_spec {
    name: "stpeter"
    version {
      value: 8
    }
  }
  method_name: "tensorflow/serving/classify"
}
input {
  example_list {
    examples {
      features {
        feature {
          key: "inputs"
          value {
            bytes_list {
              value: "ala.kowalska"
            }
          }
        }
      }
    }
  }
}
Results:
results {
  model_spec {
    name: "stpeter"
    version {
      value: 7
    }
    signature_name: "classification"
  }
  classification_result {
    classifications {
      classes {
        label: "BOT"
        score: 0.010155047290027142
      }
      classes {
        label: "HUMAN"
        score: 0.9898449182510376
      }
    }
  }
}
results {
  model_spec {
    name: "stpeter"
    version {
      value: 7
    }
    signature_name: "serving_default"
  }
  classification_result {
    classifications {
      classes {
        label: "BOT"
        score: 0.010155047290027142
      }
      classes {
        label: "HUMAN"
        score: 0.9898449182510376
      }
    }
  }
}
It seems to work fine (no error). But let's take a closer look at the results. We can see that stpeter version 7 answered both tasks (signature_name: "classification" and signature_name: "serving_default"), even though task #2 was defined with version { value: 8 }.
The served model was created with a TensorFlow Estimator and saved with export_savedmodel. We have the following signatures available:
INFO:tensorflow:Signatures INCLUDED in export for Classify: ['serving_default', 'classification']
INFO:tensorflow:Signatures INCLUDED in export for Regress: ['regression']
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']
There seems to be no restriction on which model versions can appear in the request. I also checked inference.proto, but my case doesn't seem to be validated there.
I would really appreciate even a small hint that helps me solve this.
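As a workaround, the A/B comparison can be done with one request per model version instead of a single MultiInference call. A minimal sketch, where classify_fn is a hypothetical stand-in for the actual gRPC Classify RPC (real code would call stub.Classify once per version, setting model_spec.version.value on each request):

```python
# Workaround for A/B testing: issue one classify request per model version
# instead of packing both versions into a single MultiInference call.
# `classify_fn` is a hypothetical stand-in for the real gRPC Classify RPC:
# it takes (version, example) and returns a dict of {label: score}.

def ab_classify(classify_fn, versions, example):
    """Run the same example against several model versions and
    collect the per-version label scores side by side."""
    return {version: classify_fn(version, example) for version in versions}

# Fake stub for illustration only; the scores below are made up.
def fake_classify(version, example):
    human = 0.99 if version == 7 else 0.97  # pretend the versions differ
    return {"HUMAN": human, "BOT": round(1.0 - human, 2)}

scores = ab_classify(fake_classify, [7, 8], "ala.kowalska")
```

Two independent requests cost one extra round trip, but each version is resolved on its own, so there is no ambiguity about which version actually served the example.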
I think the confusion comes from the fact that MultiInference requests are designed to be applied to a single saved model, at the model-version level of granularity. In fact, the model version specified by the first model spec in the request is the only one that matters (see the TFS test cases). It would probably be clearer to enforce the same model version, just as the same model name is already enforced here. The comment there would then be updated to: "All ModelSpecs in a MultiInferenceRequest must access the same model name and version."
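A rough, stdlib-only model of this resolution behavior (an assumption inferred from the results above and the TFS test cases, not the actual implementation): all tasks must share one model name, duplicate signatures are rejected, and only the first task's version is honored.

```python
# Rough model of how TF Serving resolves a MultiInferenceRequest.
# Each task is a (model_name, version, signature_name) tuple.

def resolve_multi_inference(tasks):
    if not tasks:
        raise ValueError("MultiInferenceRequest must contain at least one task")
    first_name, first_version, _ = tasks[0]
    seen_signatures = set()
    resolved = []
    for name, version, signature in tasks:
        if name != first_name:
            # Already enforced by TFS: one model name per request.
            raise ValueError("All ModelSpecs must access the same model name")
        if signature in seen_signatures:
            # The error from the first example above.
            raise ValueError("Duplicate evaluation of signature: " + signature)
        seen_signatures.add(signature)
        # Only the first task's version is honored; later versions are
        # silently ignored, which is why the task pinned to version 8
        # was answered by version 7.
        resolved.append((first_name, first_version, signature))
    return resolved
```

Running this on the second request above reproduces the observed results: both tasks resolve to version 7, one per signature.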
Happy to help with any other questions.