gRPC server responds with OS Error, grpc_status: 14

Question

Using the basic gRPC client from the TensorFlow Serving examples to get predictions from a model running in Docker, I get the following response:

        status = StatusCode.UNAVAILABLE
        details = "OS Error"
        debug_error_string = "{"created":"@1580748231.250387313",
            "description":"Error received from peer",
            "file":"src/core/lib/surface/call.cc",
            "file_line":1017,"grpc_message":"OS Error","grpc_status":14}"

This is what my client currently looks like:

import grpc
import tensorflow as tf
import cv2

from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc


def main():
    # Load the image as a NumPy array (BGR, uint8)
    data = cv2.imread('/home/matt/Downloads/cat.jpg')

    # Connect to the TensorFlow Model Server's gRPC endpoint
    channel = grpc.insecure_channel('localhost:8500')
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'model'
    request.model_spec.signature_name = 'serving_default'

    # Flatten the image into a 1 x N tensor proto
    request.inputs['image_bytes'].CopyFrom(
        tf.make_tensor_proto(data, shape=[1, data.size]))
    result = stub.Predict(request, 10.0)  # 10 secs timeout
    print(result)


if __name__ == '__main__':
    main()
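When nothing is listening on the target address (for example, the container is not publishing the gRPC port), calls fail with `StatusCode.UNAVAILABLE` like the error above. A minimal sketch for checking connectivity before calling `Predict`, using `grpc.channel_ready_future` (the address `localhost:8500` is assumed to match the serving setup above):

```python
import grpc

# Wait briefly for the channel to become ready; a timeout here usually
# means nothing is listening on the address (UNAVAILABLE at call time).
channel = grpc.insecure_channel('localhost:8500')
try:
    grpc.channel_ready_future(channel).result(timeout=5)
    print('server reachable')
except grpc.FutureTimeoutError:
    print('server not reachable: is the container publishing port 8500?')
```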

Thanks in advance for your help :)

docker tensorflow grpc tensorflow-serving grpc-python
1 Answer

Providing the solution from the comments section here for the benefit of the community.

The solution is that, before executing the client file, we need to start the TensorFlow Model Server by running a Docker container with the command given below:

docker run -t --rm -p 8501:8501 \
    -v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" \
    -e MODEL_NAME=half_plus_two \
    tensorflow/serving &

Besides invoking the TensorFlow Model Server, this command:

  1. maps the model's local path to the model's path on the server
  2. maps the port used to communicate with the TensorFlow Model Server (port 8500 is exposed for gRPC, and port 8501 for the REST API)
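Note that the command above only publishes port 8501 (REST), while the client connects to `localhost:8500` over gRPC. A variant that also publishes the gRPC port might look like this (a sketch assuming the same `half_plus_two` test model; adjust the path and `MODEL_NAME` for your own model):

```shell
# Publish both the gRPC port (8500) and the REST port (8501)
docker run -t --rm -p 8500:8500 -p 8501:8501 \
    -v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" \
    -e MODEL_NAME=half_plus_two \
    tensorflow/serving &
```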