Problem while using TensorFlow Serving with Docker

Problem description

I am serving TensorFlow with Docker. I ran the following commands from the official TensorFlow Serving documentation (in Windows PowerShell).

# Pull the TensorFlow Serving Docker image
docker pull tensorflow/serving
# Clone the TensorFlow Serving repository, which contains the demo models
git clone https://github.com/tensorflow/serving
# Path to the demo models inside the cloned repository
Set-Variable -Name "TESTDATA" -Value "$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata"
# Start TensorFlow Serving, mount the demo model, and expose the REST API port
docker run -t --rm -p 8501:8501 -v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" -e MODEL_NAME=half_plus_two tensorflow/serving

After running the commands above, I got this output:

2020-05-08 04:50:41.577978: I tensorflow_serving/model_servers/server.cc:86] Building single TensorFlow model file config:  model_name: half_plus_two model_base_path: /models/half_plus_two
2020-05-08 04:50:41.581575: I tensorflow_serving/model_servers/server_core.cc:462] Adding/updating models.
2020-05-08 04:50:41.581678: I tensorflow_serving/model_servers/server_core.cc:573]  (Re-)adding model: half_plus_two
2020-05-08 04:50:41.780628: I tensorflow_serving/core/basic_manager.cc:739] Successfully reserved resources to load servable {name: half_plus_two version: 123}
2020-05-08 04:50:41.780738: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: half_plus_two version: 123}
2020-05-08 04:50:41.780778: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: half_plus_two version: 123}
2020-05-08 04:50:41.781020: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /models/half_plus_two/00000123
2020-05-08 04:50:41.793200: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2020-05-08 04:50:41.793300: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:264] Reading SavedModel debug info (if present) from: /models/half_plus_two/00000123
2020-05-08 04:50:41.797324: I external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-05-08 04:50:41.844706: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:203] Restoring SavedModel bundle.
2020-05-08 04:50:41.881278: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:152] Running initialization op on SavedModel bundle at path: /models/half_plus_two/00000123
2020-05-08 04:50:41.887881: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:333] SavedModel load for tags { serve }; Status: success: OK. Took 106866 microseconds.
2020-05-08 04:50:41.889403: I tensorflow_serving/servables/tensorflow/saved_model_warmup.cc:105] No warmup data file found at /models/half_plus_two/00000123/assets.extra/tf_serving_warmup_requests
2020-05-08 04:50:41.895569: I tensorflow_serving/core/loader_harness.cc:87] Successfully loaded servable version {name: half_plus_two version: 123}
2020-05-08 04:50:41.901866: I tensorflow_serving/model_servers/server.cc:358] Running gRPC ModelServer at 0.0.0.0:8500 ...
[warn] getaddrinfo: address family for nodename not supported
[evhttp_server.cc : 238] NET_LOG: Entering the event loop ...
2020-05-08 04:50:41.907795: I tensorflow_serving/model_servers/server.cc:378] Exporting HTTP/REST API at:localhost:8501 ...

I have been waiting an hour for it to finish so I can run the next command. What do you think I should do? Any ideas would be appreciated.

docker tensorflow deep-learning tensorflow-serving
1 Answer

I believe your server is already running correctly. Just open a new terminal window and you can send HTTP requests to it. I checked the documentation: what you did is correct, and those log lines are exactly what you should expect.
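For example, assuming the container from your question is still running and publishing port 8501, checking the model's status from a second PowerShell window could look like this (half_plus_two is the model name set via MODEL_NAME above):

# Query the model status endpoint of the TensorFlow Serving REST API;
# it should report version 123 of half_plus_two in state AVAILABLE.
Invoke-RestMethod -Uri "http://localhost:8501/v1/models/half_plus_two"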

TensorFlow Serving documentation

Just follow the next steps in the documentation:

# Query the model using the predict API
curl -d '{"instances": [1.0, 2.0, 5.0]}' \
-X POST http://localhost:8501/v1/models/half_plus_two:predict
# Returns => { "predictions": [2.5, 3.0, 4.5] }
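
Note that the command above is written for a Unix shell; in Windows PowerShell, the backslash is not a line-continuation character and curl may be an alias for Invoke-WebRequest. A rough equivalent of the same request, assuming the same model name and port, would be:

# Send the same predict request using PowerShell's Invoke-RestMethod
Invoke-RestMethod -Method Post `
  -Uri "http://localhost:8501/v1/models/half_plus_two:predict" `
  -ContentType "application/json" `
  -Body '{"instances": [1.0, 2.0, 5.0]}'
# The parsed response should have a predictions property of 2.5, 3.0, 4.5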