How to run TensorFlow Serving as an Azure Container Instance and pass docker commands

I normally run the TensorFlow Serving Docker image with this command:

docker run -p 8500:8500 \
--mount type=bind,source=/mnt/docker/models,target=/models \
--mount type=bind,source=/mnt/docker/configs/models.config,target=/models/models.config \
-t tensorflow/serving \
--model_config_file=/models/models.config &
sleep 2m

I would like to deploy the same image from Docker Hub as a container instance on Azure using az container create, and pass the same command-line arguments as above.

I have tried this several times and run into a few errors.

For example: run: 1: run: docker: not found

What is the correct way to do this?

azure tensorflow-serving azure-container-instances docker-run docker-command
1 Answer

The best way I have found is through the command line, either from your OS terminal or from the Azure Portal (Cloud Shell). AFAIK you cannot invoke docker directly inside a container instance, which is why you see run: docker: not found.

Skip this step if you are using the portal.

1. Log in

From your OS terminal (you will need the Azure CLI installed):

https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest

az login
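
If your account has access to more than one subscription, you may also want to select the one the container should live in first (the subscription name below is a placeholder):

    az account set --subscription "MY SUBSCRIPTION"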

2. Create the container

From this point on, you can use the same commands in both the portal and the terminal.

Then create the container with the Azure CLI:

    az container create
        -g {RESOURCE GROUP TO USE} // you'll need a resource group to contain the container group
        -n {NAME FOR CONTAINER GROUP} // just a name for the container group
        --image tensorflow/serving // the image to pull from Docker Hub
        --ip-address Public // so the server can be reached externally
        --ports 8500 8501 // 8500 for gRPC, 8501 for REST; open the ones you need
        --cpu 1 // core count, in whole numbers
        --memory 2.5 // memory in GB, as a float
        --dns-name-label my-tf-server // gives the container a stable DNS name
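
Note the // comments above are annotations, not valid shell syntax, so strip them before running the command. Once the container group exists, you can look up its public address like this (resource and container names are the placeholders from above):

    az container show -g {RESOURCE GROUP TO USE} -n {NAME FOR CONTAINER GROUP} --query ipAddress.fqdn -o tsv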

3. Mount a drive

If you need to mount an Azure file share to read your models from, these are the flags you will need (appended to the az container create command above):

        --azure-file-volume-account-name {ACC NAME}
        --azure-file-volume-account-key {ACC KEY} // you should probably use an env variable to keep this secure
        --azure-file-volume-share-name {FILE SHARE NAME} // file share to mount
        --azure-file-volume-mount-path /models // path to mount it at
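
In case the file share does not exist yet, a sketch of fetching the account key, creating the share, and uploading models with the Azure CLI (the storage account mytfstorage and the share name models are placeholder values, assuming the storage account already exists):

    az storage account keys list -g {RESOURCE GROUP TO USE} --account-name mytfstorage --query "[0].value" -o tsv
    az storage share create --account-name mytfstorage --account-key {ACC KEY} --name models
    az storage file upload-batch --destination models --source ./models --account-name mytfstorage --account-key {ACC KEY}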

4. Custom startup command

This is for cases where you need a custom startup command, e.g. to load custom models or a config file:

        --command-line "tensorflow_model_server --port=8500 --rest_api_port=8501 --model_config_file=/models/models.config"
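
Putting steps 2-4 together, a full command might look like this (all names are placeholders for your own values, and the account key is passed via an environment variable as suggested above):

    az container create \
        -g {RESOURCE GROUP TO USE} \
        -n {NAME FOR CONTAINER GROUP} \
        --image tensorflow/serving \
        --ip-address Public \
        --ports 8500 8501 \
        --cpu 1 \
        --memory 2.5 \
        --dns-name-label my-tf-server \
        --azure-file-volume-account-name mytfstorage \
        --azure-file-volume-account-key "$STORAGE_KEY" \
        --azure-file-volume-share-name models \
        --azure-file-volume-mount-path /models \
        --command-line "tensorflow_model_server --port=8500 --rest_api_port=8501 --model_config_file=/models/models.config"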

The rest is handled by Azure itself :) For more info you can check this out:

https://docs.microsoft.com/en-us/cli/azure/container?view=azure-cli-latest#az-container-start
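
If the server does not come up as expected, the container's stdout/stderr is also available through the CLI, which is handy for spotting tensorflow_model_server startup errors (names are the placeholders from above):

    az container logs -g {RESOURCE GROUP TO USE} -n {NAME FOR CONTAINER GROUP}
    az container attach -g {RESOURCE GROUP TO USE} -n {NAME FOR CONTAINER GROUP}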

5. More on TF Serving flags

Here is the full list of tensorflow_model_server flags that can be used inside the --command-line string:

usage: tensorflow_model_server
Flags:
    --port=8500                         int32   Port to listen on for gRPC API
    --grpc_socket_path=""               string  If non-empty, listen to a UNIX socket for gRPC API on the given path. Can be either relative or absolute path.
    --rest_api_port=0                   int32   Port to listen on for HTTP/REST API. If set to zero HTTP/REST API will not be exported. This port must be different than the one specified in --port.
    --rest_api_num_threads=16           int32   Number of threads for HTTP/REST API processing. If not set, will be auto set based on number of CPUs.
    --rest_api_timeout_in_ms=30000      int32   Timeout for HTTP/REST API calls.
    --enable_batching=false             bool    enable batching
    --batching_parameters_file=""       string  If non-empty, read an ascii BatchingParameters protobuf from the supplied file name and use the contained values instead of the defaults.
    --model_config_file=""              string  If non-empty, read an ascii ModelServerConfig protobuf from the supplied file name, and serve the models in that file. This config file can be used to specify multiple models to serve and other advanced parameters including non-default version policy. (If used, --model_name, --model_base_path are ignored.)
    --model_name="default"              string  name of model (ignored if --model_config_file flag is set)
    --model_base_path=""                string  path to export (ignored if --model_config_file flag is set, otherwise required)
    --max_num_load_retries=5            int32   maximum number of times it retries loading a model after the first failure, before giving up. If set to 0, a load is attempted only once. Default: 5
    --load_retry_interval_micros=60000000   int64   The interval, in microseconds, between each servable load retry. If set negative, it doesn't wait. Default: 1 minute
    --file_system_poll_wait_seconds=1   int32   Interval in seconds between each poll of the filesystem for new model version. If set to zero poll will be exactly done once and not periodically. Setting this to negative value will disable polling entirely causing ModelServer to indefinitely wait for a new model at startup. Negative values are reserved for testing purposes only.
    --flush_filesystem_caches=true      bool    If true (the default), filesystem caches will be flushed after the initial load of all servables, and after each subsequent individual servable reload (if the number of load threads is 1). This reduces memory consumption of the model server, at the potential cost of cache misses if model files are accessed after servables are loaded.
    --tensorflow_session_parallelism=0  int64   Number of threads to use for running a Tensorflow session. Auto-configured by default. Note that this option is ignored if --platform_config_file is non-empty.
    --tensorflow_intra_op_parallelism=0 int64   Number of threads to use to parallelize the execution of an individual op. Auto-configured by default. Note that this option is ignored if --platform_config_file is non-empty.
    --tensorflow_inter_op_parallelism=0 int64   Controls the number of operators that can be executed simultaneously. Auto-configured by default. Note that this option is ignored if --platform_config_file is non-empty.
    --ssl_config_file=""                string  If non-empty, read an ascii SSLConfig protobuf from the supplied file name and set up a secure gRPC channel
    --platform_config_file=""           string  If non-empty, read an ascii PlatformConfigMap protobuf from the supplied file name, and use that platform config instead of the Tensorflow platform. (If used, --enable_batching is ignored.)
    --per_process_gpu_memory_fraction=0.000000  float   Fraction that each process occupies of the GPU memory space the value is between 0.0 and 1.0 (with 0.0 as the default) If 1.0, the server will allocate all the memory when the server starts, If 0.0, Tensorflow will automatically select a value.
    --saved_model_tags="serve"          string  Comma-separated set of tags corresponding to the meta graph def to load from SavedModel.
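
For reference, since --model_config_file is what the question relies on: a minimal ascii ModelServerConfig protobuf looks like this (model name and path are hypothetical):

    model_config_list {
      config {
        name: "my_model"
        base_path: "/models/my_model"
        model_platform: "tensorflow"
      }
    }

With the REST port open as in step 2, a quick model status check against the deployed instance would then be (the region segment depends on where the container group was created):

    curl http://my-tf-server.{REGION}.azurecontainer.io:8501/v1/models/my_model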