How do I host and invoke multiple models for inference on an NVIDIA Triton server?


Based on the documentation here, https://github.com/aws/amazon-sagemaker-examples/blob/main/inference/nlp/realtime/triton/multi-model/bert_trition-backend/bert_pytorch_trt_backend_MME.ipynb, I have set up a multi-model endpoint using a GPU instance type and the NVIDIA Triton container. In the setup from that link, the model is invoked by passing token IDs rather than passing the text directly to the model. If the input type is set to a string data type in config.pbtxt (sample config below), can the text be passed directly to the model? I'm looking for any examples of this.

config.pbtxt

name: "..."
platform: "..."
max_batch_size : 0
input [
  {
    name: "INPUT_0"
    data_type: TYPE_STRING
    ...
  }
]
output [
  {
    name: "OUTPUT_1"
    ....
  }
]
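
From what I understand, Triton's KServe V2 HTTP/JSON protocol represents a TYPE_STRING tensor with the BYTES datatype on the wire, with the raw strings placed directly in the JSON "data" array. So a request against the config above might look roughly like the sketch below (INPUT_0 is the name from the config; the shape and the example text are placeholders I made up):

import json

# Rough sketch of a V2-protocol request for the string input declared above.
# TYPE_STRING in config.pbtxt corresponds to the "BYTES" datatype in the
# JSON request body, and the raw strings go straight into "data".
payload_text = {
    "inputs": [
        {
            "name": "INPUT_0",    # input name from the config.pbtxt above
            "shape": [1],         # placeholder shape for a single string
            "datatype": "BYTES",  # wire datatype for TYPE_STRING
            "data": ["some raw text to classify"],
        }
    ]
}
body = json.dumps(payload_text)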

Multi-model invocation



import json
import boto3

# sagemaker-runtime client and the name of the Triton multi-model endpoint
# created earlier in the notebook
client = boto3.client("sagemaker-runtime")
endpoint_name = "<your-triton-mme-endpoint>"

text_triton = "Triton Inference Server provides a cloud and edge inferencing solution optimized for both CPUs and GPUs."
# tokenize_text() is the notebook helper that returns padded token IDs and
# the matching attention mask for the input text.
input_ids, attention_mask = tokenize_text(text_triton)

payload = {
    "inputs": [
        {"name": "token_ids", "shape": [1, 128], "datatype": "INT32", "data": input_ids},
        {"name": "attn_mask", "shape": [1, 128], "datatype": "INT32", "data": attention_mask},
    ]
}

response = client.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType="application/octet-stream",
    Body=json.dumps(payload),
    TargetModel=f"bert-{i}.tar.gz",  # i indexes the model archive packed into the MME
)
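
If the string payload sketched under the config.pbtxt section is valid, I assume the call itself would stay the same apart from the body, something like the sketch below (client, endpoint_name and the bert-{i}.tar.gz archive name are the same ones as in the snippet above, and the response parsing assumes the server replies with plain V2 JSON):

# Sketch of the same invocation, but sending the raw-text payload_text from
# the config.pbtxt section instead of token IDs.
response = client.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType="application/octet-stream",
    Body=json.dumps(payload_text),
    TargetModel=f"bert-{i}.tar.gz",  # same target model archive as above
)
result = json.loads(response["Body"].read().decode("utf-8"))
print(result["outputs"][0]["data"])  # V2 responses carry outputs as a list of tensors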

machine-learning nvidia amazon-sagemaker