I would like to enhance my Swagger documentation with details such as optional inputs, including models and example enums. Is this achievable with inference-schema? I haven't come across any examples demonstrating it. When I try to use the input_schema decorator to supply enums and required fields to the json_schema in my request format, Swagger seems to treat everything as required.
Any guidance on how to achieve this would be greatly appreciated.
To make a parameter optional, you need to pass a StandardPythonParameterType named GlobalParameters to the input_schema decorator. Below is the sample score.py I am using.

Create the StandardPythonParameterType parameters:

method_sample = StandardPythonParameterType("predict")
sample_global_params = StandardPythonParameterType({"method": method_sample})

Then, in the decorator, provide it as follows:

@input_schema('GlobalParameters', sample_global_params, convert_to_provided_type=False)

Note that GlobalParameters is case-sensitive here.
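With this schema in place, the deployed endpoint accepts requests both with and without GlobalParameters. A minimal sketch of the two request bodies a client could send (the top-level keys match the decorator names above; the data values are illustrative):

```python
import json

# Request that omits GlobalParameters entirely: the service falls back
# to the default {"method": "predict"} declared on run().
minimal_payload = {"Inputs": {"data": [1, 2]}}

# Request that overrides the optional parameter explicitly.
full_payload = {
    "Inputs": {"data": [1, 2]},
    "GlobalParameters": {"method": "predict_proba"},
}

# The JSON string that would be POSTed to the scoring URI.
body = json.dumps(full_payload)
```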
Code:
import json
import logging
import os
import pickle

import numpy as np
import pandas as pd
import joblib

import azureml.automl.core
from azureml.automl.core.shared import logging_utilities, log_server
from azureml.telemetry import INSTRUMENTATION_KEY
from inference_schema.schema_decorators import input_schema, output_schema
from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType
from inference_schema.parameter_types.pandas_parameter_type import PandasParameterType
from inference_schema.parameter_types.standard_py_parameter_type import StandardPythonParameterType

# Sample objects define the Swagger schema for the inputs and outputs.
data_sample = StandardPythonParameterType([1, 2])
input_sample = StandardPythonParameterType({'data': data_sample})
method_sample = StandardPythonParameterType("predict")
sample_global_params = StandardPythonParameterType({"method": method_sample})
result_sample = NumpyParameterType(np.array(["example_value"]))
output_sample = StandardPythonParameterType({'Results': result_sample})

try:
    log_server.enable_telemetry(INSTRUMENTATION_KEY)
    log_server.set_verbosity('INFO')
    logger = logging.getLogger('azureml.automl.core.scoring_script_v2')
except Exception:
    pass


def init():
    global model
    # This name is model.id of the model that we want to deploy; deserialize the model file back
    # into a sklearn model
    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'model.pkl')
    path = os.path.normpath(model_path)
    path_split = path.split(os.sep)
    log_server.update_custom_dimensions({'model_name': path_split[-3], 'model_version': path_split[-2]})
    try:
        logger.info("Loading model from path.")
        model = joblib.load(model_path)
        logger.info("Loading successful.")
    except Exception as e:
        logging_utilities.log_traceback(e, logger)
        raise


@input_schema('GlobalParameters', sample_global_params, convert_to_provided_type=False)
@input_schema('Inputs', input_sample)
@output_schema(output_sample)
def run(Inputs, GlobalParameters={"method": "predict"}):
    data = Inputs['data']
    if GlobalParameters.get("method", None) == "predict_proba":
        result = ["Method proba executed", sum(data)]
    elif GlobalParameters.get("method", None) == "predict":
        result = ["Method predict executed", sum(data)]
    else:
        raise Exception(f"Invalid predict method argument received. GlobalParameters: {GlobalParameters}")
    if isinstance(result, pd.DataFrame):
        result = result.values
    if isinstance(result, np.ndarray):
        result = result.tolist()
    return {'Results': result}
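Stripped of the Azure-specific decorators and logging, the run logic above can be exercised locally to confirm that GlobalParameters behaves as an optional argument (a minimal sketch whose body mirrors the score.py above):

```python
def run(Inputs, GlobalParameters={"method": "predict"}):
    data = Inputs['data']
    method = GlobalParameters.get("method")
    if method == "predict_proba":
        result = ["Method proba executed", sum(data)]
    elif method == "predict":
        result = ["Method predict executed", sum(data)]
    else:
        raise ValueError(f"Invalid predict method argument received. GlobalParameters: {GlobalParameters}")
    return {'Results': result}

# Omitting GlobalParameters falls back to the default "predict" method.
print(run({'data': [1, 2]}))  # {'Results': ['Method predict executed', 3]}
print(run({'data': [1, 2]}, {"method": "predict_proba"}))  # {'Results': ['Method proba executed', 3]}
```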
You can refer to this Stack Overflow solution.