Importing a custom Python module in an Azure ML deployment environment


I have an sklearn k-means model. I train the model and save it to a pickle file so that I can deploy it later using the Azure ML library. The model I am training uses a custom feature encoder called MultiColumnLabelEncoder. The pipeline is defined as follows:

from sklearn.cluster import KMeans
from sklearn.pipeline import Pipeline
import joblib

# Pipeline
kmeans = KMeans(n_clusters=3, random_state=0)
pipe = Pipeline([
    ("encoder", MultiColumnLabelEncoder()),
    ("k-means", kmeans),
])
# Training the pipeline
model = pipe.fit(visitors_df)
prediction = model.predict(visitors_df)
# Save the model in pickle/joblib format
filename = 'k_means_model.pkl'
joblib.dump(model, filename)
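The question does not include the encoder's definition. For context, a minimal sketch of what such a transformer might look like (the class body here is an assumption, not the asker's actual code) is:

```python
# Hypothetical sketch of a multi-column label encoder: wraps sklearn's
# LabelEncoder so that every (or a selected subset of) DataFrame column
# is label-encoded inside a Pipeline. The real MultiColumnLabelEncoder
# referenced in the question may differ.
import pandas as pd
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.preprocessing import LabelEncoder

class MultiColumnLabelEncoder(BaseEstimator, TransformerMixin):
    def __init__(self, columns=None):
        self.columns = columns  # columns to encode; None means all columns

    def fit(self, X, y=None):
        cols = self.columns if self.columns is not None else X.columns
        # One LabelEncoder per column, fitted on that column's values
        self.encoders_ = {c: LabelEncoder().fit(X[c]) for c in cols}
        return self

    def transform(self, X):
        out = X.copy()
        for c, enc in self.encoders_.items():
            out[c] = enc.transform(out[c])
        return out
```

Because the class is referenced by name when the pipeline is unpickled, score.py must be able to import it from the same module path used at training time, which is exactly what the error below is about.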

The model saves fine. The deployment steps are the same as in this link:

https://notebooks.azure.com/azureml/projects/azureml-getting-started/html/how-to-use-azureml/deploy-to-cloud/model-register-and-deploy.ipynb

But deployment always fails with this error:

  File "/var/azureml-server/create_app.py", line 3, in <module>
    from app import main
  File "/var/azureml-server/app.py", line 27, in <module>
    import main as user_main
  File "/var/azureml-app/main.py", line 19, in <module>
    driver_module_spec.loader.exec_module(driver_module)
  File "/structure/azureml-app/score.py", line 22, in <module>
    importlib.import_module("multilabelencoder")
  File "/azureml-envs/azureml_b707e8c15a41fd316cf6c660941cf3d5/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
ModuleNotFoundError: No module named 'multilabelencoder'

I know that pickle/joblib has some issues unpickling custom classes like MultiColumnLabelEncoder. That is why I defined the class in a separate Python script (which I also executed). I import this custom class in the training script, the deployment script, and the scoring file (score.py), but the import in score.py fails. So my question is: how can I import a custom Python module into the Azure ML deployment environment?

Thanks in advance.

Edit: Here is my .yml file:

name: project_environment
dependencies:
  # The python interpreter version.
  # Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2

- pip:
  - multilabelencoder==1.0.4
  - scikit-learn
  - azureml-defaults==1.0.74.*
  - pandas
channels:
- conda-forge
Tags: python, pickle, azure-machine-learning-service

4 Answers
4 votes

In fact, the solution was to publish my custom class MultiColumnLabelEncoder as a pip package (it can be found via pip install multilabelencoder==1.0.5). Then I passed the pip package to the .yml file, or to the InferenceConfig of the Azure ML environment. In the score.py file, I imported the class as follows:

import os
import joblib
from multilabelencoder import multilabelencoder

def init():
    global model

    # Call the custom encoder to be used for unpickling the model
    encoder = multilabelencoder.MultiColumnLabelEncoder()
    # Get the path where the deployed model can be found.
    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'k_means_model_45.pkl')
    model = joblib.load(model_path)

Then the deployment succeeded. One more important thing: I had to use the same pip package (multilabelencoder) in the training pipeline, as follows:

from multilabelencoder import multilabelencoder 
pipe = Pipeline([
    ("encoder", multilabelencoder.MultiColumnLabelEncoder(columns)),
    ('k-means', kmeans),
])
#Training the pipeline
trainedModel = pipe.fit(df)

4 votes

I was facing the same issue: trying to deploy a model that depends on some of my own scripts, and getting the error:

 ModuleNotFoundError: No module named 'my-own-module-name'

I found this "private wheel files" solution in the MS documentation, and it works. The difference from the solution above is that you do not need to publish your scripts to pip. Many people may be in the same situation, where for some reason you cannot or do not want to publish your scripts. Instead, your own wheel file is stored in your own blob storage.

Following the documentation, I took the steps below and it worked for me. Now I can deploy models that depend on my own scripts.

  1. Package the scripts your model depends on into a wheel file, saved locally:

    "your_path/your-wheel-file-name.whl"

  2. Follow the instructions for the "private wheel files" solution in the MS documentation. Below is the code that worked for me.


from azureml.core.environment import Environment
from azureml.core.conda_dependencies import CondaDependencies

whl_url = Environment.add_private_pip_wheel(workspace=ws, file_path="your_path/your-wheel-file-name.whl")

myenv = CondaDependencies()
myenv.add_pip_package("scikit-learn==0.22.1")
myenv.add_pip_package("azureml-defaults")
myenv.add_pip_package(whl_url)

with open("myenv.yml","w") as f:
    f.write(myenv.serialize_to_string())
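For step 1 above, the wheel itself can be produced with a minimal `setup.py` (a sketch; the package and file names are placeholders for your own module):

```python
# Minimal setup.py for packaging a custom module into a wheel.
# Assumes a directory layout like:
#   multilabelencoder/
#       __init__.py
#       multilabelencoder.py
from setuptools import setup, find_packages

setup(
    name="multilabelencoder",     # placeholder package name
    version="1.0.0",
    packages=find_packages(),     # picks up multilabelencoder/
)

# Build the wheel from the project root:
#   python setup.py bdist_wheel
# which writes e.g. dist/multilabelencoder-1.0.0-py3-none-any.whl
```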

My environment file now looks like:

name: project_environment
dependencies:
  # The python interpreter version.
  # Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2

- pip:
  - scikit-learn==0.22.1
  - azureml-defaults
  - https://myworkspaceid.blob.core/azureml/Environment/azureml-private-packages/my-wheel-file-name.whl
channels:
- conda-forge

I am new to Azure ML, learning by doing and by talking with the community. This solution works well for me; I hope it helps.


0 votes

Another approach that worked for me is to register a "model_src" directory containing both the pickled model and the custom module, instead of registering only the pickled model. Then, during deployment, point the scoring script at the custom module, e.g. using Python's os module. Example below using SDK v1.

Example "model_src" directory:

model_src
   │
   ├─ utils   # your custom module
   │    └─ multilabelencoder.py
   │
   └─ models  
        ├─ score.py
        └─ k_means_model_45.pkl  # your pickled model file

Registering "model_src" with SDK v1:

from azureml.core.model import Model

model = Model.register(model_path="./model_src",
    model_name="kmeans",
    description="model registered as a directory",
    workspace=ws
)

Accordingly, when defining the inference configuration:

from azureml.core import Environment
from azureml.core.model import InferenceConfig

deployment_folder = './model_src'
script_file = 'models/score.py'
service_env = Environment.from_conda_specification("kmeans-service",
    './environment.yml'  # wherever the yml is located locally
)
inference_config = InferenceConfig(source_directory=deployment_folder,
    entry_script=script_file,
    environment=service_env
)
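To complete the picture, the configuration above would typically be handed to a deploy call along the following lines (a sketch: the service name and ACI sizing here are assumptions, not part of the original answer):

```python
# Deploy the registered model with the inference config built above,
# targeting Azure Container Instances (SDK v1).
from azureml.core.model import Model
from azureml.core.webservice import AciWebservice

aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
service = Model.deploy(workspace=ws,
    name="kmeans-service",          # assumed service name
    models=[model],                 # the Model registered from ./model_src
    inference_config=inference_config,
    deployment_config=aci_config)
service.wait_for_deployment(show_output=True)
```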

Contents of the scoring script, e.g. score.py:

# Specify model_src as your parent directory
import os
deploy_dir = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'model_src')

# Import the custom module
import sys
sys.path.append("{0}/utils".format(deploy_dir))
from multilabelencoder import MultiColumnLabelEncoder

import joblib

def init():
    global model

    # Call the custom encoder to be used for unpickling the model
    encoder = MultiColumnLabelEncoder()  # Use as intended downstream

    # Load the deployed model
    model = joblib.load('{}/models/k_means_model_45.pkl'.format(deploy_dir))

This approach gives me the flexibility to import any number of custom scripts in my scoring script.


0 votes

Try:

# Additional imports
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.runconfig import RunConfiguration

# YAML content for the base dependencies
yaml_content = """
name: model-env
channels:
  - conda-forge
dependencies:
  - python=3.10.11
  - numpy
  - pip
  - scikit-learn
  - scipy
  - pandas
  - pip:
    - azureml-defaults
    - tempfile2
    - xlrd
    - mlflow
    - azureml-mlflow
"""

# Create a CondaDependencies object from the YAML
with open("model_env.yml", "w") as f:
    f.write(yaml_content)
conda_dep = CondaDependencies(conda_dependencies_file_path="model_env.yml")

# Upload the private wheel file and add it as a pip package
private_wheel_path = "path_to_your_private_wheel_file.whl"
whl_url = Environment.add_private_pip_wheel(workspace=workspace,
                                            file_path=private_wheel_path)
conda_dep.add_pip_package(whl_url)

# Build and register the environment
experiment_env = Environment("experiment_env")
experiment_env.python.conda_dependencies = conda_dep
experiment_env.register(workspace=workspace)

# Fetch the registered environment
registered_env = Environment.get(workspace, 'experiment_env')

# Create a new runconfig object for the pipeline
pipeline_run_config = RunConfiguration()

# Use the compute target
pipeline_run_config.target = pipeline_cluster

# Assign the environment to the compute
pipeline_run_config.environment = registered_env