How to share a volume between multiple stages and jobs using containers


For an open-source project, I use Azure Pipelines and run several jobs, each inside a custom Docker container but with different environment variables. Everything works fine, except that I don't have enough debug data to find the root cause of some failures from the logs alone. So I want to publish an artifact (core dump, support bundle, etc.) when a job fails. While publishing a conditional artifact using two stages seems easy, the hard part is using a volume to copy the artifacts from inside the job's container to the host VM. I have tried many alternatives from reading the documentation, but couldn't get it to work, and couldn't find an example online of what I'm looking for. It may be impossible, but I hope it isn't. Let me show a simplified YAML file to make sure we are all on the same page:

```yaml
# This "resources" entry doesn't help, but I tried it.
#
# resources:
#   containers:
#   - container: artifacts-container
#     image: ubuntu:latest
#     volumes:
#     - /mnt/artifacts
#     - /mnt/artifacts:/mnt/artifacts  # tried also this way!

stages:
- stage: Test
  jobs:
  - job: A
    container: 'my-ubuntu-based-docker-container'
    services:  # new stuff, it doesn't work as expected
      artifacts-service:
      # image: ubuntu:latest
      # volumes:
      # - /mnt/artifacts
      #
      # I tried this approach with the "resources" above and still doesn't work
      #
      # services:
      #   artifacts-container: artifacts-container
    pool:
      vmImage: 'ubuntu-20.04'
    variables:
      VAR1: blah
    steps:
    - script: /x/y/z
      displayName: run Z
    - script: touch /mnt/artifacts/hello-$SYSTEM_JOBIDENTIFIER
      displayName: create artifact
  - job: B
    container: 'my-ubuntu-based-docker-container'
    services:  # new stuff, it doesn't work as expected
      artifacts-service:
      # image: ubuntu:latest
      # volumes:
      # - /mnt/artifacts
      #
      # I tried this approach with the "resources" above and still doesn't work
      #
      # services:
      #   artifacts-container: artifacts-container
    pool:
      vmImage: 'ubuntu-20.04'
    variables:
      VAR1: blah-blah
    steps:
    - script: /x/y/z
      displayName: run Z
    - script: touch /mnt/artifacts/hello-$SYSTEM_JOBIDENTIFIER
      displayName: create artifact
- stage: PublishArtifacts
  dependsOn: Test
  condition: always() # not(succeeded())
  jobs:
  - job: PublishCoreDumps
    services:
      artifacts-service:
        image: ubuntu:latest
        volumes:
        - /mnt/artifacts
    # I tried this approach with the "resources" above and still doesn't work
    #
    # services:
    #   artifacts-container: artifacts-container
    steps:
    - task: PublishBuildArtifacts@1
      inputs:
        targetPath: /mnt/artifacts
        artifact: test-results
        publishLocation: pipeline
```

Note: I know the commented-out bits don't always work; I've been burned by that before. I added them here just for extra context, and I ran additional experiments with the "resources" approach, which also didn't work. The failure I get is this:

```
touch: cannot touch '/mnt/artifacts/hello-Test.A.__default': No such file or directory
```

I interpret this as: `/mnt/artifacts` does not exist because it was never mounted, so obviously we can't create files there. Meanwhile, what I need is a way to write files from inside the container, from multiple jobs, and publish them on failure.
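To confirm that interpretation before trying more mount configurations, a hypothetical diagnostic step like the following (names are illustrative, not from the original pipeline) can be dropped into any job to show whether the path is actually mounted inside the container:

```yaml
# Hypothetical diagnostic step: report whether /mnt/artifacts is mounted
# inside the job container. "mount", "grep", and "ls" are standard tools
# present in most Linux base images.
- script: |
    mount | grep /mnt/artifacts || echo "/mnt/artifacts is not a mount point"
    ls -ld /mnt/artifacts || echo "/mnt/artifacts does not exist"
  displayName: check artifacts mount
  condition: always()
```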

This seems like a natural feature to me, but apparently people don't use Azure Pipelines that way. I'm probably missing more than one thing. Maybe I need to install something inside the container? I'd be happy to do that if it helps.

azure-pipelines
1 Answer
Volumes are useful for sharing data between services, or for persisting data across multiple runs of a job. However, you are using the Microsoft-hosted agent `ubuntu-20.04`, and volumes will not be persisted between jobs, because the host machine is cleaned up after each job completes.
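For reference, this is roughly what a volume mapping on a container resource looks like (a sketch; the container name is illustrative). The `source:destinationPath` form maps a host path into the container, but on Microsoft-hosted agents it can only share data within a single job, since the host is discarded afterwards:

```yaml
# Sketch: container resource with a host-path volume mapping.
# On Microsoft-hosted agents the host VM is recycled after each job,
# so nothing written to /mnt/artifacts survives into the next job.
resources:
  containers:
  - container: my-container   # illustrative name
    image: ubuntu:latest
    volumes:
    - /mnt/artifacts:/mnt/artifacts   # hostPath:containerPath
```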
You can refer to the doc for more details.

Edit:

When you use a container job, the steps execute inside the container rather than on the agent machine.

Please make sure your container meets the requirements for container jobs. With a container job, you can publish your files (core dumps, support bundles, etc.) directly as artifacts, and then download them in the next stage.

Sample YAML:

```yaml
stages:
- stage: Test
  jobs:
  - job: A
    container: ubuntu:20.04
    pool:
      vmImage: 'ubuntu-20.04'
    variables:
      VAR1: blah
    steps:
    - script: |
        hostname  # to make sure we are in the container, by hostname
        echo test >> test1.txt
    - task: PublishPipelineArtifact@1
      inputs:
        targetPath: 'test1.txt'
        artifact: 'test'
        publishLocation: 'pipeline'
  - job: B
    container: ubuntu:20.04
    pool:
      vmImage: 'ubuntu-20.04'
    variables:
      VAR1: blah
    steps:
    - script: |
        hostname  # to make sure we are in the container, by hostname
        echo test2 >> test2.txt
    - task: PublishPipelineArtifact@1
      inputs:
        targetPath: 'test2.txt'
        artifact: 'test2'
        publishLocation: 'pipeline'
- stage: PublishArtifacts
  dependsOn: Test
  condition: always() # not(succeeded())
  jobs:
  - job: PublishCoreDumps
    steps:
    - task: DownloadPipelineArtifact@2
      inputs:
        buildType: 'current'
        artifactName: 'test'
        targetPath: '$(Pipeline.Workspace)/s'
    - task: DownloadPipelineArtifact@2
      inputs:
        buildType: 'current'
        artifactName: 'test2'
        targetPath: '$(Pipeline.Workspace)/s'
    - script: ls -al  # list the files to check the artifacts from stage Test
```

The output of the `ls` command in the final stage is as follows; it contains the files published by the `Test` stage.
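Since the original goal was to publish debug data only on failure, the publish step itself can also be made conditional. A minimal sketch (the script command and file name are hypothetical, but `condition: failed()` is standard Azure Pipelines syntax):

```yaml
# Sketch: publish an artifact only when an earlier step in the job failed.
steps:
- script: /x/y/z              # hypothetical test command that may fail
  displayName: run tests
- task: PublishPipelineArtifact@1
  condition: failed()         # runs only if a previous step failed
  inputs:
    targetPath: 'core.dump'   # hypothetical file produced on crash
    artifact: 'crash-data'
    publishLocation: 'pipeline'
```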
