Labels and spec errors in my k3s yaml file


I am trying to deploy an image in k3s, but I am getting the error below. I have made sure there are no syntax errors, and I have also added matchLabels to my spec, but I don't know what is causing the problem.

spec.selector: Required value
spec.template.metadata.labels: Invalid value: map[string]string{...} selector does not match template labels 

Here is my yaml file:

--- 
apiVersion: apps/v1
kind: Deployment
metadata: 
  labels: 
    app: darwin_tritron_server
  name: darwin_tritron_server
  spec: 
    replicas: 3
    selector: 
      matchLabels: 
        app: darwin_tritron_server
    template: 
      metadata: 
        labels: 
          app: darwin_tritron_server
      spec: 
        containers: 
          - 
            args: 
              - "cd /models /opt/tritonserver/bin/tritonserver --model-repository=/models --allow-gpu-metrics=false --strict-model-config=false"
            command: 
              - /bin/sh
              - "-c"
            image: "nvcr.io/nvidia/tritonserver:20.12-py3"
            name: flower
            ports: 
              - 
                containerPort: 8000
                name: http-triton
              - 
                containerPort: 8001
                name: grpc-triton
              - 
                containerPort: 8002
                name: metrics-triton
            resources: 
              limits: 
                nvidia.com/mig-1g.5gb: 1
            volumeMounts: 
              - 
                mountPath: /models
                name: models
        volumes: 
          - 
            name: models
            nfs: 
              path: <path/to/flowerdemo/model/files>
              readOnly: false
              server: "<IP address of the server>"

Any help would be appreciated.

1 Answer

The yaml indentation is wrong: spec is nested under metadata, so the Deployment has no top-level spec.selector and the API server reports the errors you are seeing. Move spec up to the same level as metadata, and try:

--- 
apiVersion: apps/v1
kind: Deployment
metadata: 
  labels: 
    app: darwin_tritron_server
  name: darwin_tritron_server
spec: 
  replicas: 3
  selector: 
    matchLabels: 
      app: darwin_tritron_server
  template: 
    metadata: 
      labels: 
        app: darwin_tritron_server
    spec: 
      containers: 
      - args: 
        - "cd /models /opt/tritonserver/bin/tritonserver --model-repository=/models --allow-gpu-metrics=false --strict-model-config=false"
        command: 
        - /bin/sh
        - "-c"
        image: "nvcr.io/nvidia/tritonserver:20.12-py3"
        name: flower
        ports: 
        - containerPort: 8000
          name: http-triton
        - containerPort: 8001
          name: grpc-triton
        - containerPort: 8002
          name: metrics-triton
        resources: 
          limits: 
            nvidia.com/mig-1g.5gb: 1
        volumeMounts: 
        - mountPath: /models
          name: models
      volumes: 
      - name: models
        nfs: 
          path: <path/to/flowerdemo/model/files>
          readOnly: false
          server: "<IP address of the server>"