aws-load-balancer-controller will not start on Fargate


I want to set up the aws-load-balancer-controller on my EKS cluster, but when it tries to start it shows the following error:

  Warning  FailedScheduling  109s  default-scheduler  0/2 nodes are available: 2 node(s) had untolerated taint {eks.amazonaws.com/compute-type: fargate}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..

After changing the default scheduler to the Fargate scheduler, I got a different error:

Events:
  Type     Reason            Age  From               Message
  ----     ------            ---  ----               -------
  Warning  FailedScheduling  39s  fargate-scheduler  Misconfigured Fargate Profile: pod does not have profile label eks.amazonaws.com/fargate-profile

However, the Fargate profile seems to be correct: CoreDNS is running on Fargate in the same namespace, with the same profile.

Here is the deployment YAML:

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "8"
    meta.helm.sh/release-name: aws-load-balancer-controller
    meta.helm.sh/release-namespace: kube-system
  creationTimestamp: "2023-07-13T10:14:08Z"
  generation: 7
  labels:
    app.kubernetes.io/instance: aws-load-balancer-controller
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: aws-load-balancer-controller
    app.kubernetes.io/version: v2.5.3
    helm.sh/chart: aws-load-balancer-controller-1.5.4
  name: aws-load-balancer-controller
  namespace: kube-system
  resourceVersion: "40259"
  uid: abde0d23-8759-45a9-8bf9-a53089ec567f
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: aws-load-balancer-controller
      app.kubernetes.io/name: aws-load-balancer-controller
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2023-07-13T14:38:36+02:00"
        prometheus.io/port: "8080"
        prometheus.io/scrape: "true"
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: aws-load-balancer-controller
        app.kubernetes.io/name: aws-load-balancer-controller
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app.kubernetes.io/name
                  operator: In
                  values:
                  - aws-load-balancer-controller
              topologyKey: kubernetes.io/hostname
            weight: 100
      containers:
      - args:
        - --cluster-name=dsa-ph-dev-cluster
        - --ingress-class=alb
        image: public.ecr.aws/eks/aws-load-balancer-controller:v2.5.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 2
          httpGet:
            path: /healthz
            port: 61779
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 10
        name: aws-load-balancer-controller
        ports:
        - containerPort: 9443
          name: webhook-server
          protocol: TCP
        - containerPort: 8080
          name: metrics-server
          protocol: TCP
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /tmp/k8s-webhook-server/serving-certs
          name: cert
          readOnly: true
      dnsPolicy: ClusterFirst
      priorityClassName: system-cluster-critical
      restartPolicy: Always
      schedulerName: fargate-scheduler
      securityContext:
        fsGroup: 65534
      serviceAccount: aws-load-balancer-controller
      serviceAccountName: aws-load-balancer-controller
      terminationGracePeriodSeconds: 10
      volumes:
      - name: cert
        secret:
          defaultMode: 420
          secretName: aws-load-balancer-tls
status:
  collisionCount: 1
  conditions:
  - lastTransitionTime: "2023-07-13T10:14:08Z"
    lastUpdateTime: "2023-07-13T10:14:08Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: "2023-07-13T12:48:38Z"
    lastUpdateTime: "2023-07-13T12:48:38Z"
    message: ReplicaSet "aws-load-balancer-controller-8dcdd998d" has timed out progressing.
    reason: ProgressDeadlineExceeded
    status: "False"
    type: Progressing
  observedGeneration: 7
  replicas: 3
  unavailableReplicas: 3
  updatedReplicas: 1

Has anyone run into this error before?

Thanks.

amazon-web-services kubernetes amazon-eks aws-fargate aws-application-load-balancer
2 Answers

1 vote

A Fargate profile has selectors: a namespace, labels, or both. If you want a workload to run on Fargate, it needs to match one of those selectors. For example, if your Fargate profile has a namespace selector for a namespace named fargate, all pods deployed into that namespace will run as Fargate pods. Your deployment is placing the load balancer controller into the kube-system namespace. Is your Fargate profile configured with kube-system as its namespace selector? The error suggests your pods are missing the label

eks.amazonaws.com/fargate-profile
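If the profile's selectors are in doubt, they can be inspected or (re)created with eksctl. A sketch, assuming an eksctl-managed cluster; the cluster name, region, and profile name below are placeholders, so substitute your own:

```shell
# Inspect existing Fargate profiles and their namespace/label selectors.
eksctl get fargateprofile --cluster eks-cluster --region eu-central-1 -o yaml

# Create a profile whose namespace selector covers kube-system, so pods
# deployed there (such as aws-load-balancer-controller) match a profile.
eksctl create fargateprofile \
  --cluster eks-cluster \
  --region eu-central-1 \
  --name fp-kube-system \
  --namespace kube-system
```

Note that the eks.amazonaws.com/fargate-profile label is injected automatically when a pod matches a profile's selectors at admission time, so its absence usually means no profile matched when the pod was created.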


0 votes

Make sure you have created a Fargate profile that includes the kube-system namespace. Then reinstall aws-load-balancer-controller in the following form:

helm upgrade --install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=eks-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set region=YOUR-REGION \
  --set vpcId=vpc-EXAMPLE13feb73 

The following flags are mandatory for Fargate:

--set region=YOUR-REGION
--set vpcId=vpc-EXAMPLE13feb73
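After reinstalling, the scheduling result can be checked from the pod side. A sketch; the label selector is taken from the deployment above, and the node-name pattern is how Fargate placement typically shows up:

```shell
# Pods admitted onto Fargate carry the eks.amazonaws.com/fargate-profile
# label; confirm it is present on the controller pods:
kubectl get pods -n kube-system \
  -l app.kubernetes.io/name=aws-load-balancer-controller \
  --show-labels

# With -o wide, Fargate-backed pods run on nodes named fargate-ip-...:
kubectl get pods -n kube-system \
  -l app.kubernetes.io/name=aws-load-balancer-controller -o wide
```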