How do I correctly set up health and liveness probes in Helm?

Question

As a stepping stone to solving a more complex problem, I have been following this example step by step: https://blog.gopheracademy.com/advent-2017/kubernetes-ready-service/. The next step I have been trying to learn is deploying the Golang service with a Helm chart instead of a Makefile. That means converting a deployment.yaml that looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .ServiceName }}
  labels:
    app: {{ .ServiceName }}
spec:
  replicas: 3
  selector:
    matchLabels:
      app: {{ .ServiceName }}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 50%
      maxSurge: 1
  template:
    metadata:
      labels:
        app: {{ .ServiceName }}
    spec:
      containers:
      - name: {{ .ServiceName }}
        image: docker.io/<my Dockerhub name>/{{ .ServiceName }}:{{ .Release }}
        imagePullPolicy: Always
        ports:
        - containerPort: 8000
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8000
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8000
        resources:
          limits:
            cpu: 10m
            memory: 30Mi
          requests:
            cpu: 10m
            memory: 30Mi
      terminationGracePeriodSeconds: 30

into a helmDeployment.yaml that looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "mychart.fullname" . }}
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "mychart.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "mychart.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "mychart.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8000
          readinessProbe:
            httpGet:
              path: /readyz
              port: 8000
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
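
For context, the Helm template above pulls nearly all of its settings from values.yaml. A minimal values.yaml matching the references in this template might look like the sketch below; the keys follow the standard helm create scaffold, and the concrete values (repository name, resource figures) are illustrative assumptions rather than anything taken from the question.

replicaCount: 3

image:
  # hypothetical repository; substitute your own image
  repository: docker.io/<my Dockerhub name>/myservice
  pullPolicy: Always
  # an empty tag falls back to .Chart.AppVersion in the template above
  tag: ""

imagePullSecrets: []
podAnnotations: {}
podSecurityContext: {}
securityContext: {}

serviceAccount:
  create: true
  name: ""

autoscaling:
  enabled: false

resources:
  limits:
    cpu: 10m
    memory: 30Mi
  requests:
    cpu: 10m
    memory: 30Mi

nodeSelector: {}
tolerations: []
affinity: {}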

However, when I install the Helm chart, the probes (which work fine without Helm) fail. Specifically, describing the pod shows the error "Warning Unhealthy 16s (x3 over 24s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 503". I have clearly set the probes up wrong in the Helm chart. How do I translate these probes from one system to the other?

kubernetes kubernetes-helm k3s
1 Answer

Solution: What I found is that the probes in the Helm chart needed an initial time delay. When I replaced

livenessProbe:
  httpGet:
    path: /healthz
    port: 8000
readinessProbe:
  httpGet:
    path: /readyz
    port: 8000

with:

livenessProbe:
  httpGet:
    path: /healthz
    port: 8000
  initialDelaySeconds: 15
readinessProbe:
  httpGet:
    path: /readyz
    port: 8000
  initialDelaySeconds: 15

Because the probes were running before the container had fully started, they automatically concluded that it was failing.
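
A further refinement, not part of the original answer but worth sketching: newer helm create scaffolds template the probes out of values.yaml entirely, so initialDelaySeconds can be tuned per environment without editing the template. Assuming the same chart layout as above:

# templates/deployment.yaml - replaces the hard-coded probe blocks
# under the container definition
          livenessProbe:
            {{- toYaml .Values.livenessProbe | nindent 12 }}
          readinessProbe:
            {{- toYaml .Values.readinessProbe | nindent 12 }}

# values.yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8000
  initialDelaySeconds: 15
readinessProbe:
  httpGet:
    path: /readyz
    port: 8000
  initialDelaySeconds: 15

For a container that is only slow on first start, Kubernetes also offers a startupProbe as the more targeted mechanism, but an initial delay is the simpler fix and is what resolved the 503s here.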
