Kubernetes upgrade from 1.17.5 to 1.17.6 fails


I am attempting a k8s upgrade on a multi-master setup (using keepalived and haproxy) by running:

kubeadm upgrade apply v1.17.6

coming from v1.17.5. However, I am running into the following error:

[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.17.6"
[upgrade/versions] Cluster version: v1.17.5
[upgrade/versions] kubeadm version: v1.17.6
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.17.6"...
Static pod: kube-apiserver-k8s-master02 hash: bb5f938cbb69a4e3c96fbcc0f2e49e19
Static pod: kube-controller-manager-k8s-master02 hash: a036029e87d630db24866f0abb3f8853
Static pod: kube-scheduler-k8s-master02 hash: b95be9ddff954501b1709fd9f2e0a160
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version for this Kubernetes version "v1.17.6" is "3.4.3-0", but the current etcd version is "3.4.3". Won't downgrade etcd, instead just continue
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests962117447"
W0528 22:04:28.039648   13824 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-05-28-22-04-25/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-master02 hash: bb5f938cbb69a4e3c96fbcc0f2e49e19
Static pod: kube-apiserver-k8s-master02 hash: 485fbbf3f31c5f40e0e22cc118531d44
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/apply] FATAL: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher

The verbose output shows:

timed out waiting for the condition
couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced
k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.rollbackOldManifests
    /workspace/anago-v1.17.6-beta.0.42+fd4285294a6370/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade/staticpods.go:515
k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.upgradeComponent
    /workspace/anago-v1.17.6-beta.0.42+fd4285294a6370/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade/staticpods.go:255
k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.StaticPodControlPlane
    /workspace/anago-v1.17.6-beta.0.42+fd4285294a6370/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade/staticpods.go:472
k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.PerformStaticPodUpgrade
    /workspace/anago-v1.17.6-beta.0.42+fd4285294a6370/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade/staticpods.go:606
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.PerformControlPlaneUpgrade
    /workspace/anago-v1.17.6-beta.0.42+fd4285294a6370/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/apply.go:224
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.runApply
    /workspace/anago-v1.17.6-beta.0.42+fd4285294a6370/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/apply.go:164
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.NewCmdApply.func1
    /workspace/anago-v1.17.6-beta.0.42+fd4285294a6370/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/apply.go:79
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
    /workspace/anago-v1.17.6-beta.0.42+fd4285294a6370/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
    /workspace/anago-v1.17.6-beta.0.42+fd4285294a6370/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
    /workspace/anago-v1.17.6-beta.0.42+fd4285294a6370/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
    /workspace/anago-v1.17.6-beta.0.42+fd4285294a6370/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
    _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
    /usr/local/go/src/runtime/proc.go:203
runtime.goexit
    /usr/local/go/src/runtime/asm_amd64.s:1357
[upgrade/apply] FATAL
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.runApply
    /workspace/anago-v1.17.6-beta.0.42+fd4285294a6370/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/apply.go:165
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.NewCmdApply.func1
    /workspace/anago-v1.17.6-beta.0.42+fd4285294a6370/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/apply.go:79
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
    /workspace/anago-v1.17.6-beta.0.42+fd4285294a6370/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
    /workspace/anago-v1.17.6-beta.0.42+fd4285294a6370/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
    /workspace/anago-v1.17.6-beta.0.42+fd4285294a6370/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
    /workspace/anago-v1.17.6-beta.0.42+fd4285294a6370/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
    _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
    /usr/local/go/src/runtime/proc.go:203
runtime.goexit
    /usr/local/go/src/runtime/asm_amd64.s:1357

Can anyone point me in the right direction?
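For context, the pre-upgrade steps assumed here follow the standard kubeadm flow; this is a rough sketch of that flow, not the exact commands from the original post:

# Confirm the kubeadm binary has already been updated to the target version
kubeadm version

# Review the recommended target versions and any blocking pre-flight checks
sudo kubeadm upgrade plan

# Apply the chosen version on the first control-plane node
sudo kubeadm upgrade apply v1.17.6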

kubernetes upgrade kubeadm
1 Answer

Per the docs, you need to drain the control-plane node first, and then run the kubeadm upgrade:

kubectl drain <cp-node-name> --ignore-daemonsets
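A minimal sketch of the per-node sequence this implies, with <cp-node-name> left as a placeholder (the uncordon step is not stated in the answer above, but is part of the same documented flow):

# Mark the control-plane node unschedulable and evict its workloads (DaemonSets stay)
kubectl drain <cp-node-name> --ignore-daemonsets

# Run the upgrade on that node
sudo kubeadm upgrade apply v1.17.6

# Make the node schedulable again once the upgrade completes
kubectl uncordon <cp-node-name>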