kubeadm upgrade plan: "failed to get component configs"

I just upgraded my cluster from 1.16 to 1.17.5. Now I want to upgrade it to 1.18.2 (the latest version).

But the first step, kubeadm upgrade plan, fails.

It looks like some value is missing from my kubeadm-config ConfigMap, but I can't tell which one. I checked the kubeadm-config ConfigMap and the values look fine for 1.17.5.

Any ideas?
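
For reference, the configuration kubeadm reads from the cluster can be dumped with the command the error output below also points to (assuming a kubectl context with access to kube-system):

# kubectl -n kube-system get cm kubeadm-config -o yaml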

# kubeadm upgrade plan --v=5
I0507 14:16:12.685214   16010 plan.go:67] [upgrade/plan] verifying health of cluster
I0507 14:16:12.685280   16010 plan.go:68] [upgrade/plan] retrieving configuration from cluster
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
invalid configuration: kind and apiVersion is mandatory information that needs to be specified in all YAML documents
failed to get component configs
k8s.io/kubernetes/cmd/kubeadm/app/util/config.getInitConfigurationFromCluster
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/util/config/cluster.go:104
k8s.io/kubernetes/cmd/kubeadm/app/util/config.FetchInitConfigurationFromCluster
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/util/config/cluster.go:69
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.enforceRequirements
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/common.go:97
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.runPlan
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/plan.go:69
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.NewCmdPlan.func1
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/plan.go:55
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
        _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:203
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357
[upgrade/config] FATAL
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.enforceRequirements
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/common.go:112
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.runPlan
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/plan.go:69
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.NewCmdPlan.func1
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/plan.go:55
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
        _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:203
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357

Contents of the kubeadm-config ConfigMap:

apiVersion: v1
data:
  ClusterConfiguration: |
    apiServer:
      certSANs:
      - kubernetes
      - kubernetes.default
      - kubernetes.default.svc
      - kubernetes.default.svc.my-cluster
      - 10.0.22.1
      - localhost
      - 127.0.0.1
      - master1.my-cluster
      - master2.my-cluster
      - master3.my-cluster
      - lb-apiserver.kubernetes.local
      - xxx.xxx.xxx.1
      - xxx.xxx.xxx.3
      - xxx.xxx.xxx.2
      extraArgs:
        allow-privileged: "true"
        anonymous-auth: "True"
        apiserver-count: "3"
        authorization-mode: Node,RBAC
        bind-address: 0.0.0.0
        enable-aggregator-routing: "False"
        endpoint-reconciler-type: lease
        insecure-port: "0"
        kubelet-preferred-address-types: InternalDNS,InternalIP,Hostname,ExternalDNS,ExternalIP
        profiling: "False"
        request-timeout: 1m0s
        runtime-config: ""
        service-node-port-range: 30000-32767
        storage-backend: etcd3
      extraVolumes:
      - hostPath: /etc/pki/tls
        mountPath: /etc/pki/tls
        name: etc-pki-tls
        readOnly: true
      - hostPath: /etc/pki/ca-trust
        mountPath: /etc/pki/ca-trust
        name: etc-pki-ca-trust
        readOnly: true
      timeoutForControlPlane: 5m0s
    apiVersion: kubeadm.k8s.io/v1beta2
    certificatesDir: /etc/kubernetes/ssl
    clusterName: my-cluster
    controlPlaneEndpoint: xxx.xxx.xxx.1:6443
    controllerManager:
      extraArgs:
        bind-address: 0.0.0.0
        configure-cloud-routes: "false"
        node-cidr-mask-size: "24"
        node-monitor-grace-period: 40s
        node-monitor-period: 5s
        pod-eviction-timeout: 5m0s
        profiling: "False"
        terminated-pod-gc-threshold: "12500"
    dns:
      imageRepository: docker.io/coredns
      imageTag: 1.6.5
      type: CoreDNS
    etcd:
      external:
        caFile: /etc/ssl/etcd/ssl/ca.pem
        certFile: /etc/ssl/etcd/ssl/node-node1.pem
        endpoints:
        - https://xxx.xxx.xxx.1:2379
        - https://xxx.xxx.xxx.3:2379
        - https://xxx.xxx.xxx.2:2379
        keyFile: /etc/ssl/etcd/ssl/node-node1-key.pem
    imageRepository: gcr.io/google-containers
    kind: ClusterConfiguration
    kubernetesVersion: v1.17.5
    networking:
      dnsDomain: my-cluster
      podSubnet: 10.0.20.0/24
      serviceSubnet: 10.0.22.0/24
    scheduler:
      extraArgs:
        bind-address: 0.0.0.0
  ClusterStatus: |
    apiEndpoints:
      master1.my-cluster:
        advertiseAddress: xxx.xxx.xxx.1
        bindPort: 6443
      master2.my-cluster:
        advertiseAddress: xxx.xxx.xxx.2
        bindPort: 6443
      master3.my-cluster:
        advertiseAddress: xxx.xxx.xxx.3
        bindPort: 6443
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterStatus
kind: ConfigMap
metadata:
  creationTimestamp: "2019-10-16T00:57:59Z"
  name: kubeadm-config
  namespace: kube-system
  resourceVersion: "57269932"
  selfLink: /api/v1/namespaces/kube-system/configmaps/kubeadm-config
  uid: 84cece40-38f9-4c82-8844-3f8c29089d7d

kubernetes kubeadm
1 Answer

I finally found the source of the error: kind and apiVersion were missing from the kubelet-config ConfigMap, not from kubeadm-config. After adding them, the upgrade plan worked. I opened a feature request to add more debug information about which config is affected, so the error message is clearer (https://github.com/kubernetes/kubernetes/issues/91022).
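
For reference, a minimal sketch of the repaired kubelet ConfigMap. The ConfigMap name carries the cluster's minor version, so kubelet-config-1.17 here is an assumption based on this cluster, and the remaining KubeletConfiguration fields are cluster-specific; the sketch only shows where the two missing fields belong:

# kubectl -n kube-system edit cm kubelet-config-1.17

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubelet-config-1.17
  namespace: kube-system
data:
  kubelet: |
    apiVersion: kubelet.config.k8s.io/v1beta1   # this line was missing
    kind: KubeletConfiguration                  # and this one
    # ... the existing KubeletConfiguration fields stay unchanged ...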
