Kubeadm join fails and hangs at "Setting node annotation to enable volume controller attach/detach"


I'm having trouble getting a node to join my k8s cluster. It was part of the cluster before, but now it refuses to join. Nothing has changed on the server itself other than leaving and rejoining the cluster.

  • Kubernetes version: 1.14.9-beta (compiled from source for compatibility with kubespawner)
  • OS version: Manjaro Openbox (all systems are at the same patch level)

Here is the journalctl log from the join attempt:

Nov 18 14:56:06 dragon-den kubelet[17701]: Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-con>
Nov 18 14:56:06 dragon-den kubelet[17701]: Flag --allow-privileged has been deprecated, will be removed in a future version
Nov 18 14:56:06 dragon-den kubelet[17701]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubel>
Nov 18 14:56:07 dragon-den kubelet[17701]: I1118 14:56:07.856840   17701 server.go:418] Version: v1.14.9-beta.0.44+500f5aba80d712
Nov 18 14:56:07 dragon-den kubelet[17701]: I1118 14:56:07.857493   17701 plugins.go:103] No cloud provider specified.
Nov 18 14:56:07 dragon-den kubelet[17701]: W1118 14:56:07.857566   17701 server.go:557] standalone mode, no API client
Nov 18 14:56:08 dragon-den kubelet[17701]: W1118 14:56:08.026479   17701 server.go:475] No api server defined - no events will be sent to API server.
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.026530   17701 server.go:629] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.027259   17701 container_manager_linux.go:261] container manager verified user specified cgroup-root exists: []
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.027322   17701 container_manager_linux.go:266] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime>
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.027617   17701 container_manager_linux.go:286] Creating device plugin manager: true
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.027819   17701 state_mem.go:36] [cpumanager] initializing new in-memory state store
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.036358   17701 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.036403   17701 client.go:104] Start docker client with request timeout=2m0s
Nov 18 14:56:08 dragon-den kubelet[17701]: W1118 14:56:08.063851   17701 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.063918   17701 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Nov 18 14:56:08 dragon-den kubelet[17701]: W1118 14:56:08.064142   17701 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 18 14:56:08 dragon-den kubelet[17701]: W1118 14:56:08.068056   17701 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.070412   17701 docker_service.go:253] Docker cri networking managed by kubernetes.io/no-op
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.086972   17701 docker_service.go:258] Docker Info: &{ID:UZIS:BXME:6KSB:34PI:HKEZ:HRT6:I4XW:44VI:AR5M:Q4P7:EGFA:KRDD Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersS>
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.087132   17701 docker_service.go:271] Setting cgroupDriver to systemd
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.099887   17701 remote_runtime.go:62] parsed scheme: ""
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.099950   17701 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.100029   17701 remote_image.go:50] parsed scheme: ""
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.100049   17701 remote_image.go:50] scheme "" not registered, fallback to default scheme
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.100315   17701 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/var/run/dockershim.sock 0  <nil>}]
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.100356   17701 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/var/run/dockershim.sock 0  <nil>}]
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.100381   17701 clientconn.go:796] ClientConn switching balancer to "pick_first"
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.100416   17701 clientconn.go:796] ClientConn switching balancer to "pick_first"
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.100519   17701 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0000cbaa0, CONNECTING
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.100575   17701 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000349b20, CONNECTING
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.100959   17701 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000349b20, READY
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.101021   17701 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0000cbaa0, READY
Nov 18 14:56:28 dragon-den kubelet[17701]: E1118 14:56:28.454199   17701 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
Nov 18 14:56:28 dragon-den kubelet[17701]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.488935   17701 kuberuntime_manager.go:210] Container runtime docker initialized, version: 19.03.4-ce, apiVersion: 1.40.0
Nov 18 14:56:28 dragon-den kubelet[17701]: W1118 14:56:28.489074   17701 volume_host.go:77] kubeClient is nil. Skip initialization of CSIDriverLister
Nov 18 14:56:28 dragon-den kubelet[17701]: W1118 14:56:28.489559   17701 csi_plugin.go:215] kubernetes.io/csi: kubeclient not set, assuming standalone kubelet
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.509315   17701 server.go:1055] Started kubelet
Nov 18 14:56:28 dragon-den kubelet[17701]: W1118 14:56:28.509368   17701 kubelet.go:1387] No api server defined - no node status update will be sent.
Nov 18 14:56:28 dragon-den kubelet[17701]: E1118 14:56:28.509369   17701 kubelet.go:1282] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory ca>
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.509441   17701 server.go:141] Starting to listen on 0.0.0.0:10250
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.510125   17701 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.510164   17701 status_manager.go:148] Kubernetes client is nil, not starting status manager.
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.510176   17701 kubelet.go:1808] Starting kubelet main sync loop.
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.510201   17701 volume_manager.go:248] Starting Kubelet Volume Manager
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.510197   17701 kubelet.go:1825] skipping pod synchronization - [container runtime status check may not have completed yet., PLEG is not healthy: pleg has yet to be successful.]
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.510272   17701 desired_state_of_world_populator.go:130] Desired state populator starts to run
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.510546   17701 server.go:343] Adding debug handlers to kubelet server.
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.610328   17701 kubelet.go:1825] skipping pod synchronization - container runtime status check may not have completed yet.
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.667490   17701 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.708302   17701 cpu_manager.go:155] [cpumanager] starting with none policy
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.708327   17701 cpu_manager.go:156] [cpumanager] reconciling every 10s
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.708362   17701 policy_none.go:42] [cpumanager] none policy: Start
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.710493   17701 reconciler.go:154] Reconciler: start to sync state
Nov 18 14:56:28 dragon-den kubelet[17701]: W1118 14:56:28.732840   17701 container.go:422] Failed to get RecentStats("/libcontainer_17701_systemd_test_default.slice") while determining the next housekeeping: unable to find data in memory>
Nov 18 14:56:28 dragon-den kubelet[17701]: W1118 14:56:28.783045   17701 manager.go:540] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.783441   17701 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach

I suspect my problem is related to "server.go:475] No api server defined - no events will be sent to API server." or to "cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d", but I'm not 100% sure.
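As a quick check on the node itself, the CNI warning can be confirmed directly; the paths below are the defaults that the kubelet log refers to:

```shell
# The kubelet warned: "No networks found in /etc/cni/net.d".
# List any CNI network configs present (empty output means none):
ls -A /etc/cni/net.d/ 2>/dev/null || echo "no CNI config found in /etc/cni/net.d"

# Calico normally writes its config there via its DaemonSet after the
# node has joined, so an empty directory on a node that has not yet
# joined is expected and usually a symptom rather than the root cause.
ls /opt/cni/bin/ 2>/dev/null || echo "no CNI plugin binaries installed"
```

If the directory only fills in after the node registers, the warning can be ignored while debugging the join itself.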

Node config:

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=127.0.0.1"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
#KUBELET_HOSTNAME="--hostname-override=127.0.0.1"

# location of the api-server
#KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"

# Add your own!
KUBELET_ARGS=""
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://127.0.0.1:8080"

Server config:

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
#KUBELET_HOSTNAME="--hostname-override=127.0.0.1"

# location of the api-server
#KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"

# Add your own!
KUBELET_ARGS="--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
              --kubeconfig=/etc/kubernetes/kubelet.conf \
              --config=/var/lib/kubelet/config.yaml \
              --network-plugin=cni \
              --pod-infra-container-image=k8s.gcr.io/pause:3.1 \
              --cgroup-driver=systemd"
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
#KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://127.0.0.1:8080"

I'm also using Calico for networking, in case that matters.

kubernetes kubectl kubeadm
1 Answer

This is leftover state from when this node was part of the cluster. In most cases cleaning it up is not strictly mandatory, but I strongly recommend running a few commands in addition to kubeadm reset to clean up your node after removing it from the cluster.

sudo kubeadm reset -f
sudo systemctl stop kubelet
sudo systemctl stop docker
sudo rm -rf /var/lib/cni/
sudo rm -rf /var/lib/kubelet/*
sudo rm -rf /var/lib/etcd/*
sudo rm -rf /etc/cni/
sudo ifconfig cni0 down
sudo ifconfig flannel.1 down
sudo ifconfig docker0 down
sudo ip link delete cni0
sudo ip link delete flannel.1
sudo systemctl start docker

Some of these commands may fail, depending on your setup.
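Because individual steps can fail harmlessly (for example, a Calico node has no flannel.1 interface), one way to sketch the cleanup is a small wrapper that reports failures but keeps going; the interface names are examples and depend on your CNI plugin:

```shell
#!/bin/sh
# Best-effort node cleanup: run each step, ignoring individual failures.

run() {
    # Execute a command, reporting but not aborting on failure.
    echo "+ $*"
    "$@" || echo "  (ignored: '$*' failed)"
}

run kubeadm reset -f
run systemctl stop kubelet
run systemctl stop docker
run rm -rf /var/lib/cni/
run rm -rf /var/lib/kubelet/
run rm -rf /etc/cni/
# cni0/flannel.1 are flannel's interfaces; Calico nodes may not have them.
for ifc in cni0 flannel.1 docker0; do
    run ip link set "$ifc" down
done
run ip link delete cni0
run ip link delete flannel.1
run systemctl start docker
```

Run it as root on the node being removed; anything that was already absent simply logs an "ignored" line instead of stopping the script.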

I found this script, which was created for exactly this purpose. It's somewhat old, but you may be able to take a few things from it.

Here is a question similar to yours: Reset Kubernetes Cluster.
