kubeadm join fails: http://localhost:10248/healthz connection refused


I am trying to set up Kubernetes on three virtual machines (following a CentOS 7 tutorial), but unfortunately the workers fail to join. I hope someone has run into this before (I found it twice on the web without an answer) or can guess what the problem might be.

Here is what I get from kubeadm join:

[preflight] running pre-flight checks
        [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

I0902 20:31:15.401693    2032 kernel_validator.go:81] Validating kernel version
I0902 20:31:15.401768    2032 kernel_validator.go:96] Validating kernel config
        [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "192.168.1.30:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.30:6443"
[discovery] Requesting info from "https://192.168.1.30:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.1.30:6443"
[discovery] Successfully established connection with API Server "192.168.1.30:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
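Regarding the IPVS preflight warning at the top (the modprobe hint in the output is missing the module names), loading the modules listed in the warning would look roughly like this; kube-proxy falls back to the iptables proxier anyway, so this only silences the warning and is probably not the cause of the failure below:

# load the kernel modules named in the preflight warning
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4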

Although the kubelet is running:

[root@k8s-worker1 nodesetup]# systemctl status kubelet -l
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since So 2018-09-02 20:31:15 CEST; 19min ago
     Docs: https://kubernetes.io/docs/
 Main PID: 2093 (kubelet)
    Tasks: 7
   Memory: 12.1M
   CGroup: /system.slice/kubelet.service
           └─2093 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni

Sep 02 20:31:15 k8s-worker1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Sep 02 20:31:15 k8s-worker1 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
Sep 02 20:31:15 k8s-worker1 kubelet[2093]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 02 20:31:15 k8s-worker1 kubelet[2093]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 02 20:31:16 k8s-worker1 kubelet[2093]: I0902 20:31:16.440010    2093 server.go:408] Version: v1.11.2
Sep 02 20:31:16 k8s-worker1 kubelet[2093]: I0902 20:31:16.440314    2093 plugins.go:97] No cloud provider specified.
[root@k8s-worker1 nodesetup]# 

As far as I can tell, the worker can reach the master, but it then tries to run a health check against a local endpoint that is not up yet. Any ideas?
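For reference, this is roughly how I poke at that endpoint and at the kubelet on the worker (port 10248 is the kubelet's own healthz port, served locally on the node):

# probe the kubelet's local healthz endpoint directly
curl -sSL http://localhost:10248/healthz
# and check the kubelet's own logs for the reason it is not serving
journalctl -xeu kubelet | tail -n 50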

Here is how I set up the worker:

exec bash
setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux


echo "Setting Firewallrules"
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10255/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --permanent --add-port=6783/tcp
firewall-cmd --reload


echo "And enable br filtering"
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables


echo "disable swap"
swapoff -a
echo "### You need to edit /etc/fstab and comment the swapline!! ###"


echo "Adding kubernetes repo for download"
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF


echo "install the Docker-ce dependencies"
yum install -y yum-utils device-mapper-persistent-data lvm2

echo "add docker-ce repository"
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

echo "install docker ce"
yum install -y docker-ce

echo "Install kubeadm kubelet kubectl"
yum install kubelet kubeadm kubectl -y

echo "start and enable kubectl"
systemctl restart docker && systemctl enable docker
systemctl restart kubelet && systemctl enable kubelet

echo "Now we need to ensure that both Docker-ce and Kubernetes belong to the same control group (cgroup)"

echo "We assume that docker is using cgroupfs ... assuming kubelet does so too"
docker info | grep -i cgroup
grep -i cgroup /var/lib/kubelet/kubeadm-flags.env
#  old style
# sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

systemctl daemon-reload
systemctl restart kubelet

# There has been an issue reported that traffic in iptable is been routed incorrectly.
# Below settings will make sure IPTable is configured correctly.
#
sudo bash -c 'cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF'

# Make changes effective
sudo sysctl --system
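Since the script only prints a reminder to edit /etc/fstab by hand, here is a small sketch of how the swap entry could be commented out persistently (assuming a default CentOS 7 fstab with a single swap line):

# comment out the swap entry so swap stays off after a reboot
sed -i '/ swap / s/^\([^#]\)/#\1/' /etc/fstab
swapoff -a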

Thanks in advance for your help.

Update I

journalctl output from the worker:

[root@k8s-worker1 ~]# journalctl -xeu kubelet
Sep 02 21:19:56 k8s-worker1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit kubelet.service has finished starting up.
-- 
-- The start-up result is done.
Sep 02 21:19:56 k8s-worker1 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
-- Subject: Unit kubelet.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit kubelet.service has begun starting up.
Sep 02 21:19:56 k8s-worker1 kubelet[3082]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --confi
Sep 02 21:19:56 k8s-worker1 kubelet[3082]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --confi
Sep 02 21:19:56 k8s-worker1 kubelet[3082]: I0902 21:19:56.788059    3082 server.go:408] Version: v1.11.2
Sep 02 21:19:56 k8s-worker1 kubelet[3082]: I0902 21:19:56.788214    3082 plugins.go:97] No cloud provider specified.
Sep 02 21:19:56 k8s-worker1 kubelet[3082]: F0902 21:19:56.814469    3082 server.go:262] failed to run Kubelet: cannot create certificate signing request: Unauthorized
Sep 02 21:19:56 k8s-worker1 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Sep 02 21:19:56 k8s-worker1 systemd[1]: Unit kubelet.service entered failed state.
Sep 02 21:19:56 k8s-worker1 systemd[1]: kubelet.service failed.

And kubectl get pods on the master side gives:

[root@k8s-master ~]# kubectl get pods --all-namespaces=true
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-79n2m             0/1       Pending   0          1d
kube-system   coredns-78fcdf6894-tlngr             0/1       Pending   0          1d
kube-system   etcd-k8s-master                      1/1       Running   3          1d
kube-system   kube-apiserver-k8s-master            1/1       Running   0          1d
kube-system   kube-controller-manager-k8s-master   0/1       Evicted   0          1d
kube-system   kube-proxy-2x8cx                     1/1       Running   3          1d
kube-system   kube-scheduler-k8s-master            1/1       Running   0          1d
[root@k8s-master ~]# 

Update II: As a next step I generated a new token on the master side and used that token in the join command. Although the master's token list shows it as valid, the worker node still insists that the master does not know the token or that it has expired... Stop! Time to start over from scratch, beginning with the master setup.
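For completeness, this is the kind of thing I ran on the master to inspect and re-issue the bootstrap token (kubeadm token list and kubeadm token create are standard kubeadm subcommands; --print-join-command prints the complete join line including the CA cert hash):

# on the master: show existing bootstrap tokens and their TTL
kubeadm token list
# create a fresh token and print a matching kubeadm join command
kubeadm token create --print-join-command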

So this is what I did:

1) Reset the master VM, meaning a fresh CentOS 7 install (CentOS-7-x86_64-Minimal-1804.iso) on VirtualBox. Configured the VirtualBox networking: adapter1 as NAT to the host system (for installing things) and adapter2 as an internal network (the same one for the master and worker nodes of the Kubernetes network).

2) After installing the fresh image, the primary interface enp0s3 was not configured to come up at boot (so ifup enp0s3, and reconfigured it in /etc/sysconfig/network-scripts to come up at boot).
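A minimal way to make that interface come up at boot (assuming the default ifcfg file name for enp0s3):

# enable the interface at boot and bring it up now
sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-enp0s3
ifup enp0s3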

3) Configure the second interface for the internal Kubernetes network:

/etc/hosts:

#!/bin/sh
echo '192.168.1.30 k8s-master' >> /etc/hosts
echo '192.168.1.40 k8s-worker1' >> /etc/hosts
echo '192.168.1.50 k8s-worker2' >> /etc/hosts

Identified my second interface via "ip -color -human addr", which in my case showed enp0s8, so:

#!/bin/sh
echo "Setting up internal Interface"
cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-enp0s8
DEVICE=enp0s8
IPADDR=192.168.1.30
NETMASK=255.255.255.0
NETWORK=192.168.1.0
BROADCAST=192.168.1.255
ONBOOT=yes
NAME=enp0s8
EOF

echo "Activate interface"
ifup enp0s8

4) Hostname, swap, disable SELinux

#!/bin/sh
echo "Setting hostname und deactivate SELinux"
hostnamectl set-hostname 'k8s-master'
exec bash
setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

echo "disable swap"
swapoff -a

echo "### You need to edit /etc/fstab and comment the swapline!! ###"

A few comments here: I rebooted because I noticed that the later preflight checks seem to parse /etc/fstab to verify that swap is gone. Also, CentOS appears to re-enable SELinux (I need to check this later); as a workaround I disable it again after every reboot.
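To verify that the SELinux change actually survives the reboot, I would check both the runtime state and the persisted config (sestatus and /etc/selinux/config, which /etc/sysconfig/selinux links to):

# runtime status and the persisted setting
sestatus
grep ^SELINUX= /etc/selinux/config
# make the permanent setting explicit if it is still enforcing
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config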

5) Set up the required firewall rules

#!/bin/sh
echo "Setting Firewallrules"
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10251/tcp
firewall-cmd --permanent --add-port=10252/tcp
firewall-cmd --permanent --add-port=10255/tcp
firewall-cmd --reload

echo "And enable br filtering"
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables

6) Add the Kubernetes repository

#!/bin/sh
echo "Adding kubernetes repo for download"
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

7) Install the required packages and configure the services

#!/bin/sh

echo "install the Docker-ce dependencies"
yum install -y yum-utils device-mapper-persistent-data lvm2

echo "add docker-ce repository"
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

echo "install docker ce"
yum install -y docker-ce

echo "Install kubeadm kubelet kubectl"
yum install kubelet kubeadm kubectl -y

echo "start and enable kubectl"
systemctl restart docker && systemctl enable docker
systemctl restart kubelet && systemctl enable kubelet

echo "Now we need to ensure that both Docker-ce and Kubernetes belong to the same control group (cgroup)"
echo "We assume that docker is using cgroupfs ... assuming kubelet does so too"
docker info | grep -i cgroup
grep -i cgroup /var/lib/kubelet/kubeadm-flags.env
#  old style
# sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

systemctl daemon-reload
systemctl restart kubelet

# There has been an issue reported that traffic in iptable is been routed incorrectly.
# Below settings will make sure IPTable is configured correctly.
#
sudo bash -c 'cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF'

# Make changes effective
sudo sysctl --system

8) Initialize the cluster

#!/bin/sh
echo "Init kubernetes. Check join cmd in initProtocol.txt"
kubeadm init --apiserver-advertise-address=192.168.1.30 --pod-network-cidr=192.168.1.0/16 | tee initProtocol.txt

For verification, here is the output of this command:

Init kubernetes. Check join cmd in initProtocol.txt
[init] using Kubernetes version: v1.11.2
[preflight] running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
I0904 21:53:15.271999    1526 kernel_validator.go:81] Validating kernel version
I0904 21:53:15.272165    1526 kernel_validator.go:96] Validating kernel config
        [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.30]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.1.30 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 43.504792 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node k8s-master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node k8s-master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[bootstraptoken] using token: n4yt3r.3c8tuj11nwszts2d
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.1.30:6443 --token n4yt3r.3c8tuj11nwszts2d --discovery-token-ca-cert-hash sha256:466e7972a4b6997651ac1197fdde68d325a7bc41f2fccc2b1efc17515af61172

Remark: this looks fine to me so far, although I am a bit worried that the very latest docker-ce version might cause trouble here...

9) Deploy the pod network

#!/bin/bash

echo "Configure demo cluster usage as root"
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

# Deploy network using flannel
# Taken from the first two matching tutorials on the web
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

# taken from https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml

echo "Try to run kubectl get pods --all-namespaces"
echo "After joining nodes: try to run kubectl get nodes to verify the status"

Here is the output of this command:

Configure demo cluster usage as root
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds created
clusterrole.rbac.authorization.k8s.io/flannel configured
clusterrolebinding.rbac.authorization.k8s.io/flannel configured
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
Try to run kubectl get pods --all-namespaces
After joining nodes: try to run kubectl get nodes to verify the status

So I tried kubectl get pods --all-namespaces and got:

[root@k8s-master nodesetup]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-pflhc             0/1       Pending   0          33m
kube-system   coredns-78fcdf6894-w7dxg             0/1       Pending   0          33m
kube-system   etcd-k8s-master                      1/1       Running   0          27m
kube-system   kube-apiserver-k8s-master            1/1       Running   0          27m
kube-system   kube-controller-manager-k8s-master   0/1       Evicted   0          27m
kube-system   kube-proxy-stfxm                     1/1       Running   0          28m
kube-system   kube-scheduler-k8s-master            1/1       Running   0          27m

[root@k8s-master nodesetup]# kubectl get nodes
NAME         STATUS     ROLES     AGE       VERSION
k8s-master   NotReady   master    35m       v1.11.2

Hmm... what is wrong with my master?
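To dig into why the node is NotReady, the node conditions and the cluster events are the next place I would look (standard kubectl commands):

kubectl describe node k8s-master
kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp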

Some observations:

Sometimes I got connection refused when running kubectl right after startup; it apparently takes a few minutes until the services are up. While looking into that, I found a lot of these in /var/log/firewalld:

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -D PREROUTING' failed: iptables: Bad rule (does a matching rule exist in that chain?).

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -D OUTPUT' failed: iptables: Bad rule (does a matching rule exist in that chain?).

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -F DOCKER' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -X DOCKER' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER-ISOLATION-STAGE-1' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER-ISOLATION-STAGE-1' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER-ISOLATION-STAGE-2' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER-ISOLATION-STAGE-2' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER-ISOLATION' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER-ISOLATION' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -n -L DOCKER' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -n -L DOCKER' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -n -L DOCKER-ISOLATION-STAGE-1' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -n -L DOCKER-ISOLATION-STAGE-2' failed: iptables: No chain/target/match by that name.

2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C DOCKER-ISOLATION-STAGE-1 -j RETURN' failed: iptables: Bad rule (does a matching rule exist in that chain?).

Wrong Docker version? The Docker installation seems to be broken.
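One thing worth knowing here: firewall-cmd --reload wipes the iptables chains that Docker manages, and restarting the Docker daemon recreates them; a minimal check/fix sketch:

# recreate Docker's iptables chains after a firewall-cmd --reload
systemctl restart docker
iptables -t nat -n -L DOCKER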

Is there anything else I can check on the master side? It is getting late - tomorrow I want to try joining my workers again (within the 24-hour lifetime of the initial token).

Update III (after solving the Docker problem)

[root@k8s-master ~]# kubectl get pods --all-namespaces=true
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-pflhc             0/1       Pending   0          10h
kube-system   coredns-78fcdf6894-w7dxg             0/1       Pending   0          10h
kube-system   etcd-k8s-master                      1/1       Running   0          10h
kube-system   kube-apiserver-k8s-master            1/1       Running   0          10h
kube-system   kube-controller-manager-k8s-master   1/1       Running   0          10h
kube-system   kube-flannel-ds-amd64-crljm          0/1       Pending   0          1s
kube-system   kube-flannel-ds-v6gcx                0/1       Pending   0          0s
kube-system   kube-proxy-l2dck                     0/1       Pending   0          0s
kube-system   kube-scheduler-k8s-master            1/1       Running   0          10h
[root@k8s-master ~]# 

The master looks happy now:

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    10h       v1.11.2
[root@k8s-master ~]# 

Stay tuned... after work I will also fix Docker/firewall on the workers and try to join the cluster again (I now know how to issue a new token when needed). So Update IV will follow in about 10 hours.

kubernetes kubeadm kubernetes-health-check
1 Answer

Judging by the attached kubelet log, it looks like your kubeadm token has expired.

Sep 02 21:19:56 k8s-worker1 kubelet[3082]: F0902 21:19:56.814469    3082 server.go:262] failed to run Kubelet: cannot create certificate signing request: Unauthorized

The token's TTL is 24 hours from when the kubeadm init command was issued; have a look at this link for more information.

The system runtime components on your master node do not look healthy either, so I am not sure the cluster can work properly. Since the CoreDNS pods are stuck in Pending, please check the kubeadm troubleshooting document to see whether any pod network provider has been installed on your cluster.
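To check whether a pod network provider is actually installed and running (flannel, for example, deploys as a DaemonSet in kube-system):

kubectl -n kube-system get daemonset
kubectl -n kube-system get pods -o wide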

I would suggest rebuilding the cluster to get a fresh kubeadm token and bootstrap the cluster system components from scratch.
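A rough outline of that rebuild cycle, assuming the same kubeadm version as in the question:

# on every node that should leave the cluster (workers, and the master if rebuilding completely)
kubeadm reset
# on the master, after a fresh kubeadm init and pod network deployment, print a join command with a valid token
kubeadm token create --print-join-command
# then run the printed kubeadm join ... command on each worker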
