Timeout when accessing a Kubernetes ClusterIP service

Question (6 votes, 4 answers)

I am looking for help to figure out why this basic setup is not working:

Three nodes installed with kubeadm on VirtualBox VMs running on a MacBook:

sudo kubectl get nodes
NAME                STATUS    ROLES     AGE       VERSION
kubernetes-master   Ready     master    4h        v1.10.2
kubernetes-node1    Ready     <none>    4h        v1.10.2
kubernetes-node2    Ready     <none>    34m       v1.10.2

The VirtualBox VMs have 2 adapters: 1) Host-only and 2) NAT. The node IPs of the guest machines are:

kubernetes-master (192.168.56.3)
kubernetes-node1  (192.168.56.4)
kubernetes-node2  (192.168.56.5)

I am using the Flannel pod network (I also tried Calico previously with the same result).

When installing the master node I used this command:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.56.3

I deployed an nginx application whose pods are up and running, one pod on each node:

nginx-deployment-64ff85b579-sk5zs   1/1       Running   0          14m       10.244.2.2   kubernetes-node2
nginx-deployment-64ff85b579-sqjgb   1/1       Running   0          14m       10.244.1.2   kubernetes-node1

I exposed them as a ClusterIP service:

sudo kubectl get services 
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP   22m
nginx-deployment   ClusterIP   10.98.206.211   <none>        80/TCP    14m

Now the problem:

I ssh into kubernetes-node1 and curl the service using the cluster IP:

ssh 192.168.56.4
---
curl 10.98.206.211

Sometimes the request works fine, returning the nginx welcome page. I can see in the logs that these requests are always answered by the pod on the same node (kubernetes-node1). Other requests hang until they time out; I guess those are the ones sent to the pod on the other node (kubernetes-node2).

The same happens the other way around: when I ssh into kubernetes-node2, the pod on that node logs the successful requests while the rest time out.

There seems to be some kind of networking problem where nodes cannot reach pods on other nodes. How can I fix this?
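
One way to narrow this down is to bypass the Service and curl the pod IPs directly from each node, using the pod IPs listed above:

# From kubernetes-node1
curl --max-time 5 10.244.1.2   # pod on this node, expected to work
curl --max-time 5 10.244.2.2   # pod on kubernetes-node2; if this hangs, the pod network itself is broken
# Repeat from kubernetes-node2 with the two IPs swapped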

UPDATE:

I scaled the deployment down to 1 replica, so now there is a single pod, on kubernetes-node2.

If I ssh into kubernetes-node2, every curl works fine. From kubernetes-node1 every request times out.

UPDATE 2:

kubernetes-master ifconfig

cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::20a0:c7ff:fe6f:8271  prefixlen 64  scopeid 0x20<link>
        ether 0a:58:0a:f4:00:01  txqueuelen 1000  (Ethernet)
        RX packets 10478  bytes 2415081 (2.4 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 11523  bytes 2630866 (2.6 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:cd:ce:84:a9  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.3  netmask 255.255.255.0  broadcast 192.168.56.255
        inet6 fe80::a00:27ff:fe2d:298f  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:2d:29:8f  txqueuelen 1000  (Ethernet)
        RX packets 20784  bytes 2149991 (2.1 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 26567  bytes 26397855 (26.3 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.3.15  netmask 255.255.255.0  broadcast 10.0.3.255
        inet6 fe80::a00:27ff:fe09:f08a  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:09:f0:8a  txqueuelen 1000  (Ethernet)
        RX packets 12662  bytes 12491693 (12.4 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4507  bytes 297572 (297.5 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::c078:65ff:feb9:e4ed  prefixlen 64  scopeid 0x20<link>
        ether c2:78:65:b9:e4:ed  txqueuelen 0  (Ethernet)
        RX packets 6  bytes 444 (444.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6  bytes 444 (444.0 B)
        TX errors 0  dropped 15 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 464615  bytes 130013389 (130.0 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 464615  bytes 130013389 (130.0 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1440
        tunnel   txqueuelen 1000  (IPIP Tunnel)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethb1098eb3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet6 fe80::d8a3:a2ff:fedf:4d1d  prefixlen 64  scopeid 0x20<link>
        ether da:a3:a2:df:4d:1d  txqueuelen 0  (Ethernet)
        RX packets 10478  bytes 2561773 (2.5 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 11538  bytes 2631964 (2.6 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

kubernetes-node1 ifconfig

cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.1.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::5cab:32ff:fe04:5b89  prefixlen 64  scopeid 0x20<link>
        ether 0a:58:0a:f4:01:01  txqueuelen 1000  (Ethernet)
        RX packets 199  bytes 41004 (41.0 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 331  bytes 56438 (56.4 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:0f:02:bb:ff  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.4  netmask 255.255.255.0  broadcast 192.168.56.255
        inet6 fe80::a00:27ff:fe36:741a  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:36:74:1a  txqueuelen 1000  (Ethernet)
        RX packets 12834  bytes 9685221 (9.6 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 9114  bytes 1014758 (1.0 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.3.15  netmask 255.255.255.0  broadcast 10.0.3.255
        inet6 fe80::a00:27ff:feb2:23a3  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:b2:23:a3  txqueuelen 1000  (Ethernet)
        RX packets 13263  bytes 12557808 (12.5 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5065  bytes 341321 (341.3 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.1.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::7815:efff:fed6:1423  prefixlen 64  scopeid 0x20<link>
        ether 7a:15:ef:d6:14:23  txqueuelen 0  (Ethernet)
        RX packets 483  bytes 37506 (37.5 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 483  bytes 37506 (37.5 KB)
        TX errors 0  dropped 15 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 3072  bytes 269588 (269.5 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3072  bytes 269588 (269.5 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth153293ec: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet6 fe80::70b6:beff:fe94:9942  prefixlen 64  scopeid 0x20<link>
        ether 72:b6:be:94:99:42  txqueuelen 0  (Ethernet)
        RX packets 81  bytes 19066 (19.0 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 129  bytes 10066 (10.0 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

kubernetes-node2 ifconfig

cni0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.244.2.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::4428:f5ff:fe8b:a76b  prefixlen 64  scopeid 0x20<link>
        ether 0a:58:0a:f4:02:01  txqueuelen 1000  (Ethernet)
        RX packets 184  bytes 36782 (36.7 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 284  bytes 36940 (36.9 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:7f:e9:79:cd  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.5  netmask 255.255.255.0  broadcast 192.168.56.255
        inet6 fe80::a00:27ff:feb7:ff54  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:b7:ff:54  txqueuelen 1000  (Ethernet)
        RX packets 12634  bytes 9466460 (9.4 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8961  bytes 979807 (979.8 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.3.15  netmask 255.255.255.0  broadcast 10.0.3.255
        inet6 fe80::a00:27ff:fed8:9210  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:d8:92:10  txqueuelen 1000  (Ethernet)
        RX packets 12658  bytes 12491919 (12.4 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4544  bytes 297215 (297.2 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.2.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::c832:e4ff:fe3e:f616  prefixlen 64  scopeid 0x20<link>
        ether ca:32:e4:3e:f6:16  txqueuelen 0  (Ethernet)
        RX packets 111  bytes 8466 (8.4 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 111  bytes 8466 (8.4 KB)
        TX errors 0  dropped 15 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 2940  bytes 258968 (258.9 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2940  bytes 258968 (258.9 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

UPDATE 3:

Kubelet logs:

kubernetes-master kubelet logs

kubernetes-node1 kubelet logs

kubernetes-node2 kubelet logs

IP routes:

Master

kubernetes-master:~$ ip route
default via 10.0.3.2 dev enp0s8 proto dhcp src 10.0.3.15 metric 100 
10.0.3.0/24 dev enp0s8 proto kernel scope link src 10.0.3.15 
10.0.3.2 dev enp0s8 proto dhcp scope link src 10.0.3.15 metric 100 
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1 
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink 
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
192.168.56.0/24 dev enp0s3 proto kernel scope link src 192.168.56.3 

Node1

kubernetes-node1:~$ ip route
default via 10.0.3.2 dev enp0s8 proto dhcp src 10.0.3.15 metric 100 
10.0.3.0/24 dev enp0s8 proto kernel scope link src 10.0.3.15 
10.0.3.2 dev enp0s8 proto dhcp scope link src 10.0.3.15 metric 100 
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink 
10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1 
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
192.168.56.0/24 dev enp0s3 proto kernel scope link src 192.168.56.4 

Node2

kubernetes-node2:~$ ip route
default via 10.0.3.2 dev enp0s8 proto dhcp src 10.0.3.15 metric 100 
10.0.3.0/24 dev enp0s8 proto kernel scope link src 10.0.3.15 
10.0.3.2 dev enp0s8 proto dhcp scope link src 10.0.3.15 metric 100 
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink 
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
192.168.56.0/24 dev enp0s3 proto kernel scope link src 192.168.56.5

iptables-save:

kubernetes-master iptables-save

kubernetes-node1 iptables-save

kubernetes-node2 iptables-save
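
Since Flannel is using the VXLAN backend here (the flannel.1 interface with MTU 1450), another way to narrow this down is to capture the VXLAN traffic (UDP port 8472 by default) on both interfaces while curling across nodes, to see which NIC Flannel actually sends on:

# On kubernetes-node1, while curling the pod on kubernetes-node2:
sudo tcpdump -ni enp0s3 udp port 8472   # Host-only interface: traffic is expected here
sudo tcpdump -ni enp0s8 udp port 8472   # NAT interface: traffic showing up here would explain the timeouts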

Tags: kubernetes, virtualbox, kubeadm, flannel, kubernetes-service
4 Answers
3 votes

I had a similar problem with a K8s cluster using Flannel. I had set up the VMs with a NAT NIC for internet connectivity and a Host-only NIC for node-to-node communication. Flannel was picking the NAT NIC for node-to-node communication by default, which obviously does not work in this scenario.

I modified the flannel manifest before deploying it, adding the --iface=enp0s8 argument to point at the Host-only NIC it should use (enp0s8 in my case). In your case it looks like enp0s3 is the correct NIC. Node-to-node communication worked fine after that.
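
For reference, a rough sketch of that change in kube-flannel.yml (assuming the stock Flannel DaemonSet; only the flanneld command changes, shown here with enp0s3 as in the question):

# In the kube-flannel.yml you deployed, add --iface to the flanneld container, e.g.:
#
#   containers:
#   - name: kube-flannel
#     command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--iface=enp0s3" ]
#
# Then re-apply the manifest and recreate the flannel pods:
kubectl apply -f kube-flannel.yml
kubectl -n kube-system delete pod -l app=flannel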

I forgot to mention that I also modified the kube-proxy manifest to include --cluster-cidr=10.244.0.0/16 and --proxy-mode=iptables, which appears to be required as well.
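
On an existing kubeadm cluster those kube-proxy settings live in the kube-proxy ConfigMap, so a rough way to verify and change them (assuming the default names kubeadm creates) is:

# Check what kube-proxy is currently configured with
kubectl -n kube-system get configmap kube-proxy -o yaml | grep -E 'clusterCIDR|mode'
# Adjust if needed, then recreate the kube-proxy pods so they pick up the change
kubectl -n kube-system edit configmap kube-proxy
kubectl -n kube-system delete pod -l k8s-app=kube-proxy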


1 vote

Flushing all the firewall rules with iptables --flush and iptables -t nat --flush, and then restarting docker, fixed it.

check this github issue link
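
A sketch of that sequence (note that flushing also removes the rules installed by kube-proxy and Docker; restarting the services lets them be recreated):

# Flush all filter and NAT rules (this also wipes the kube-proxy/docker chains)
sudo iptables --flush
sudo iptables -t nat --flush
# Restart docker so its chains are rebuilt; kube-proxy re-syncs its own rules
# periodically, or you can recreate its pods to force it immediately:
sudo systemctl restart docker
kubectl -n kube-system delete pod -l k8s-app=kube-proxy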


0 votes

Based on your logs, and the fact that the problem only affects connectivity between nodes through Flannel, I suspect you had a problem with the Flannel CNI during installation.

In the logs from node1 and the master I can see the following messages:

Error adding network: open /run/flannel/subnet.env: no such file or directory
Error while adding to cni network: open /run/flannel/subnet.env: no such file or directory

The root cause could be a networking problem between the virtual machines.

I would recommend creating 2 networks for each instance in the cluster: one with NAT for internet access, and a Host-only one for intra-cluster communication.

As an alternative, if your network allows it, you could use Bridged mode for the VMs' interfaces.

Finally, the only advice I can offer: with the configuration above in place, remove all cluster components and initialize the cluster again. That is the fastest way.


0 votes

I ran into the same problem after a fresh Kubernetes install on a Raspberry Pi cluster with flannel.

The solution was to disable the ufw firewall.
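
If disabling the firewall entirely is not an option, opening the ports Kubernetes and Flannel use may be enough (a sketch with the usual default ports, assuming the VXLAN backend):

# Quick fix from this answer
sudo ufw disable
# Less drastic alternative: allow the relevant traffic instead
sudo ufw allow 6443/tcp     # kube-apiserver
sudo ufw allow 10250/tcp    # kubelet API
sudo ufw allow 8472/udp     # flannel VXLAN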
