Kubernetes pod creation failing with a DNS i/o timeout

Problem description

I have recently been setting up Kubernetes on Ubuntu 22.04. I joined 2 worker nodes to the control plane. The pods on one worker node started successfully, but the pods on the other worker node have not started; they keep returning an ImagePullBackOff error.

Here is the error:

samrat@k8smaster01:~$ kubectl get pods -A
NAMESPACE     NAME                                                    READY   STATUS                  RESTARTS       AGE
kube-system   calico-kube-controllers-57758d645c-f5kn7                1/1     Running                 0              67m
kube-system   calico-node-2xndp                                       0/1     Init:ImagePullBackOff   0              12m
kube-system   calico-node-m2hxw                                       1/1     Running                 0              67m
kube-system   calico-node-phbxs                                       1/1     Running                 0              66m
kube-system   coredns-76f75df574-8d75k                                1/1     Running                 0              135m
kube-system   coredns-76f75df574-vkdlb                                1/1     Running                 0              135m
kube-system   etcd-k8smaster01.pubalibank.com.bd                      1/1     Running                 31 (98m ago)   135m
kube-system   kube-apiserver-k8smaster01.pubalibank.com.bd            1/1     Running                 3 (98m ago)    135m
kube-system   kube-controller-manager-k8smaster01.pubalibank.com.bd   1/1     Running                 27 (98m ago)   135m
kube-system   kube-proxy-4mk2r                                        0/1     ErrImagePull            0              8m21s
kube-system   kube-proxy-8fzxk                                        1/1     Running                 0              66m
kube-system   kube-proxy-rt8z5                                        1/1     Running                 1 (98m ago)    135m
kube-system   kube-scheduler-k8smaster01.pubalibank.com.bd            1/1     Running                 1 (98m ago)    135m

Here are the events for the problematic pod:

Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  9m56s                   default-scheduler  Successfully assigned kube-system/kube-proxy-4mk2r to k8sworker02.pubalibank.com.bd
  Warning  Failed     9m40s                   kubelet            Failed to pull image "registry.k8s.io/kube-proxy:v1.29.3": failed to pull and unpack image "registry.k8s.io/kube-proxy:v1.29.3": failed to resolve reference "registry.k8s.io/kube-proxy:v1.29.3": failed to do request: Head "https://registry.k8s.io/v2/kube-proxy/manifests/v1.29.3": dial tcp: lookup registry.k8s.io on 127.0.0.53:53: read udp 127.0.0.1:36562->127.0.0.53:53: i/o timeout
  Warning  Failed     9m9s                    kubelet            Failed to pull image "registry.k8s.io/kube-proxy:v1.29.3": failed to pull and unpack image "registry.k8s.io/kube-proxy:v1.29.3": failed to resolve reference "registry.k8s.io/kube-proxy:v1.29.3": failed to do request: Head "https://registry.k8s.io/v2/kube-proxy/manifests/v1.29.3": dial tcp: lookup registry.k8s.io on 127.0.0.53:53: read udp 127.0.0.1:50081->127.0.0.53:53: i/o timeout
  Warning  Failed     8m22s                   kubelet            Failed to pull image "registry.k8s.io/kube-proxy:v1.29.3": failed to pull and unpack image "registry.k8s.io/kube-proxy:v1.29.3": failed to resolve reference "registry.k8s.io/kube-proxy:v1.29.3": failed to do request: Head "https://registry.k8s.io/v2/kube-proxy/manifests/v1.29.3": dial tcp: lookup registry.k8s.io on 127.0.0.53:53: read udp 127.0.0.1:41785->127.0.0.53:53: i/o timeout
  Normal   Pulling    7m35s (x4 over 10m)     kubelet            Pulling image "registry.k8s.io/kube-proxy:v1.29.3"
  Warning  Failed     7m15s (x4 over 9m40s)   kubelet            Error: ErrImagePull
  Warning  Failed     7m15s                   kubelet            Failed to pull image "registry.k8s.io/kube-proxy:v1.29.3": failed to pull and unpack image "registry.k8s.io/kube-proxy:v1.29.3": failed to resolve reference "registry.k8s.io/kube-proxy:v1.29.3": failed to do request: Head "https://registry.k8s.io/v2/kube-proxy/manifests/v1.29.3": dial tcp: lookup registry.k8s.io on 127.0.0.53:53: read udp 127.0.0.1:52376->127.0.0.53:53: i/o timeout
  Warning  Failed     7m4s (x6 over 9m40s)    kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m46s (x14 over 9m40s)  kubelet            Back-off pulling image "registry.k8s.io/kube-proxy:v1.29.3"

Contents of the control plane's resolv.conf:

nameserver 127.0.0.53
options edns0 trust-ad
search .

Here is the resolv.conf of the problematic node:

nameserver 127.0.0.53
options edns0 trust-ad
search .

I believe this is a DNS-related problem, but I cannot figure out what exactly is wrong. Thanks in advance.

kubernetes dns kubernetes-dns
1 Answer

Check the status of the container runtime on that particular node. If it is containerd, you can check the status by running this command:

sudo systemctl status containerd.service

to see whether the containerd service is up and running.
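Since the pod events show the lookup of registry.k8s.io timing out against the local stub resolver, it can also help to reproduce the pull directly through containerd on that worker. A minimal check, assuming containerd is the runtime and the default k8s.io namespace used by kubelet:

# pull the same image kubelet is failing on, bypassing kubelet entirely
sudo ctr -n k8s.io images pull registry.k8s.io/kube-proxy:v1.29.3

If this fails with the same i/o timeout, the problem is name resolution on that node rather than kubelet or the runtime itself.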

Since all cluster-related applications run as pods, it looks like the node is having trouble pulling the kube-proxy image from the registry. kube-proxy creates the iptables rules for networking inside the node. It seems you are using Calico as the CNI (Container Network Interface); hopefully you have not edited that YAML file. Check the container runtime; if it is not running, you need to remove containerd and reinstall it. The Kubernetes version should match on every node (kubelet, kubeadm and kubectl), so try reinstalling these packages. You can set the version as an environment variable and use it like this:

VERSION=1.29.1-1.1
sudo apt-get install -y kubelet=$VERSION kubeadm=$VERSION kubectl=$VERSION
sudo apt-mark hold kubelet kubeadm kubectl containerd
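If containerd and the kubelet packages turn out to be healthy, the error message itself points at DNS: the lookup of registry.k8s.io against the systemd-resolved stub at 127.0.0.53 times out. A quick way to inspect the resolver and its upstream servers on the failing worker (assuming systemd-resolved is in use, as the resolv.conf contents suggest):

# show the upstream DNS servers systemd-resolved is actually using
resolvectl status

# try resolving the registry hostname through systemd-resolved
resolvectl query registry.k8s.io

If no upstream DNS server is listed, or the query times out, fix the node's DNS configuration (for example via netplan on Ubuntu 22.04); the image pulls should then succeed.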
