Kubernetes worker node NotReady after introducing a proxy between the worker node and the control plane

Problem description (votes: 1, answers: 1)

I have set up a Kubernetes cluster with kubeadm: one control plane and one worker node.

Everything was working fine. Then I set up a Squid proxy on the worker node and set http_proxy=http://127.0.0.1:3128 in the kubelet configuration, effectively asking the kubelet to use that proxy to communicate with the control plane.
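Concretely, on a kubeadm node the kubelet runs as a systemd service, so the proxy setting amounts to a drop-in along these lines (the file name and restart commands here are illustrative, not the exact ones from my setup):

# /etc/systemd/system/kubelet.service.d/http-proxy.conf (illustrative path)
[Service]
Environment="HTTP_PROXY=http://127.0.0.1:3128"

# Reload systemd and restart the kubelet so it picks up the environment:
systemctl daemon-reload
systemctl restart kubelet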

Using tcpdump, I can see the network packets from the worker node landing on the control plane, and I am also able to issue the following command from the worker:

kubectl get no --server=https://10.128.0.63:6443
NAME        STATUS     ROLES    AGE    VERSION
k8-cp       Ready      master   6d6h   v1.17.0
k8-worker   NotReady   <none>   6d6h   v1.17.2
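The tcpdump check was run on the control plane along these lines (the interface name eth0 is illustrative; the worker's IP and the API server port come from the details below):

tcpdump -i eth0 host 10.128.0.71 and port 6443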

But the worker's status always stays NotReady. What could I be doing wrong?

I am using Flannel for networking here.

P.S. I had also exported http_proxy=http://127.0.0.1:3128 as an environment variable before issuing

kubectl get no --server=https://10.128.0.63:6443

from the worker node.
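Put together, the sequence run from the worker was:

export http_proxy=http://127.0.0.1:3128
kubectl get no --server=https://10.128.0.63:6443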

In case it matters, here is the node status:

kubectl  describe no k8-worker
Name:               k8-worker
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8-worker
                    kubernetes.io/os=linux
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"fe:04:d6:53:ef:cc"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 10.128.0.71
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 29 Jan 2020 08:08:33 +0000
Taints:             node.kubernetes.io/unreachable:NoExecute
                    node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  k8-worker
  AcquireTime:     <unset>
  RenewTime:       Thu, 30 Jan 2020 11:51:24 +0000
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
  ----             ------    -----------------                 ------------------                ------              -------
  MemoryPressure   Unknown   Thu, 30 Jan 2020 11:48:25 +0000   Thu, 30 Jan 2020 11:52:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  DiskPressure     Unknown   Thu, 30 Jan 2020 11:48:25 +0000   Thu, 30 Jan 2020 11:52:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  PIDPressure      Unknown   Thu, 30 Jan 2020 11:48:25 +0000   Thu, 30 Jan 2020 11:52:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
  Ready            Unknown   Thu, 30 Jan 2020 11:48:25 +0000   Thu, 30 Jan 2020 11:52:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
Addresses:
  InternalIP:  10.128.0.71
  Hostname:    k8-worker
Capacity:
  cpu:                2
  ephemeral-storage:  104844988Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             7493036Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  96625140781
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             7390636Ki
  pods:               110
System Info:
  Machine ID:                 3221f625fa75d20f08bceb4cacf74e20
  System UUID:                6DD87A9F-7F72-5326-5B84-1B3CBC4D9DBE
  Boot ID:                    7412bb51-869f-40de-8b37-dcbad6bf84b4
  Kernel Version:             3.10.0-1062.9.1.el7.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://1.13.1
  Kubelet Version:            v1.17.2
  Kube-Proxy Version:         v1.17.2
PodCIDR:                      10.244.1.0/24
PodCIDRs:                     10.244.1.0/24
Non-terminated Pods:          (3 in total)
  Namespace                   Name                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                           ------------  ----------  ---------------  -------------  ---
  default                     nginx-86c57db685-fvh28         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6d20h
  kube-system                 kube-flannel-ds-amd64-b8vbr    100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6d23h
  kube-system                 kube-proxy-rsr7l               0 (0%)        0 (0%)      0 (0%)           0 (0%)         6d23h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                100m (5%)  100m (5%)
  memory             50Mi (0%)  50Mi (0%)
  ephemeral-storage  0 (0%)     0 (0%)
Events:              <none>

Link to the worker's kubelet logs:

https://pastebin.com/E90FNEXR
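For reference, on a systemd-managed node like this one the kubelet logs can be pulled with something like:

journalctl -u kubelet --no-pager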


Tags: kubernetes kubeadm
1 Answer (score: 2)

The kube-controller-manager / node-controller is responsible for monitoring the "/healthz" endpoint exposed by the kubelet. That matches the node conditions above: the kubelet stopped posting node status once the proxy was introduced, so the node controller marked every condition Unknown and the node stayed NotReady.
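You can also probe that endpoint directly on the worker; 10248 is the kubelet's default healthz port, bound to localhost by default (the --noproxy flag keeps curl from routing the request through the exported http_proxy):

curl --noproxy '*' http://127.0.0.1:10248/healthz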
