Creating a Kubernetes cluster with kubeadm on bare metal

Problem description

I am trying to create a single control-plane cluster with kubeadm on 3 bare-metal nodes (1 master and 2 workers) running Debian 10, with Docker as the container runtime. Each node has an external IP and an internal IP. I want the cluster to be configured on the internal network and reachable from the Internet. I used this command for that (please correct me if something is wrong):

kubeadm init --control-plane-endpoint=10.10.0.1 --apiserver-cert-extra-sans={public_DNS_name},10.10.0.1 --pod-network-cidr=192.168.0.0/16

This is what I got:

kubectl get no -o wide
NAME                           STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION   CONTAINER-RUNTIME
dev-k8s-master-0.public.dns    Ready    master   16h   v1.18.2   10.10.0.1     <none>        Debian GNU/Linux 10 (buster)   4.19.0-8-amd64   docker://19.3.8

The init phase completed successfully, and the cluster is accessible from the Internet. All pods are up and running except coredns, which is expected to start only after a network add-on is applied:

kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml

After applying the network add-on, the coredns pods are still not ready:

kubectl get po -A
NAMESPACE     NAME                                                   READY   STATUS             RESTARTS   AGE
kube-system   calico-kube-controllers-75d56dfc47-g8g9g               0/1     CrashLoopBackOff   192        16h
kube-system   calico-node-22gtx                                      1/1     Running            0          16h
kube-system   coredns-66bff467f8-87vd8                               0/1     Running            0          16h
kube-system   coredns-66bff467f8-mv8d9                               0/1     Running            0          16h
kube-system   etcd-dev-k8s-master-0                                  1/1     Running            0          16h
kube-system   kube-apiserver-dev-k8s-master-0                        1/1     Running            0          16h
kube-system   kube-controller-manager-dev-k8s-master-0               1/1     Running            0          16h
kube-system   kube-proxy-lp6b8                                       1/1     Running            0          16h
kube-system   kube-scheduler-dev-k8s-master-0                        1/1     Running            0          16h

Some logs from the failing pods:

kubectl -n kube-system logs calico-kube-controllers-75d56dfc47-g8g9g
2020-04-22 08:24:55.853 [INFO][1] main.go 88: Loaded configuration from environment config=&config.Config{LogLevel:"info", ReconcilerPeriod:"5m", CompactionPeriod:"10m", EnabledControllers:"node", WorkloadEndpointWorkers:1, ProfileWorkers:1, PolicyWorkers:1, NodeWorkers:1, Kubeconfig:"", HealthEnabled:true, SyncNodeLabels:true, DatastoreType:"kubernetes"}
2020-04-22 08:24:55.855 [INFO][1] k8s.go 228: Using Calico IPAM
W0422 08:24:55.855525       1 client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
2020-04-22 08:24:55.856 [INFO][1] main.go 109: Ensuring Calico datastore is initialized
2020-04-22 08:25:05.857 [ERROR][1] client.go 255: Error getting cluster information config ClusterInformation="default" error=Get https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded
2020-04-22 08:25:05.857 [FATAL][1] main.go 114: Failed to initialize Calico datastore error=Get https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded

coredns:

[INFO] plugin/ready: Still waiting on: "kubernetes"
I0422 08:29:12.275344       1 trace.go:116] Trace[1050055850]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105 (started: 2020-04-22 08:28:42.274382393 +0000 UTC m=+59491.429700922) (total time: 30.000897581s):
Trace[1050055850]: [30.000897581s] [30.000897581s] END
E0422 08:29:12.275388       1 reflector.go:153] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0422 08:29:12.276163       1 trace.go:116] Trace[188478428]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105 (started: 2020-04-22 08:28:42.275499997 +0000 UTC m=+59491.430818380) (total time: 30.000606394s):
Trace[188478428]: [30.000606394s] [30.000606394s] END
E0422 08:29:12.276198       1 reflector.go:153] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0422 08:29:12.277424       1 trace.go:116] Trace[16697023]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105 (started: 2020-04-22 08:28:42.276675998 +0000 UTC m=+59491.431994406) (total time: 30.000689778s):
Trace[16697023]: [30.000689778s] [30.000689778s] END
E0422 08:29:12.277452       1 reflector.go:153] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
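Both failures show the same symptom: pods cannot reach the API server via the kubernetes service ClusterIP (10.96.0.1:443). A quick way to narrow this down is to check, from the node itself, whether that ClusterIP is routable and which IP Calico autodetected (the IPs are taken from the logs above; the commands are illustrative):

```shell
# Can the node reach the kubernetes service ClusterIP at all?
curl -k --connect-timeout 5 https://10.96.0.1:443/version

# Has kube-proxy programmed NAT rules for the kubernetes service?
iptables-save -t nat | grep 10.96.0.1

# Which IP did Calico autodetect for this node?
# (it should be the internal 10.10.0.x address, not the external one)
kubectl get node dev-k8s-master-0.public.dns \
  -o jsonpath='{.metadata.annotations.projectcalico\.org/IPv4Address}'
```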

Any ideas?

kubernetes kubeadm calico bare-metal-server
1 Answer

This answer is to bring attention to the suggestion from @florin:

I have seen similar behavior when there are multiple public interfaces on the node and Calico picks the wrong one.

What I did was set IP_AUTODETECTION_METHOD in the Calico configuration:

The method to use to autodetect the IPv4 address for this host. This is only used when the IPv4 address is being autodetected. See IP autodetection methods for details of valid methods.

Read more here: https://docs.projectcalico.org/reference/node/configuration#ip-autodetection-methods
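As a sketch, assuming the internal 10.10.0.x network lives on eth1 (the interface name is an assumption; substitute your host's actual interface), the setting can be applied to the running calico-node DaemonSet like this:

```shell
# Force Calico to autodetect the node IP from the internal interface
# (eth1 is an example name -- adjust for your hosts)
kubectl -n kube-system set env daemonset/calico-node \
  IP_AUTODETECTION_METHOD=interface=eth1

# Alternatively, let Calico pick whichever interface can reach the
# internal control-plane address:
# kubectl -n kube-system set env daemonset/calico-node \
#   IP_AUTODETECTION_METHOD=can-reach=10.10.0.1
```

Changing the env var triggers a rolling restart of the calico-node pods; afterwards, the detected address can be confirmed in the calico-node logs or in the node's projectcalico.org/IPv4Address annotation.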
