Are these pods inside the overlay network?

Votes: 0 · Answers: 3

How can I confirm whether certain pods in this Kubernetes cluster are running inside the Calico overlay network?


Pod Names:

Specifically, when I run kubectl get pods --all-namespaces, only two pods in the resulting list have the word calico in their names. The other pods, such as etcd and kube-controller-manager, do not have calico anywhere in their names. From what I have seen online, the other pods should have calico in their names too.

$ kubectl get pods --all-namespaces  

NAMESPACE     NAME                                                               READY   STATUS              RESTARTS   AGE  
kube-system   calico-node-l6jd2                                                  1/2     Running             0          51m  
kube-system   calico-node-wvtzf                                                  1/2     Running             0          51m  
kube-system   coredns-86c58d9df4-44mpn                                           0/1     ContainerCreating   0          40m  
kube-system   coredns-86c58d9df4-j5h7k                                           0/1     ContainerCreating   0          40m  
kube-system   etcd-ip-10-0-0-128.us-west-2.compute.internal                      1/1     Running             0          50m  
kube-system   kube-apiserver-ip-10-0-0-128.us-west-2.compute.internal            1/1     Running             0          51m  
kube-system   kube-controller-manager-ip-10-0-0-128.us-west-2.compute.internal   1/1     Running             0          51m  
kube-system   kube-proxy-dqmb5                                                   1/1     Running             0          51m  
kube-system   kube-proxy-jk7tl                                                   1/1     Running             0          51m  
kube-system   kube-scheduler-ip-10-0-0-128.us-west-2.compute.internal            1/1     Running             0          51m  


Stdout from applying Calico:

Applying Calico produced the following stdout:

$ sudo kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml  

configmap/calico-config created  
service/calico-typha created  
deployment.apps/calico-typha created  
poddisruptionbudget.policy/calico-typha created  
daemonset.extensions/calico-node created  
serviceaccount/calico-node created  
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created  
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created  
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created  
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created  
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created  
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created  
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created  
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created  
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created  


How the cluster was created:

The commands used to install the cluster were:

$ sudo -i 
# kubeadm init --kubernetes-version 1.13.1 --pod-network-cidr 192.168.0.0/16 | tee kubeadm-init.out
# exit 
$ sudo mkdir -p $HOME/.kube
$ sudo chown -R lnxcfg:lnxcfg /etc/kubernetes
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config 
$ sudo kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
$ sudo kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml  

This is running on AWS, on Amazon Linux 2 hosts.

amazon-web-services kubernetes containers kubeadm project-calico
3 Answers
0 votes

According to the official documentation (https://docs.projectcalico.org/v3.6/getting-started/kubernetes/), this looks fine. The docs also contain the remaining commands needed to enable Calico, and the front page includes a demo showing what a verified installation looks like.
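For the verification step, the quickstart essentially has you watch the kube-system pods until everything reports Running. A sketch of that check (the exact invocation may vary between doc versions):

$ watch kubectl get pods --all-namespaces

It should eventually settle into output like this: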

NAMESPACE    NAME                                       READY  STATUS   RESTARTS  AGE
kube-system  calico-kube-controllers-6ff88bf6d4-tgtzb   1/1    Running  0         2m45s
kube-system  calico-node-24h85                          2/2    Running  0         2m43s
kube-system  coredns-846jhw23g9-9af73                   1/1    Running  0         4m5s
kube-system  coredns-846jhw23g9-hmswk                   1/1    Running  0         4m5s
kube-system  etcd-jbaker-1                              1/1    Running  0         6m22s
kube-system  kube-apiserver-jbaker-1                    1/1    Running  0         6m12s
kube-system  kube-controller-manager-jbaker-1           1/1    Running  0         6m16s
kube-system  kube-proxy-8fzp2                           1/1    Running  0         5m16s
kube-system  kube-scheduler-jbaker-1                    1/1    Running  0         5m41s

0 votes

Could you tell me where you found documentation saying that the other pods would also have calico in their names?

As far as I know, in the kube-system namespace the scheduler, API server, controller manager, and proxy are provided by Kubernetes itself, which is why there is no calico in their naming convention.

One more thing: Calico applies to the pods you create for the actual applications you want to run on k8s, not to the Kubernetes control plane.
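One quick way to see this in your own cluster is to check the hostNetwork field on a control-plane pod (using the apiserver pod name from your listing; hostNetwork is a standard Pod spec field, nothing Calico-specific):

$ kubectl get pod -n kube-system kube-apiserver-ip-10-0-0-128.us-west-2.compute.internal \
    -o jsonpath='{.spec.hostNetwork}'
true

Control-plane pods report true because they run directly in the node's network namespace, so they never get an address from the Calico pool.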

Are you facing any issues with cluster creation? If so, that would be a different question.

Hope this helps.


0 votes

This is normal and expected behavior: only a few pods have names starting with calico. They are created when Calico is initialized or when new nodes are added to the cluster.
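Since your apply output shows calico-node created as a DaemonSet, there is exactly one calico-node pod per node, which is why the count only grows when nodes join. A quick sanity check:

$ kubectl get daemonset calico-node -n kube-system

The DESIRED and READY counts should match the number of nodes in the cluster.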

etcd-*, kube-apiserver-*, kube-controller-manager-*, coredns-*, kube-proxy-*, and kube-scheduler-* are mandatory system components whose pods do not depend on Calico, so their names are based on the system components themselves.
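The components whose names end in the node name (etcd, kube-apiserver, kube-controller-manager, kube-scheduler) run as static pods whose manifests live on the control-plane node itself. Assuming default kubeadm paths, you can list them with:

$ ls /etc/kubernetes/manifests
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml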

Also, as @Jonathan_M already wrote, Calico does not apply to the K8s control plane, only to newly created pods.

You can verify whether pods are inside the overlay network by using kubectl get pods --all-namespaces -o wide.

My example:

kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE   READINESS GATES
default       my-nginx-76bf4969df-4fwgt               1/1     Running   0          14s   192.168.1.3   kube-calico-2   <none>           <none>
default       my-nginx-76bf4969df-h9w9p               1/1     Running   0          14s   192.168.1.5   kube-calico-2   <none>           <none>
default       my-nginx-76bf4969df-mh46v               1/1     Running   0          14s   192.168.1.4   kube-calico-2   <none>           <none>
kube-system   calico-node-2b8rx                       2/2     Running   0          70m   10.132.0.12   kube-calico-1   <none>           <none>
kube-system   calico-node-q5n2s                       2/2     Running   0          60m   10.132.0.13   kube-calico-2   <none>           <none>
kube-system   coredns-86c58d9df4-q22lx                1/1     Running   0          74m   192.168.0.2   kube-calico-1   <none>           <none>
kube-system   coredns-86c58d9df4-q8nmt                1/1     Running   0          74m   192.168.1.2   kube-calico-2   <none>           <none>
kube-system   etcd-kube-calico-1                      1/1     Running   0          73m   10.132.0.12   kube-calico-1   <none>           <none>
kube-system   kube-apiserver-kube-calico-1            1/1     Running   0          73m   10.132.0.12   kube-calico-1   <none>           <none>
kube-system   kube-controller-manager-kube-calico-1   1/1     Running   0          73m   10.132.0.12   kube-calico-1   <none>           <none>
kube-system   kube-proxy-6zsxc                        1/1     Running   0          74m   10.132.0.12   kube-calico-1   <none>           <none>
kube-system   kube-proxy-97xsf                        1/1     Running   0          60m   10.132.0.13   kube-calico-2   <none>           <none>
kube-system   kube-scheduler-kube-calico-1            1/1     Running   0          73m   10.132.0.12   kube-calico-1   <none>           <none>


kubectl get nodes -o wide
NAME            STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
kube-calico-1   Ready    master   84m   v1.13.4   10.132.0.12   <none>        Ubuntu 16.04.5 LTS   4.15.0-1023-gcp   docker://18.9.2
kube-calico-2   Ready    <none>   70m   v1.13.4   10.132.0.13   <none>        Ubuntu 16.04.6 LTS   4.15.0-1023-gcp   docker://18.9.2

You can see that the K8s control-plane pods use the nodes' own IPs, while the nginx deployment pods already use the Calico 192.168.0.0/16 range.
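If you want to confirm which CIDR Calico is actually handing out, you can inspect the IPPool resource exposed by the CRDs from the manifest above (a sketch; with the Kubernetes-datastore install the default pool is created by calico-node on first start, and its name can vary by Calico version):

$ kubectl get ippools.crd.projectcalico.org -o yaml

The cidr field in the output should match the --pod-network-cidr value passed to kubeadm init.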
