Kubernetes: kube-scheduler is not scoring nodes correctly when assigning a Pod


I am running Kubernetes with Rancher and I am seeing strange behaviour from kube-scheduler. After adding a third node, I expected pods to start being scheduled onto it. However, even though that new third node, node3, is running almost no pods, kube-scheduler gives it the lowest score of the three nodes, whereas I would expect it to get the highest.

Here are the logs from kube-scheduler:

scheduling_queue.go:815] About to try and schedule pod namespace1/pod1
scheduler.go:456] Attempting to schedule pod: namespace1/pod1
predicates.go:824] Schedule Pod namespace1/pod1 on Node node1 is allowed, Node is running only 94 out of 110 Pods.
predicates.go:1370] Schedule Pod namespace1/pod1 on Node node1 is allowed, existing pods anti-affinity terms satisfied.
predicates.go:824] Schedule Pod namespace1/pod1 on Node node3 is allowed, Node is running only 4 out of 110 Pods.
predicates.go:1370] Schedule Pod namespace1/pod1 on Node node3 is allowed, existing pods anti-affinity terms satisfied.
predicates.go:824] Schedule Pod namespace1/pod1 on Node node2 is allowed, Node is running only 95 out of 110 Pods.
predicates.go:1370] Schedule Pod namespace1/pod1 on Node node2 is allowed, existing pods anti-affinity terms satisfied.
resource_allocation.go:78] pod1 -> node1: BalancedResourceAllocation, capacity 56000 millicores 270255251456 memory bytes, total request 40230 millicores 122473676800 memory bytes, score 7
resource_allocation.go:78] pod1 -> node1: LeastResourceAllocation, capacity 56000 millicores 270255251456 memory bytes, total request 40230 millicores 122473676800 memory bytes, score 3
resource_allocation.go:78] pod1 -> node3: BalancedResourceAllocation, capacity 56000 millicores 270255251456 memory bytes, total request 800 millicores 807403520 memory bytes, score 9
resource_allocation.go:78] pod1 -> node3: LeastResourceAllocation, capacity 56000 millicores 270255251456 memory bytes, total request 800 millicores 807403520 memory bytes, score 9
resource_allocation.go:78] pod1 -> node2: BalancedResourceAllocation, capacity 56000 millicores 270255247360 memory bytes, total request 43450 millicores 133693440000 memory bytes, score 7
resource_allocation.go:78] pod1 -> node2: LeastResourceAllocation, capacity 56000 millicores 270255247360 memory bytes, total request 43450 millicores 133693440000 memory bytes, score 3
generic_scheduler.go:748] pod1_namespace1 -> node1: TaintTolerationPriority, Score: (10)
generic_scheduler.go:748] pod1_namespace1 -> node3: TaintTolerationPriority, Score: (10)
generic_scheduler.go:748] pod1_namespace1 -> node2: TaintTolerationPriority, Score: (10)
selector_spreading.go:146] pod1 -> node1: SelectorSpreadPriority, Score: (10)
selector_spreading.go:146] pod1 -> node3: SelectorSpreadPriority, Score: (10)
selector_spreading.go:146] pod1 -> node2: SelectorSpreadPriority, Score: (10)
generic_scheduler.go:748] pod1_namespace1 -> node1: SelectorSpreadPriority, Score: (10)
generic_scheduler.go:748] pod1_namespace1 -> node3: SelectorSpreadPriority, Score: (10)
generic_scheduler.go:748] pod1_namespace1 -> node2: SelectorSpreadPriority, Score: (10)
generic_scheduler.go:748] pod1_namespace1 -> node1: NodeAffinityPriority, Score: (0)
generic_scheduler.go:748] pod1_namespace1 -> node3: NodeAffinityPriority, Score: (0)
generic_scheduler.go:748] pod1_namespace1 -> node2: NodeAffinityPriority, Score: (0)
 interpod_affinity.go:232] pod1 -> node1: InterPodAffinityPriority, Score: (0)
 interpod_affinity.go:232] pod1 -> node3: InterPodAffinityPriority, Score: (0)
interpod_affinity.go:232] pod1 -> node2: InterPodAffinityPriority, Score: (10)
generic_scheduler.go:803] Host node1 => Score 100040
generic_scheduler.go:803] Host node3 => Score 100038
generic_scheduler.go:803] Host node2 => Score 100050
scheduler_binder.go:256] AssumePodVolumes for pod "namespace1/pod1", node "node2"
scheduler_binder.go:266] AssumePodVolumes for pod "namespace1/pod1", node "node2": all PVCs bound and nothing to do
factory.go:727] Attempting to bind pod1 to node2
kubernetes rancher kube-scheduler rancher-rke
1 Answer

From the logs I can tell that your pod will always end up scheduled on node2, because it looks like you have some kind of pod affinity that gives node2 an extra 10 points, bringing it to 50.
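For reference, a nonzero InterPodAffinityPriority score like that typically comes from a preferred pod-affinity term on the pod itself (or from affinity terms on pods already running on node2). Below is a minimal Go sketch of what such a term can look like, using the k8s.io/api types; the "app: backend" selector and the topology key are placeholders of mine, so check the real spec with kubectl get pod pod1 -n namespace1 -o yaml:

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// A preferred pod-affinity term of this shape is enough for
// InterPodAffinityPriority to hand one node extra points.
// The "app: backend" selector and the topology key are placeholders only.
var affinity = &v1.Affinity{
    PodAffinity: &v1.PodAffinity{
        PreferredDuringSchedulingIgnoredDuringExecution: []v1.WeightedPodAffinityTerm{
            {
                Weight: 100,
                PodAffinityTerm: v1.PodAffinityTerm{
                    LabelSelector: &metav1.LabelSelector{
                        MatchLabels: map[string]string{"app": "backend"},
                    },
                    TopologyKey: "kubernetes.io/hostname",
                },
            },
        },
    },
}

func main() {
    fmt.Printf("%+v\n", affinity)
}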

The strange part is that I get a score of 48 for node3, yet a 10 seems to have gone missing somewhere (the logged total is 38). Maybe it is due to affinity, or to some priority that does not show up in the log, or it is simply a bug in the scheduler's calculation. If you want to find out more, you will probably have to dig into the kube-scheduler code.
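As a cross-check on the resource side, here is a minimal sketch of my own (not the scheduler's actual code), assuming the classic 0-10 formulas behind the LeastResourceAllocation and BalancedResourceAllocation lines, applied to the capacity and request figures printed in your log. It reproduces the logged 3 and 7 for node1 and node2 and the 9 and 9 for node3:

package main

import (
    "fmt"
    "math"
)

// leastRequestedScore: assumed classic 0-10 formula,
// ((capacity - requested) * 10) / capacity, averaged over CPU and memory.
func leastRequestedScore(reqCPU, capCPU, reqMem, capMem int64) int64 {
    cpu := (capCPU - reqCPU) * 10 / capCPU
    mem := (capMem - reqMem) * 10 / capMem
    return (cpu + mem) / 2
}

// balancedAllocationScore: assumed classic 0-10 formula,
// 10 - |cpuFraction - memFraction| * 10, truncated to an integer.
func balancedAllocationScore(reqCPU, capCPU, reqMem, capMem int64) int64 {
    cpuFraction := float64(reqCPU) / float64(capCPU)
    memFraction := float64(reqMem) / float64(capMem)
    return int64(10 - math.Abs(cpuFraction-memFraction)*10)
}

func main() {
    // Capacity and request figures copied from the resource_allocation.go lines above.
    nodes := []struct {
        name           string
        reqCPU, capCPU int64 // millicores
        reqMem, capMem int64 // bytes
    }{
        {"node1", 40230, 56000, 122473676800, 270255251456},
        {"node2", 43450, 56000, 133693440000, 270255247360},
        {"node3", 800, 56000, 807403520, 270255251456},
    }
    for _, n := range nodes {
        fmt.Printf("%s: LeastResourceAllocation=%d BalancedResourceAllocation=%d\n",
            n.name,
            leastRequestedScore(n.reqCPU, n.capCPU, n.reqMem, n.capMem),
            balancedAllocationScore(n.reqCPU, n.capCPU, n.reqMem, n.capMem))
    }
}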

Adding up the per-priority scores, this is what I get:

node1 7 + 3 + 10 + 10 + 10 = 40
node2 7 + 3 + 10 + 10 + 10 + 10 = 50
node3 9 + 9 + 10 + 10 + 10 = 48
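The same tally written out as a quick check against the Host => Score lines: those values are all 100000-and-something, so the sketch below compares each sum against the logged value minus that constant. node1 and node2 come out even; node3 ends up 10 short of what the per-priority scores suggest:

package main

import "fmt"

func main() {
    // Per-priority values from the breakdown above; logged values from the
    // "Host ... => Score" lines, which all carry a constant 100000 offset.
    tally := map[string][]int{
        "node1": {7, 3, 10, 10, 10},
        "node2": {7, 3, 10, 10, 10, 10},
        "node3": {9, 9, 10, 10, 10},
    }
    logged := map[string]int{"node1": 100040, "node2": 100050, "node3": 100038}

    for _, node := range []string{"node1", "node2", "node3"} {
        sum := 0
        for _, s := range tally[node] {
            sum += s
        }
        fmt.Printf("%s: expected %d, scheduler logged %d (delta %d)\n",
            node, sum, logged[node]-100000, logged[node]-100000-sum)
    }
    // Output: node1 and node2 match (40 and 50); node3 is 10 short (38 vs 48).
}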