
Question


I raised the pod replicas to something like 50 in a cluster, watched it scale out, and then dropped the replicas back to 1. As it happens, I had disabled scale-down for one node, and I noticed that k8s left the remaining replica on that node. However, when the annotation preventing scale-down is absent, I've seen it remove that node. So somehow k8s makes this decision based on some knowledge of the nodes, or perhaps simply because the oldest pod happened to be the one on that node. Or something else altogether.


After a scale down of k8s pod replicas how does k8s choose which to terminate?

Answer


Roughly speaking, it tries to keep things spread out evenly over the nodes: when scaling down, the ReplicaSet controller prefers to delete pods on nodes that host more replicas of the same set. You can find the code in https://github.com/kubernetes/kubernetes/blob/edbbb6a89f9583f18051218b1adef1def1b777ae/pkg/controller/replicaset/replica_set.go#L801-L827. If the counts are the same, it's effectively random, though.
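To make the "spread out evenly" part concrete, here is a minimal sketch of that ranking idea in Go. The `Pod` struct and `podsToDelete` function are illustrative names, not the real controller API; the actual `getPodsToDelete` also weighs phase, readiness, restart counts, and creation time before the ordering becomes effectively arbitrary.

```go
package main

import (
	"fmt"
	"sort"
)

// Pod is a minimal stand-in for a k8s pod, carrying only the two
// fields this sketch needs (hypothetical type, not the real API).
type Pod struct {
	Name string
	Node string
}

// podsToDelete ranks pods so that those on the most crowded nodes
// come first, loosely mirroring the related-pod ranking applied by
// the ReplicaSet controller when scaling down. It returns the first
// n pods as deletion candidates. Ties are left in input order here;
// the real controller falls back to readiness, restart counts, and
// creation time before the choice becomes effectively random.
func podsToDelete(pods []Pod, n int) []Pod {
	// Count how many replicas each node currently hosts.
	perNode := map[string]int{}
	for _, p := range pods {
		perNode[p.Node]++
	}
	// Sort a copy: pods on nodes with more replicas rank first.
	ranked := append([]Pod(nil), pods...)
	sort.SliceStable(ranked, func(i, j int) bool {
		return perNode[ranked[i].Node] > perNode[ranked[j].Node]
	})
	return ranked[:n]
}

func main() {
	pods := []Pod{
		{Name: "rs-a", Node: "node-1"},
		{Name: "rs-b", Node: "node-2"},
		{Name: "rs-c", Node: "node-2"},
		{Name: "rs-d", Node: "node-2"},
	}
	// Scaling from 4 replicas down to 2: both victims come from
	// node-2, which holds three of the four pods, leaving the
	// replicas more evenly spread.
	for _, p := range podsToDelete(pods, 2) {
		fmt.Println(p.Name, p.Node)
	}
}
```

This is why, in the scenario above, the surviving replica can end up on the scale-down-protected node: once the counts even out, nothing in this ranking distinguishes the remaining candidates.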

