This article looks at the question "Why does scaling down a deployment always seem to remove the newest Pods?" and how to handle it. Hopefully it is a useful reference for anyone working through the same problem.

Problem description

(Before I start, I'm using minikube v27 on Windows 10.)

I have created a deployment with the nginx 'hello world' container with a desired count of 2:
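
(The kubectl output that originally followed is omitted here. As a minimal sketch, the setup might look like the following; the deployment name hello-nginx and the stock nginx image are assumptions, since the question does not show the exact manifest:)

    # Create a deployment from the nginx image and scale it to 2 replicas
    kubectl create deployment hello-nginx --image=nginx
    kubectl scale deployment hello-nginx --replicas=2

    # Check the resulting pods
    kubectl get pods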

I actually went into the '2 hours' old pod and edited the index.html file from the welcome message to "broken" - I want to play with k8s to see what it would look like if one pod was 'faulty'.

If I scale this deployment up to more instances and then scale down again, I almost expected k8s to remove the oldest pods, but it consistently removes the newest:
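
(The output is again omitted. Reusing the hypothetical hello-nginx deployment from above, the scale up/down step might look like this:)

    # Scale up, then back down to the original count
    kubectl scale deployment hello-nginx --replicas=4
    kubectl scale deployment hello-nginx --replicas=2

    # Watch which pods get terminated
    kubectl get pods -w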

How do I make it remove the oldest pods first?

(Ideally, I'd like to be able to just say "redeploy everything as the exact same version/image/desired count in a rolling deployment" if that is possible)
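
(A side note for readers on newer clusters than the one in the question: kubectl 1.15 and later can trigger exactly this kind of rolling redeploy with the same image and replica count. Shown again against the hypothetical hello-nginx deployment:)

    # Rolling restart of the same spec/image (requires kubectl >= 1.15)
    kubectl rollout restart deployment hello-nginx
    kubectl rollout status deployment hello-nginx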

Solution

Pod deletion preference is based on an ordered series of checks, defined in code here:

https://github.com/kubernetes/kubernetes/blob/release-1.11/pkg/controller/controller_utils.go#L737

Summarizing - precedence for deletion is given to pods (the kubectl sketch after the list shows where most of these fields are visible):

  • that are unassigned to a node, vs assigned to a node
  • that are in pending or not running state, vs running
  • that are in not-ready, vs ready
  • that have been in ready state for fewer seconds
  • that have higher restart counts
  • that have newer vs older creation times
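
Most of these signals can be inspected directly with kubectl; for example, assuming the hypothetical hello-nginx pods from earlier:

    # READY, STATUS, RESTARTS, AGE and NODE cover most of the checks above
    kubectl get pods -o wide

    # Creation timestamps, oldest first
    kubectl get pods --sort-by=.metadata.creationTimestamp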

These checks are not directly configurable.

Given the rules, if you can make an old pod not ready, or cause an old pod to restart, it will be removed at scale-down time before a newer pod that is ready and has not restarted.
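
For example, one way to make the oldest pod the preferred deletion candidate is to force it to restart. This is only a sketch; the pod name below is a placeholder, and it assumes the official nginx image from the earlier hypothetical setup:

    # Find the oldest pod (names below are placeholders)
    kubectl get pods --sort-by=.metadata.creationTimestamp

    # Stop nginx in that pod; its container exits and restarts, so the
    # pod's restart count increases and its ready time resets
    kubectl exec hello-nginx-abc123 -- nginx -s stop

    # Scaling down should now remove the restarted (old) pod first
    kubectl scale deployment hello-nginx --replicas=1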

There is discussion around use cases for the ability to control deletion priority, which mostly involve workloads that are a mix of job and service, here:

https://github.com/kubernetes/kubernetes/issues/45509

That concludes this article on why scaling down a deployment always seems to remove the newest Pods. Hopefully the answer above is helpful.
