This article describes how to prevent the Kubernetes scheduler from running all Pods on a single node of a Kubernetes cluster.

Problem Description

I have a Kubernetes cluster with 4 nodes and one master. I am trying to run 5 nginx Pods across all the nodes. Currently, the scheduler sometimes runs all the Pods on one machine and sometimes on different machines.

What happens if a node goes down while all my Pods are running on that same node? We need to avoid this.

How can I force the scheduler to place Pods on the nodes in a round-robin fashion, so that if any node goes down, at least one node still has an NGINX Pod in running mode?

Is this possible? If so, how can we achieve this scenario?

Recommended Answer

Use podAntiAffinity.

Reference: Kubernetes in Action, Chapter 16: Advanced Scheduling

podAntiAffinity with requiredDuringSchedulingIgnoredDuringExecution can be used to prevent Pods matching the same label from being scheduled onto the same hostname. If you prefer a more relaxed constraint, use preferredDuringSchedulingIgnoredDuringExecution.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          # Hard requirement: do not schedule an "nginx" Pod onto a node
          # that already runs one.
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: kubernetes.io/hostname   # anti-affinity scope is the host
            labelSelector:
              matchLabels:
                app: nginx
      containers:
      - name: nginx
        image: nginx:latest
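
If a soft preference is enough, for example when you want more replicas than you have nodes, the same rule can be expressed as a weighted preference instead. Below is a minimal sketch of the affinity section rewritten with preferredDuringSchedulingIgnoredDuringExecution; it assumes it replaces the affinity block in the Deployment above.

      affinity:
        podAntiAffinity:
          # Soft preference: the scheduler tries to spread the Pods across
          # hosts, but will still co-locate them if no other node fits.
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100                 # 1-100; a higher weight means a stronger preference
            podAffinityTerm:
              topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  app: nginx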

Kubelet --max-pods

You can specify the maximum number of Pods for a node in the kubelet configuration, so that if a node goes down, Kubernetes is prevented from saturating the other nodes with Pods from the failed node.
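
As a sketch, this limit can be set either with the kubelet's --max-pods flag or with the maxPods field in a KubeletConfiguration file; the value 10 below is only illustrative and should be sized to your workload.

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 10   # illustrative value: each node accepts at most 10 Pods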

This concludes the article on preventing the Kubernetes scheduler from running all Pods on a single node of a Kubernetes cluster; we hope the recommended answer is helpful.
