
Problem Description

I have a Kubernetes deployment that looks something like this (replaced names and other things with '....'):

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "3"
    kubernetes.io/change-cause: kubectl replace deployment ....
      -f - --record
  creationTimestamp: 2016-08-20T03:46:28Z
  generation: 8
  labels:
    app: ....
  name: ....
  namespace: default
  resourceVersion: "369219"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/....
  uid: aceb2a9e-6688-11e6-b5fc-42010af000c1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ....
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ....
    spec:
      containers:
      - image: gcr.io/..../....:0.2.1
        imagePullPolicy: IfNotPresent
        name: ....
        ports:
        - containerPort: 8080
          protocol: TCP
        resources:
          requests:
            cpu: "0"
        terminationMessagePath: /dev/termination-log
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 2
  observedGeneration: 8
  replicas: 2
  updatedReplicas: 2

The problem I'm observing is that Kubernetes places both replicas (in the deployment I've asked for two) on the same node. If that node goes down, I lose both containers and the service goes offline.

What I want Kubernetes to do is to ensure that it doesn't double up containers on the same node where the containers are the same type - this only consumes resources and doesn't provide any redundancy. I've looked through the documentation on deployments, replica sets, nodes etc. but I couldn't find any options that would let me tell Kubernetes to do this.

Is there a way to tell Kubernetes how much redundancy across nodes I want for a container?

EDIT: I'm not sure labels will work; labels constrain where a pod will run so that it has access to local resources (SSDs) etc. All I want to do is ensure no downtime if a node goes offline.

Answer

I think you're looking for the Affinity/Anti-Affinity Selectors.

Affinity is for co-locating pods: for example, I want my website to try to schedule on the same host as my cache. Anti-affinity is the opposite: don't schedule a pod on a host according to a set of rules.
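
For example, to keep the two replicas of the deployment above off the same node, you could add a podAntiAffinity rule to the pod template. Here is a minimal sketch, assuming the affinity field is available (Kubernetes 1.6+; on older clusters the same rule is expressed via the scheduler.alpha.kubernetes.io/affinity annotation). Since the names in the question are redacted, my-app below is a hypothetical stand-in for the real app label:

spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          # Refuse to schedule this pod onto any node (topologyKey = hostname)
          # that already runs a pod carrying the same app label.
          - labelSelector:
              matchLabels:
                app: my-app   # hypothetical; substitute the deployment's real app label
            topologyKey: kubernetes.io/hostname

With this in place, the scheduler will not put a second pod with that label on a node that already runs one, so the two replicas land on different nodes.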

So for what you're doing, I would take a closer look at these two links: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#never-co-located-in-the-same-node

https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure
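
Note that requiredDuringSchedulingIgnoredDuringExecution is a hard rule: if there are fewer schedulable nodes than replicas, the extra pods stay Pending. If best-effort spreading is enough, the preferred variant expresses the same idea as a weighted preference (again a sketch using the hypothetical my-app label):

      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          # Prefer, but don't insist, that pods with the same app label
          # land on different nodes; weight ranges from 1 to 100.
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: my-app   # hypothetical label value
              topologyKey: kubernetes.io/hostname

This trades guaranteed spreading for the ability to still schedule when not enough nodes are available.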
