This article describes how to deal with persistent volumes on GKE Kubernetes; it may be a useful reference for anyone facing the same problem.

Problem description

I am trying to use a persistent volume for my RethinkDB server, but I get this error:

Unable to mount volumes for pod "rethinkdb-server-deployment-6866f5b459-25fjb_default(efd90244-7d02-11e8-bffa-42010a8400b9)": timeout expired waiting for volumes to attach/mount for pod "default"/"rethinkdb-server-deployment-
Multi-Attach error for volume "pvc-f115c85e-7c42-11e8-bffa-42010a8400b9" Volume is already used by pod(s) rethinkdb-server-deployment-58f68c8464-4hn9x

I think Kubernetes deployed a new pod without removing the old one, so the volume cannot be shared between the two because my PVC is ReadWriteOnce. The persistent volume must be created automatically, so I cannot pre-create a persistent disk and format it myself ...

My configuration:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: default
  name: rethinkdb-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi



apiVersion: apps/v1beta1
kind: Deployment
metadata:
  namespace: default
  labels:
    db: rethinkdb
    role: admin
  name: rethinkdb-server-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rethinkdb-server
  template:
    metadata:
      name: rethinkdb-server-pod
      labels:
        app: rethinkdb-server
    spec:
      containers:
      - name: rethinkdb-server
        image: gcr.io/$PROJECT_ID/rethinkdb-server:$LAST_VERSION
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 8080
          name: admin-port
        - containerPort: 28015
          name: driver-port
        - containerPort: 29015
          name: cluster-port
        volumeMounts:
        - mountPath: /data/rethinkdb_data
          name: rethinkdb-storage
      volumes:
      - name: rethinkdb-storage
        persistentVolumeClaim:
          claimName: rethinkdb-pvc

How would you deal with this?

Recommended answer

I see that you have added the PersistentVolumeClaim within a Deployment. I also see that you are trying to scale the node pool.

A PersistentVolumeClaim will work in a Deployment, but only if you do not scale the Deployment. That is why the error message appeared: the volume was already in use by an existing pod when the new pod was created.

Because you are trying to scale the Deployment, the other replicas will try to mount and use the same volume.
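As an aside, the two different ReplicaSet hashes in the pod names above (6866f5b459 vs. 58f68c8464) suggest the collision can also happen during a rolling update, when the old and new pods briefly coexist. If you do stay on a single-replica Deployment, one commonly used workaround (a sketch, not part of the original answer) is switching the update strategy to Recreate, so the old pod is terminated and its ReadWriteOnce volume detached before the replacement pod starts:

```yaml
# Sketch: add this under the existing Deployment's spec.
# The default strategy is RollingUpdate, which briefly runs two pods
# side by side and triggers the Multi-Attach error on a RWO volume.
spec:
  replicas: 1
  strategy:
    type: Recreate
```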

Solution: deploy the PersistentVolumeClaim in a StatefulSet object, not a Deployment. Instructions on how to deploy a StatefulSet can be found in this article. With a StatefulSet, you will be able to attach a PersistentVolumeClaim to a pod and then scale the node pool.
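As a rough illustration of that advice, the Deployment above could be rewritten as a StatefulSet with a volumeClaimTemplates section, so each replica gets its own PVC instead of all replicas fighting over one. This is a minimal sketch: the image, mount path, ports, and 30Gi size mirror the question, while the headless Service name ("rethinkdb") is an assumption you would need to create separately.

```yaml
# Sketch only: StatefulSet equivalent of the Deployment in the question.
# volumeClaimTemplates creates one ReadWriteOnce PVC per replica, which
# avoids the Multi-Attach error when scaling.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: default
  name: rethinkdb-server
spec:
  serviceName: rethinkdb          # assumed headless Service, created separately
  replicas: 1
  selector:
    matchLabels:
      app: rethinkdb-server
  template:
    metadata:
      labels:
        app: rethinkdb-server
    spec:
      containers:
      - name: rethinkdb-server
        image: gcr.io/$PROJECT_ID/rethinkdb-server:$LAST_VERSION
        ports:
        - containerPort: 8080
          name: admin-port
        - containerPort: 28015
          name: driver-port
        - containerPort: 29015
          name: cluster-port
        volumeMounts:
        - mountPath: /data/rethinkdb_data
          name: rethinkdb-storage
  volumeClaimTemplates:           # replaces the standalone PVC manifest
  - metadata:
      name: rethinkdb-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 30Gi
```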

That concludes this article on GKE Kubernetes persistent volumes. Hopefully the recommended answer above is helpful.
