This article walks through a problem with scheduling GPUs in Kubernetes 1.10 and how to verify that the cluster can actually see them; it may be a useful reference if you are hitting the same issue.

Problem Description

I followed the instructions to install nvidia-docker 2 and then installed Kubernetes 1.10 via kubeadm (on RHEL 7). I did the following:

curl -s -L https://nvidia.github.io/nvidia-docker/rhel7.4/nvidia-docker.repo | sudo tee /etc/yum.repos.d/nvidia-docker.repo
yum update

yum install docker

yum install -y nvidia-container-runtime-hook

yum install --downloadonly --downloaddir=/tmp/  nvidia-docker2-2.0.3-1.docker1.13.1.noarch nvidia-container-runtime-2.0.0-1.docker1.13.1.x86_64
rpm -Uhv --replacefiles /tmp/nvidia-container-runtime-2.0.0-1.docker1.13.1.x86_64.rpm /tmp/nvidia-docker2-2.0.3-1.docker1.13.1.noarch.rpm

mkdir -p  /etc/systemd/system/docker.service.d/
cat <<EOF > /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd-current --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES
EOF

cat <<EOF > /etc/docker/daemon.json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF

systemctl restart docker

docker run --rm nvidia/cuda nvidia-smi
# success!
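In addition to the docker run test, docker info can confirm that the daemon.json change took effect, since it reports the registered runtimes and the current default. The grep patterns below are just filters over the standard docker info output; the exact wording of the lines varies slightly by Docker version:

docker info 2>/dev/null | grep -i 'runtimes'         # 'nvidia' should appear in the list
docker info 2>/dev/null | grep -i 'default runtime'  # should report nvidia as the default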

I can even run GPU containers directly and see all of the GPUs from within the container.

However, when I deploy a container with:

resources:
    limits:
        nvidia.com/gpu: 1
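(For reference, this limits block sits under a container's resources field. A minimal stand-alone pod manifest exercising the same request might look like the sketch below; the pod name and command are placeholders, while the image and the group=gpu selector are taken from the steps and the describe output in this question.)

apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test          # placeholder name
spec:
  restartPolicy: OnFailure
  nodeSelector:
    group: gpu                  # same selector the jupyterlab pod uses (see describe output below)
  containers:
  - name: cuda
    image: nvidia/cuda          # same image as the docker run test above
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1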

the pod stays stuck in Pending:

jupyter         jupyterlab-gpu                 0/1       Pending     0          1m        <none>           <none>

Describing the pod shows:

Name:         jupyterlab-gpu
Namespace:    jupyter
Node:         <none>
Labels:       app=jupyterhub
              component=singleuser-server
              heritage=jupyterhub
              hub.jupyter.org/username=me
Annotations:  <none>
Status:       Pending
IP:
Containers:
  notebook:
    Image:      slaclab/slac-jupyterlab-gpu
    Port:       8888/TCP
    Host Port:  0/TCP
    Limits:
      cpu:             2
      memory:          2147483648
      nvidia.com/gpu:  1
    Requests:
      cpu:             500m
      memory:          536870912
      nvidia.com/gpu:  1
    Environment:
      JUPYTERHUB_USER:                me
      JUPYTERLAB_IDLE_TIMEOUT:        43200
      JPY_API_TOKEN:                  1fca7b3d716e4d54a98d8054d17b16fb
      CPU_LIMIT:                      2.0
      JUPYTERHUB_SERVICE_PREFIX:      /user/me/
      MEM_GUARANTEE:                  536870912
      JUPYTERHUB_API_URL:             http://10.103.19.59:8081/hub/api
      JUPYTERHUB_OAUTH_CALLBACK_URL:  /user/me/oauth_callback
      JUPYTERHUB_BASE_URL:            /
      JUPYTERHUB_API_TOKEN:           1fca7b3d716e4d54a98d8054d17b16fb
      CPU_GUARANTEE:                  0.5
      JUPYTERHUB_CLIENT_ID:           user-me
      MEM_LIMIT:                      2147483648
      JUPYTERHUB_HOST:
    Mounts:
      /home/ from generic-user-home (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from no-api-access-please (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  generic-user-home:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  generic-user-home
    ReadOnly:   false
  no-api-access-please:
    Type:        EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
QoS Class:       Burstable
Node-Selectors:  group=gpu
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  14s (x13 over 2m)  default-scheduler  0/8 nodes are available: 1 node(s) were not ready, 6 node(s) didn't match node selector, 7 Insufficient nvidia.com/gpu.
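(The event message reflects two separate constraints: the pod's group=gpu node selector and its nvidia.com/gpu request. The standard kubectl commands below show which nodes satisfy the selector and what resources the scheduler sees there; <gpu-node> is a placeholder for the GPU machine's name.)

kubectl get nodes -l group=gpu                                # nodes matching the pod's node selector
kubectl describe node <gpu-node> | grep -A 8 -i allocatable   # resources the scheduler sees on that node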

I am able to schedule containers onto the node without issue as long as they don't have the GPU resource limit.

Is there a way I can validate that kubectl (?) can 'see' the GPUs?

Recommended Answer

You can view node details via kubectl get nodes -oyaml; nvidia.com/gpu resources will be listed under status.allocatable and status.capacity, alongside cpu and memory.
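For example (a sketch; the grep is just a filter over the full node YAML, and <node-name> stands in for the GPU node's name):

kubectl get nodes -o yaml | grep 'nvidia.com/gpu'   # one entry per GPU node, under capacity and allocatable
kubectl describe node <node-name>                   # Capacity and Allocatable sections list nvidia.com/gpu alongside cpu/memory

If nvidia.com/gpu does not show up at all, the node is not advertising any GPUs to the scheduler, which is consistent with the "Insufficient nvidia.com/gpu" events above.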

That concludes this article on scheduling GPUs in Kubernetes 1.10; hopefully the recommended answer is helpful.
