Question
I'm trying to build a Docker image that can run using GPUs. This is my situation:
I have Python 3.6, and I am starting from the image nvidia/cuda:10.0-cudnn7-devel. Torch does not see my GPUs.
nvidia-smi is not working either, returning the error:
> Failed to initialize NVML: Unknown Error
> The command '/bin/sh -c nvidia-smi' returned a non-zero code: 255
I installed the NVIDIA toolkit and nvidia-smi with:
RUN apt install nvidia-cuda-toolkit -y
RUN apt-get install nvidia-utils-410 -y
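For reference, a minimal Dockerfile matching the setup described above might look like the following (the base image tag and apt lines are from the question; the `RUN nvidia-smi` step is an assumed reconstruction of the check that triggers the error):

```dockerfile
# Base image from the question: CUDA 10.0 + cuDNN 7 development image
FROM nvidia/cuda:10.0-cudnn7-devel

# Driver-related packages installed inside the image
RUN apt update && apt install nvidia-cuda-toolkit -y
RUN apt-get install nvidia-utils-410 -y

# A build-time GPU check like this is what fails with
# "Failed to initialize NVML: Unknown Error"
RUN nvidia-smi
```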
Accepted Answer
I figured out the problem: you can't use nvidia-smi during the build (RUN nvidia-smi). Any check related to the availability of the GPUs during the build won't work, because the GPUs are only made visible to the container at run time.
Using CMD /bin/bash and typing the command python3 -c 'import torch; print(torch.cuda.is_available())' inside the running container, I finally get True. I also removed
RUN apt install nvidia-cuda-toolkit -y
RUN apt-get install nvidia-utils-410 -y
as suggested by @RobertCrovella.
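Putting the answer together, a corrected Dockerfile might look like this sketch (the Python and Torch install lines are assumptions for illustration; the key point is that no nvidia-* driver packages are installed and no GPU check runs at build time):

```dockerfile
FROM nvidia/cuda:10.0-cudnn7-devel

# No nvidia-cuda-toolkit / nvidia-utils packages here: the host's driver
# is injected by the NVIDIA container runtime when the container starts.
RUN apt-get update && apt-get install -y python3 python3-pip
RUN pip3 install torch

# GPU checks belong at run time (CMD/ENTRYPOINT), never in a RUN step.
CMD ["python3", "-c", "import torch; print(torch.cuda.is_available())"]
```

Built with `docker build -t torch-gpu .` and run with `docker run --gpus all torch-gpu` (Docker 19.03+ with the NVIDIA Container Toolkit installed on the host), the container should print `True`.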