This article describes how to set CUDA_VISIBLE_DEVICES for TensorFlow inside a Jupyter notebook, which should be a useful reference for anyone facing the same problem.

Problem Description

I have two GPUs and would like to run two different networks via ipynb simultaneously; however, the first notebook always allocates both GPUs.

Using CUDA_VISIBLE_DEVICES, I can hide devices from Python scripts, but I am unsure how to do so within a notebook.

Is there any way to hide different GPUs from notebooks running on the same server?

Solution

You can set environment variables in the notebook using os.environ. Do the following before initializing TensorFlow to limit it to the first GPU:

import os

# Make CUDA's device numbering match nvidia-smi, then expose only the first GPU
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"   # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

You can double-check that the correct devices are visible to TF:

from tensorflow.python.client import device_lib

# Lists the devices TensorFlow can see; only the GPU(s) left visible should appear
print(device_lib.list_local_devices())

I tend to use this from a utility module such as notebook_util:

import notebook_util
notebook_util.pick_gpu_lowest_memory()
import tensorflow as tf
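The notebook_util module itself is not shown in the answer, so its contents are an assumption. A minimal sketch of what pick_gpu_lowest_memory() could look like, assuming nvidia-smi is available on the PATH of a machine with NVIDIA GPUs:

# notebook_util.py -- hypothetical sketch, not the original author's module
import os
import subprocess

def pick_gpu_lowest_memory():
    """Set CUDA_VISIBLE_DEVICES to the GPU with the least memory currently in use."""
    output = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,noheader,nounits"]
    )
    memory_used = [int(line) for line in output.decode().strip().splitlines()]
    best_gpu = min(range(len(memory_used)), key=lambda i: memory_used[i])
    # PCI_BUS_ID keeps CUDA's device numbering consistent with nvidia-smi's
    os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
    os.environ["CUDA_VISIBLE_DEVICES"] = str(best_gpu)
    return best_gpu

Because it only mutates environment variables, such a helper must run before the first import of TensorFlow, as in the snippet above.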

That concludes this article on setting CUDA_VISIBLE_DEVICES for TensorFlow inside a Jupyter notebook; hopefully the answer above is helpful.
