This article explains how to use a TPU in Google Colab.

Problem Description

Google Colab now offers TPUs under the Runtime Accelerator setting. I found an example of how to use a TPU in the official TensorFlow GitHub repository, but the example does not work on Google Colaboratory. It gets stuck on the following line:

tf.contrib.tpu.keras_to_tpu_model(model, strategy=strategy)

When I print the available devices on Colab, it returns [] for the TPU accelerator. Does anyone know how to use a TPU on Colab?
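
(As a quick sanity check: the TPU is a remote worker rather than a local device, so it will not show up in the local device list. What you can check instead is the COLAB_TPU_ADDR environment variable, e.g.:)

import os

# The TPU runs as a remote worker, so it does not appear among the local
# devices; the Colab TPU runtime exposes its address via an env variable.
if 'COLAB_TPU_ADDR' in os.environ:
    print('TPU worker address:', os.environ['COLAB_TPU_ADDR'])
else:
    print('No TPU runtime attached (Runtime > Change runtime type > TPU).')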

Recommended Answer

Here's a Colab-specific TPU example: https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/shakespeare_with_tpu_and_keras.ipynb

The key lines are the ones that connect to the TPU itself:

import os
import tensorflow as tf

# This address identifies the TPU we'll use when configuring TensorFlow.
TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']

...

tpu_model = tf.contrib.tpu.keras_to_tpu_model(
    training_model,
    strategy=tf.contrib.tpu.TPUDistributionStrategy(
        tf.contrib.cluster_resolver.TPUClusterResolver(TPU_WORKER)))
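
Putting it together, a minimal end-to-end training sketch might look like the following. The toy model, random data, and hyperparameters here are illustrative assumptions, not part of the original example; this targets the TF 1.x contrib API shown above:

import os
import numpy as np
import tensorflow as tf

# Toy Keras model, purely for illustration.
training_model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dense(1)
])
# With the TF 1.x TPU-Keras integration, compile with a tf.train optimizer.
training_model.compile(optimizer=tf.train.AdamOptimizer(), loss='mse')

TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']
tpu_model = tf.contrib.tpu.keras_to_tpu_model(
    training_model,
    strategy=tf.contrib.tpu.TPUDistributionStrategy(
        tf.contrib.cluster_resolver.TPUClusterResolver(TPU_WORKER)))

# Random data stands in for a real dataset; keep the batch size divisible
# by 8 so it can be sharded evenly across the 8 TPU cores.
x = np.random.rand(1024, 10).astype(np.float32)
y = np.random.rand(1024, 1).astype(np.float32)
tpu_model.fit(x, y, batch_size=128, epochs=1)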

(Unlike a GPU, using a TPU requires an explicit connection to the TPU worker, so you'll need to adjust your training and inference definitions in order to observe a speedup.)
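
For the inference side, the linked notebook saves the TPU model's weights and loads them back into a CPU copy of the model. A sketch of that pattern, where build_model is a hypothetical helper that recreates the same architecture as training_model:

# Pull the trained weights off the TPU and run inference locally.
tpu_model.save_weights('/tmp/model_weights.h5', overwrite=True)

cpu_model = build_model()  # hypothetical helper: rebuilds the architecture
cpu_model.load_weights('/tmp/model_weights.h5')
predictions = cpu_model.predict(x)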
