This article explains how to handle the PyTorch warning "volatile was removed and now has no effect, use with torch.no_grad() instead". It should be a useful reference for anyone who runs into the same problem.

Problem description

My torch program stopped at this point. I guess I cannot use volatile=True anymore.
How should I change it, and what is the reason it stopped?
And how should I change this code?

images = Variable(images.cuda())
targets = [Variable(ann.cuda(), volatile=True) for ann in targets]

train.py:166: UserWarning: volatile was removed and now has no effect. Use with torch.no_grad(): instead.

Solution

Variable doesn't do anything anymore and has been deprecated since PyTorch 0.4.0; its functionality was merged into the torch.Tensor class. Back then, the volatile flag was used to disable construction of the computation graph for any operation in which the volatile variable was involved. Newer PyTorch versions instead use with torch.no_grad(): to disable construction of the computation graph for anything in the body of the with statement.
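
To make the difference concrete, here is a minimal sketch (not from the original answer) showing that tensors produced inside a torch.no_grad(): block are detached from the graph:

import torch

x = torch.ones(3, requires_grad=True)

y = x * 2                  # built as part of the computation graph
print(y.requires_grad)     # True

with torch.no_grad():
    z = x * 2              # graph construction is disabled here
print(z.requires_grad)     # False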

What you should change depends on your reason for using volatile in the first place. Either way, you probably want to use

images = images.cuda()
targets = [ann.cuda() for ann in targets]
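
As a side note not covered in the original answer: since the same 0.4.0 release, the device-agnostic .to(device) idiom is generally preferred over calling .cuda() directly, for example:

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
images = images.to(device)                 # moves the tensor to GPU if available
targets = [ann.to(device) for ann in targets]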

During training you would use something like the following so that the computation graph is created (assuming standard variable names for model, criterion, and optimizer).

output = model(images)
loss = criterion(output, targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
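
For context, here is a self-contained sketch of that training step; the toy model, criterion, optimizer, and dummy data below are placeholders of my own choosing, not part of the question's code:

import torch
import torch.nn as nn

model = nn.Linear(10, 2)                                  # toy model
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

images = torch.randn(4, 10)                               # dummy batch
targets = torch.tensor([0, 1, 0, 1])                      # dummy labels

output = model(images)             # forward pass builds the computation graph
loss = criterion(output, targets)  # compare predictions against targets
optimizer.zero_grad()              # clear gradients accumulated last step
loss.backward()                    # backpropagate through the graph
optimizer.step()                   # update the parameters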

Since you don't need to perform backpropagation during evaluation you would use with torch.no_grad(): to disable the creation of the computation graph which reduces the memory footprint and speeds up computation.

with torch.no_grad():
    output = model(images)
    loss = criterion(output, targets)
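
One related point the answer doesn't mention: torch.no_grad() only disables gradient tracking. If the model contains layers such as dropout or batch normalization, you also need to switch it to evaluation mode, so a typical evaluation block looks like this:

model.eval()                       # put dropout/batch-norm layers in eval mode
with torch.no_grad():              # disable computation-graph construction
    output = model(images)
    loss = criterion(output, targets)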

This concludes this article on the warning "volatile was removed and now has no effect, use with torch.no_grad() instead". We hope the answer above is helpful; thanks for reading!
