Why can a convolutional neural network have low loss but also very low accuracy?


Problem description


I am new to machine learning and am currently trying to train a convolutional neural net with 3 convolutional layers and 1 fully connected layer. I am using a dropout probability of 25% and a learning rate of 0.0001. I have 6000 training images of size 150x200 and 13 output classes. I am using TensorFlow. I am noticing a trend where my loss steadily decreases, but my accuracy increases only slightly and then drops back down again. My training curves are the blue lines and my validation curves are the orange lines; the x-axis is training steps.


I am wondering if there is something I am not understanding, or what the possible causes of this phenomenon could be. From the material I have read, I assumed low loss meant high accuracy. Here is my loss function.

# Note: current TensorFlow requires keyword arguments here; the
# positional form (pred, y) is from an old, deprecated signature.
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
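For reference, here is a minimal NumPy sketch (not TensorFlow) of what that cost line computes: the softmax cross-entropy of each example, averaged over the batch. The logits and one-hot labels below are invented for illustration.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def softmax_cross_entropy(logits, labels_onehot):
    # Per-example cross-entropy, as softmax_cross_entropy_with_logits computes it.
    probs = softmax(logits)
    return -np.sum(labels_onehot * np.log(probs), axis=-1)

# Hypothetical batch of 2 examples with 3 classes.
logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 0.1, 3.0]])
labels = np.array([[1.0, 0.0, 0.0],   # example 1: confidently correct
                   [0.0, 1.0, 0.0]])  # example 2: confidently wrong

losses = softmax_cross_entropy(logits, labels)
cost = np.mean(losses)  # the tf.reduce_mean in the question
```

The confidently wrong second example contributes a much larger loss than the correct first one, which is the property the answer below builds on.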

Recommended answer


That is because Loss and Accuracy are two totally different things (well at least logically)!


Consider an example where you have defined loss as:

loss = (1-accuracy)


In this case when you try to minimize loss, accuracy increases automatically.


Now consider another example where you define loss as:

loss = average(prediction_probabilities)


Though it does not make any sense, it technically is still a valid loss function and your weights are still tuned in order to minimize such loss.


But as you can see, in this case, there is no relation between loss and accuracy so you cannot expect both to increase/decrease at the same time.
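To make this gap concrete, here is a small NumPy sketch with invented numbers (not from the question's model): between two training steps, the cross-entropy loss on an example falls substantially because the probability on the true class grows, yet the argmax prediction is still wrong, so accuracy does not move at all.

```python
import numpy as np

def cross_entropy(probs, label):
    # Negative log-probability assigned to the true class.
    return -np.log(probs[label])

def accuracy(probs, label):
    # 1 if the argmax prediction matches the true class, else 0.
    return int(np.argmax(probs) == label)

true_class = 0

# Earlier step: the model assigns little probability to the true class.
p_early = np.array([0.10, 0.60, 0.30])
# Later step: probability on the true class grew a lot, but the
# argmax is still class 1, so the prediction remains wrong.
p_late = np.array([0.35, 0.40, 0.25])

loss_early = cross_entropy(p_early, true_class)
loss_late = cross_entropy(p_late, true_class)
acc_early = accuracy(p_early, true_class)
acc_late = accuracy(p_late, true_class)
```

Here the loss drops by more than half while accuracy stays at zero, which matches the pattern in the question: the optimizer is genuinely making progress on the loss even when that progress has not yet flipped any argmax predictions.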


Note: Loss will always be minimized (thus your loss decreases after each iteration)!


PS: Please update your question with the loss function you are trying to minimize.
