This article looks at the question "Why can a convolutional neural network have low loss but also low accuracy?" and its answer, which should be a useful reference for anyone running into the same problem.

Problem Description

I am new to machine learning and am currently trying to train a convolutional neural net with 3 convolutional layers and 1 fully connected layer. I am using a dropout probability of 25% and a learning rate of 0.0001. I have 6000 150x200 training images and 13 output classes. I am using TensorFlow. I am noticing a trend where my loss steadily decreases, but my accuracy increases only slightly and then drops back down again. The training curves are the blue lines and the validation curves are the orange lines; the x axis is steps.
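For context, here is a minimal sketch of the kind of network described above, assuming TensorFlow 1.x and its tf.layers API. The placeholder names, filter counts, kernel sizes, and the choice of optimizer are illustrative assumptions, not the asker's actual code:

import tensorflow as tf

# Placeholders for 150x200 single-channel images and 13 one-hot classes (assumed shapes).
x = tf.placeholder(tf.float32, [None, 150, 200, 1])
y = tf.placeholder(tf.float32, [None, 13])
is_training = tf.placeholder(tf.bool)

# Three convolutional layers followed by one fully connected layer.
net = tf.layers.conv2d(x, filters=32, kernel_size=3, activation=tf.nn.relu)
net = tf.layers.max_pooling2d(net, pool_size=2, strides=2)
net = tf.layers.conv2d(net, filters=64, kernel_size=3, activation=tf.nn.relu)
net = tf.layers.max_pooling2d(net, pool_size=2, strides=2)
net = tf.layers.conv2d(net, filters=128, kernel_size=3, activation=tf.nn.relu)
net = tf.layers.max_pooling2d(net, pool_size=2, strides=2)
net = tf.layers.flatten(net)
net = tf.layers.dropout(net, rate=0.25, training=is_training)  # 25% dropout
pred = tf.layers.dense(net, units=13)                          # logits for the 13 classes

# Cross-entropy loss and an optimizer with the stated learning rate of 0.0001.
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=pred))
train_op = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost)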

I am wondering whether there is something I am not understanding, or what the possible causes of this phenomenon could be. From the material I have read, I assumed low loss meant high accuracy. Here is my loss function.

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=pred))
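For comparison, the accuracy being plotted is normally computed as a separate op from this loss. A minimal sketch, assuming pred holds the logits and y the one-hot labels as above:

# Accuracy only asks whether the argmax class is correct; it ignores how
# confident the predicted probabilities are, unlike the cross-entropy above.
correct = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))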
Solution

That is because loss and accuracy are two totally different things (well, at least logically)!

Consider an example where you have defined loss as:

loss = (1-accuracy)

In this case, when you try to minimize loss, accuracy increases automatically.

Now consider another example where you define loss as:

loss = average(prediction_probabilities)

Though it does not make much sense, it is technically still a valid loss function, and your weights are still tuned to minimize it.

But as you can see, in this case there is no relation between loss and accuracy, so you cannot expect both to increase or decrease at the same time.
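A small numeric illustration of that point (a sketch using NumPy, with made-up probabilities for a 13-class problem): the cross-entropy loss can keep dropping while the argmax prediction, and therefore the accuracy, does not change at all.

import numpy as np

def cross_entropy(probs, true_idx):
    # Mean negative log-probability assigned to the true class.
    return float(np.mean([-np.log(p[t]) for p, t in zip(probs, true_idx)]))

def accuracy(probs, true_idx):
    # Fraction of examples whose argmax class matches the true class.
    return float(np.mean([np.argmax(p) == t for p, t in zip(probs, true_idx)]))

true_idx = [0]  # the true class of our single example

# Early in training: very confident in the wrong class (class 1).
early = [np.array([0.01] + [0.88] + [0.01] * 11)]
# Later in training: less confident in the wrong class, but argmax is still class 1.
late = [np.array([0.25] + [0.40] + [0.35 / 11] * 11)]

print(cross_entropy(early, true_idx), accuracy(early, true_idx))  # ~4.61, 0.0
print(cross_entropy(late, true_idx), accuracy(late, true_idx))    # ~1.39, 0.0

The loss falls from about 4.61 to about 1.39, yet the accuracy stays at 0 because the predicted class never changes.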

Note: loss is what always gets minimized (thus your loss decreases after each iteration)!

PS: Please update your question with the loss function you are trying to minimize.


This concludes the article on why a convolutional neural network can have low loss but low accuracy. We hope the answer above is helpful.
