
0 Overview

In my view, this paper is an improvement on the HED network (holistically-nested edge detection) from CVPR 2015, and the RCF paper compares itself against HED almost everywhere.

From the previous post, you may remember that the HED model had a figure like this:
[Figure: feature maps of HED's five side outputs]

It shows the feature maps of HED's five side outputs. The figure below is the corresponding one from the RCF paper:

[Figure: side-output feature maps from the RCF paper, labeled conv3_1, conv3_2, ...]

The difference between these two figures is a good way to see what RCF improves over HED; take a moment to compare them.

The answer:

  • HED's figure shows a leopard, while RCF's shows two small birds (just kidding)
  • HED's figure shows the feature maps of the side outputs, while RCF's shows conv3_1 and conv3_2; this suggests that RCF takes the feature map after every single convolution as part of a side output

Exactly. HED picks 5 side outputs, each taken from the feature map of the conv layer right before a pooling layer; RCF instead taps the output of every conv layer. In other words, among the features feeding the final side outputs, there can be more than one at the same spatial size.
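To make the difference in tap points concrete, here is a schematic sketch (the `features` dict and the `taps` names are my own illustration, not code from either paper):

# Feature maps produced by the 13 VGG16 conv layers, keyed by name -> stage.
features = {
    "conv1_1": 1, "conv1_2": 1,                 # stage 1 (full resolution)
    "conv2_1": 2, "conv2_2": 2,                 # stage 2 (1/2 resolution)
    "conv3_1": 3, "conv3_2": 3, "conv3_3": 3,
    "conv4_1": 4, "conv4_2": 4, "conv4_3": 4,
    "conv5_1": 5, "conv5_2": 5, "conv5_3": 5,
}

# HED: one tap per stage, the last conv before each pooling layer.
hed_taps = ["conv1_2", "conv2_2", "conv3_3", "conv4_3", "conv5_3"]

# RCF: every conv layer is tapped; each stage's taps are fused into one
# side output, so each side output sees several same-size feature maps.
rcf_taps = {stage: [name for name, s in features.items() if s == stage]
            for stage in range(1, 6)}
print(rcf_taps[3])   # ['conv3_1', 'conv3_2', 'conv3_3']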

If that is still not clear, the model architecture section below spells it out.

1 Model Architecture

RCF's backbone is VGG16:
[Figure: RCF architecture, a VGG16 backbone with a 1x1 side branch on every conv layer]

From the figure we can see:

  • The backbone is divided into stage1 through stage5: stage1 and stage2 have two conv layers each, stage3 through stage5 have three each, for 13 conv layers in total. The output of every conv layer is additionally passed through a 1x1 convolution to reduce the channel count, which is why the figure contains so many 21-channel layers.
  • The 21-channel feature maps within the same stage are then fused; the paper (and the code below) does this with an element-wise sum rather than a channel concatenation. A 1x1 convolution then reduces the channels to 1, and a sigmoid produces one of RCF's side outputs. A sketch of one such stage branch follows this list.
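As promised, a minimal sketch of one stage's side-output branch; the class name `StageSideBranch` and its interface are invented here for illustration, and the real reference is the full model code in section 3.1:

import torch
import torch.nn as nn

class StageSideBranch(nn.Module):
    """One RCF stage branch: per-conv 1x1 reductions to 21 channels,
    element-wise sum, then a 1x1 conv down to a single score map."""
    def __init__(self, in_channels, num_convs, mid_channels=21):
        super().__init__()
        self.reduce = nn.ModuleList(
            [nn.Conv2d(in_channels, mid_channels, 1) for _ in range(num_convs)])
        self.score = nn.Conv2d(mid_channels, 1, 1)

    def forward(self, feats):   # feats: list of same-size stage feature maps
        fused = sum(r(f) for r, f in zip(self.reduce, feats))  # eltwise sum
        return self.score(fused)        # 1-channel side output (pre-sigmoid)

# e.g. stage 3 of VGG16: three 256-channel feature maps
branch = StageSideBranch(256, 3)
feats = [torch.randn(1, 256, 56, 56) for _ in range(3)]
print(branch(feats).shape)              # torch.Size([1, 1, 56, 56])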

2 Loss Function

The loss function here is quite similar to HED's:

The total loss sums the per-pixel loss over every side output plus the fused output:

\[
L(W) = \sum_{i=1}^{|I|} \left( \sum_{k=1}^{K} l\big(X_i^{(k)}; W\big) + l\big(X_i^{\mathrm{fuse}}; W\big) \right)
\]

where \(|I|\) is the number of pixels and \(K\) the number of side outputs (5 here).

Looked at as a whole, the per-pixel loss is still a (class-balanced) binary cross-entropy:

\[
l(X_i; W) =
\begin{cases}
-\alpha \cdot \log\big(1 - P(X_i; W)\big) & \text{if } y_i = 0 \\
0 & \text{if } 0 < y_i \le \eta \\
-\beta \cdot \log P(X_i; W) & \text{otherwise}
\end{cases}
\qquad
\alpha = \lambda \cdot \frac{|Y^+|}{|Y^+| + |Y^-|}, \quad
\beta = \frac{|Y^-|}{|Y^+| + |Y^-|}
\]

where \(P(X_i; W)\) is the sigmoid output at pixel \(i\).

Here \(|Y^-|\) is the number of negative pixels and \(|Y^+|\) the number of positive pixels. In contour detection the positive samples are usually the scarce ones, so \(\alpha\) is small; the first case of the loss, \(y_i = 0\) (the non-contour pixels), therefore gets a small weight, which counteracts the class imbalance.
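A quick numeric example, with made-up pixel counts, shows how the two weights play out:

# assumed counts for illustration only
num_pos, num_neg = 5_000, 95_000
lam = 1.1
alpha = lam * num_pos / (num_pos + num_neg)   # weight on negative pixels
beta = num_neg / (num_pos + num_neg)          # weight on positive pixels
print(alpha, beta)   # ~0.055 and 0.95: abundant negatives weighted ~17x less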

The loss has two constants. One is \(\lambda\), the weighting constant, 1.1 by default; the other is \(\eta\). The paper describes \(\eta\) as follows:

The gist: datasets are typically annotated by several people. Annotators perceive things differently, yet their contour annotations of the same image are largely consistent, so the annotations are averaged into a per-pixel ground-truth value in \([0, 1]\). Pixels whose value exceeds \(\eta\) are treated as positive, pixels with value 0 as negative, and the ambiguous pixels in between are ignored by the loss (the middle case of the formula above). I did not handle this myself when reproducing the paper, and the reimplementations I found on GitHub mostly apply the cross-entropy directly, although the loss code below does zero out the weight of pixels labeled 2, which is exactly this ignore mechanism. So treat this as a perhaps-superfluous explanation of what \(\eta\) means in the paper.
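For completeness, here is a sketch of how such a three-way label is typically prepared; the function name and the choice \(\eta = 0.5\) are assumptions for illustration, matching the 0/1/2 encoding the loss code in section 3.2 expects:

import torch

def make_rcf_label(consensus, eta=0.5):
    """consensus: float tensor in [0, 1], the fraction of annotators who
    marked each pixel as contour. Returns 0 (negative), 1 (positive) or
    2 (ambiguous, to be ignored by the loss)."""
    label = torch.full_like(consensus, 2.0)   # default: ambiguous
    label[consensus == 0] = 0.0               # nobody marked it -> negative
    label[consensus > eta] = 1.0              # enough consensus -> positive
    return label

consensus = torch.tensor([0.0, 0.2, 0.6, 1.0])
print(make_rcf_label(consensus))              # tensor([0., 2., 1., 1.])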

3 PyTorch Code

For RCF, the two key pieces are the model definition and the loss function. Both are shown below to make the discussion above concrete.

3.1 Model

The upsampling in the code below is written in a somewhat old-fashioned way: instead of an `nn.ConvTranspose2d` module, it calls `conv_transpose2d` directly with fixed (non-learned) bilinear kernels. This PyTorch version, found online, is presumably fairly old, but that does not get in the way of learning RCF from the code.

import torch
import torch.nn as nn
import torch.nn.functional as F

# make_bilinear_weights and crop are helper functions from the referenced
# RCF-pytorch repo; sketches of both are given after this block.

class RCF(nn.Module):
    def __init__(self):
        super(RCF, self).__init__()
        # the "lr a b decay c d" comments are the per-layer learning-rate and
        # weight-decay multipliers carried over from the original Caffe version
        #lr 1 2 decay 1 0
        self.conv1_1 = nn.Conv2d(3, 64, 3, padding=1)
        self.conv1_2 = nn.Conv2d(64, 64, 3, padding=1)

        self.conv2_1 = nn.Conv2d(64, 128, 3, padding=1)
        self.conv2_2 = nn.Conv2d(128, 128, 3, padding=1)

        self.conv3_1 = nn.Conv2d(128, 256, 3, padding=1)
        self.conv3_2 = nn.Conv2d(256, 256, 3, padding=1)
        self.conv3_3 = nn.Conv2d(256, 256, 3, padding=1)

        self.conv4_1 = nn.Conv2d(256, 512, 3, padding=1)
        self.conv4_2 = nn.Conv2d(512, 512, 3, padding=1)
        self.conv4_3 = nn.Conv2d(512, 512, 3, padding=1)

        # stage 5 uses dilation=2: together with the stride-1 pool4 below, it
        # keeps stage 5 at 1/8 resolution while preserving the receptive field
        self.conv5_1 = nn.Conv2d(512, 512, kernel_size=3,
                        stride=1, padding=2, dilation=2)
        self.conv5_2 = nn.Conv2d(512, 512, kernel_size=3,
                        stride=1, padding=2, dilation=2)
        self.conv5_3 = nn.Conv2d(512, 512, kernel_size=3,
                        stride=1, padding=2, dilation=2)
        self.relu = nn.ReLU()
        self.maxpool = nn.MaxPool2d(2, stride=2, ceil_mode=True)
        self.maxpool4 = nn.MaxPool2d(2, stride=1, ceil_mode=True)  # stride 1: no downsampling after stage 4


        #lr 0.1 0.2 decay 1 0
        # 1x1 convs that reduce every backbone feature map to 21 channels
        self.conv1_1_down = nn.Conv2d(64, 21, 1, padding=0)
        self.conv1_2_down = nn.Conv2d(64, 21, 1, padding=0)

        self.conv2_1_down = nn.Conv2d(128, 21, 1, padding=0)
        self.conv2_2_down = nn.Conv2d(128, 21, 1, padding=0)

        self.conv3_1_down = nn.Conv2d(256, 21, 1, padding=0)
        self.conv3_2_down = nn.Conv2d(256, 21, 1, padding=0)
        self.conv3_3_down = nn.Conv2d(256, 21, 1, padding=0)

        self.conv4_1_down = nn.Conv2d(512, 21, 1, padding=0)
        self.conv4_2_down = nn.Conv2d(512, 21, 1, padding=0)
        self.conv4_3_down = nn.Conv2d(512, 21, 1, padding=0)

        self.conv5_1_down = nn.Conv2d(512, 21, 1, padding=0)
        self.conv5_2_down = nn.Conv2d(512, 21, 1, padding=0)
        self.conv5_3_down = nn.Conv2d(512, 21, 1, padding=0)

        #lr 0.01 0.02 decay 1 0
        # 1x1 convs that turn each stage's fused 21-channel features into a
        # single score map: these produce the five side outputs
        self.score_dsn1 = nn.Conv2d(21, 1, 1)
        self.score_dsn2 = nn.Conv2d(21, 1, 1)
        self.score_dsn3 = nn.Conv2d(21, 1, 1)
        self.score_dsn4 = nn.Conv2d(21, 1, 1)
        self.score_dsn5 = nn.Conv2d(21, 1, 1)
        #lr 0.001 0.002 decay 1 0
        # fuses the five cropped, upsampled side outputs into the final edge map
        self.score_final = nn.Conv2d(5, 1, 1)

    def forward(self, x):
        # VGG
        img_H, img_W = x.shape[2], x.shape[3]
        conv1_1 = self.relu(self.conv1_1(x))
        conv1_2 = self.relu(self.conv1_2(conv1_1))
        pool1   = self.maxpool(conv1_2)

        conv2_1 = self.relu(self.conv2_1(pool1))
        conv2_2 = self.relu(self.conv2_2(conv2_1))
        pool2   = self.maxpool(conv2_2)

        conv3_1 = self.relu(self.conv3_1(pool2))
        conv3_2 = self.relu(self.conv3_2(conv3_1))
        conv3_3 = self.relu(self.conv3_3(conv3_2))
        pool3   = self.maxpool(conv3_3)

        conv4_1 = self.relu(self.conv4_1(pool3))
        conv4_2 = self.relu(self.conv4_2(conv4_1))
        conv4_3 = self.relu(self.conv4_3(conv4_2))
        pool4   = self.maxpool4(conv4_3)

        conv5_1 = self.relu(self.conv5_1(pool4))
        conv5_2 = self.relu(self.conv5_2(conv5_1))
        conv5_3 = self.relu(self.conv5_3(conv5_2))

        conv1_1_down = self.conv1_1_down(conv1_1)
        conv1_2_down = self.conv1_2_down(conv1_2)
        conv2_1_down = self.conv2_1_down(conv2_1)
        conv2_2_down = self.conv2_2_down(conv2_2)
        conv3_1_down = self.conv3_1_down(conv3_1)
        conv3_2_down = self.conv3_2_down(conv3_2)
        conv3_3_down = self.conv3_3_down(conv3_3)
        conv4_1_down = self.conv4_1_down(conv4_1)
        conv4_2_down = self.conv4_2_down(conv4_2)
        conv4_3_down = self.conv4_3_down(conv4_3)
        conv5_1_down = self.conv5_1_down(conv5_1)
        conv5_2_down = self.conv5_2_down(conv5_2)
        conv5_3_down = self.conv5_3_down(conv5_3)

        # element-wise sum of each stage's 21-channel maps, then a 1x1 conv
        # down to a single score map per stage
        so1_out = self.score_dsn1(conv1_1_down + conv1_2_down)
        so2_out = self.score_dsn2(conv2_1_down + conv2_2_down)
        so3_out = self.score_dsn3(conv3_1_down + conv3_2_down + conv3_3_down)
        so4_out = self.score_dsn4(conv4_1_down + conv4_2_down + conv4_3_down)
        so5_out = self.score_dsn5(conv5_1_down + conv5_2_down + conv5_3_down)
        ## upsample with fixed bilinear kernels, then center-crop
        # note: rebuilding the weights (and calling .cuda()) on every forward
        # pass is wasteful and ties the code to the GPU; registering them as
        # buffers in __init__ would be cleaner
        weight_deconv2 = make_bilinear_weights(4, 1).cuda()
        weight_deconv3 = make_bilinear_weights(8, 1).cuda()
        weight_deconv4 = make_bilinear_weights(16, 1).cuda()
        weight_deconv5 = make_bilinear_weights(32, 1).cuda()

        upsample2 = F.conv_transpose2d(so2_out, weight_deconv2, stride=2)
        upsample3 = F.conv_transpose2d(so3_out, weight_deconv3, stride=4)
        upsample4 = F.conv_transpose2d(so4_out, weight_deconv4, stride=8)
        # stage 5 is also at 1/8 resolution (pool4 has stride 1), hence stride=8
        upsample5 = F.conv_transpose2d(so5_out, weight_deconv5, stride=8)
        ### center crop
        so1 = crop(so1_out, img_H, img_W)
        so2 = crop(upsample2, img_H, img_W)
        so3 = crop(upsample3, img_H, img_W)
        so4 = crop(upsample4, img_H, img_W)
        so5 = crop(upsample5, img_H, img_W)

        fusecat = torch.cat((so1, so2, so3, so4, so5), dim=1)
        fuse = self.score_final(fusecat)
        results = [so1, so2, so3, so4, so5, fuse]
        results = [torch.sigmoid(r) for r in results]
        return results
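`make_bilinear_weights` and `crop` are used above but not defined. Below are minimal sketches consistent with how they are used here and in the RCF-pytorch repo from the references; treat them as illustrations rather than the exact upstream code:

import torch

def make_bilinear_weights(size, num_channels):
    """A (num_channels, num_channels, size, size) transposed-conv weight
    that performs bilinear upsampling independently per channel."""
    factor = (size + 1) // 2
    center = factor - 1 if size % 2 == 1 else factor - 0.5
    og = torch.arange(size, dtype=torch.float32)
    filt1d = 1 - (og - center).abs() / factor
    filt = filt1d[:, None] * filt1d[None, :]   # separable 2D bilinear kernel
    w = torch.zeros(num_channels, num_channels, size, size)
    for i in range(num_channels):
        w[i, i] = filt                         # no cross-channel mixing
    return w

def crop(x, target_h, target_w):
    """Center-crop an NCHW tensor to (target_h, target_w)."""
    h, w = x.shape[2], x.shape[3]
    y1 = (h - target_h) // 2
    x1 = (w - target_w) // 2
    return x[:, :, y1:y1 + target_h, x1:x1 + target_w]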

3.2 Loss Function

def cross_entropy_loss_RCF(prediction, label):
    # label: 0 = negative, 1 = positive, 2 = ambiguous (annotator consensus
    # below eta); prediction: sigmoid output in (0, 1)
    label = label.long()
    mask = label.float()
    num_positive = torch.sum((mask == 1).float()).float()
    num_negative = torch.sum((mask == 0).float()).float()

    # per-pixel weights: beta = |Y-|/|Y| for the positives,
    # alpha = lambda * |Y+|/|Y| (lambda = 1.1) for the negatives,
    # and 0 for the ambiguous pixels so the loss ignores them
    mask[mask == 1] = 1.0 * num_negative / (num_positive + num_negative)
    mask[mask == 0] = 1.1 * num_positive / (num_positive + num_negative)
    mask[mask == 2] = 0
    # clamp the target to {0, 1}: the label-2 pixels already have weight 0,
    # and newer PyTorch requires BCE targets to lie in [0, 1]
    target = (label == 1).float()
    cost = torch.nn.functional.binary_cross_entropy(
            prediction.float(), target, weight=mask, reduction='none')
    return torch.sum(cost)
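Matching the total-loss formula \(L(W)\) from section 2, training simply sums this loss over the five side outputs and the fused output. A minimal sketch (the helper name is mine):

def rcf_total_loss(outputs, labels):
    # outputs: the list [so1, ..., so5, fuse] returned by RCF.forward;
    # labels: per-pixel values in {0, 1, 2} as described in section 2
    return sum(cross_entropy_loss_RCF(o, labels) for o in outputs)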

References:

  1. https://blog.csdn.net/a8039974/article/details/85696282
  2. https://gitee.com/HEART1/RCF-pytorch/blob/master/functions.py
  3. https://openaccess.thecvf.com/content_cvpr_2017/papers/Liu_Richer_Convolutional_Features_CVPR_2017_paper.pdf