1. Algorithm Principles

A twin (Siamese) neural network is a deep learning architecture built from two or more identical subnetworks that share the same structure, parameters, and weights. Twin networks are typically used for tasks that compare two inputs and estimate how related they are; common applications include face recognition, signature verification, and paraphrase identification. The network takes two samples as input and outputs their embeddings in a high-dimensional feature space, and the similarity of the two samples is judged by comparing these embeddings. In the narrow sense, a twin network consists of two structurally identical, weight-sharing subnetworks joined together, as shown in the figure below.

[Figure: twin network built from two identical, weight-sharing subnetworks]

In the broad sense, a twin network (also called a pseudo-siamese network) can be assembled from two arbitrary subnetworks, for example a convolutional network paired with a recurrent network, as shown in the figure below.

[Figure: pseudo-siamese network built from two different subnetworks]

In short, a twin network measures how similar two inputs are. The two inputs are fed into the two subnetworks, which map them into a new feature space; the similarity of the two inputs is then evaluated by computing a loss over the two representations. Twin networks perform well on such tasks because the shared weights mean fewer parameters to learn during training, and they can produce good results from relatively little training data.
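To make the idea concrete, here is a toy numerical sketch; every name in it is hypothetical and independent of the MATLAB script below. One shared weight matrix embeds both inputs, and the distance between the two embeddings is squashed into a similarity score.

% Toy sketch of the twin idea (hypothetical names, illustration only)
W = randn(64,105*105);                   % ONE weight matrix shared by both branches
embed = @(x) tanh(W*x(:));               % shared embedding function
x1 = rand(105,105); x2 = rand(105,105);  % two input images
d = abs(embed(x1) - embed(x2));          % element-wise distance between the embeddings
v = randn(1,64);                         % weights of a simple scoring layer
score = 1/(1 + exp(-v*d));               % sigmoid: near 1 = similar, near 0 = dissimilar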

2. Code Implementation

%% matlab学习之家
%% Twin (Siamese) neural network
clc
clear
%% Load the training set
dataFolderTrain = "D:\S\孪生神经网络\images_background"; % change to your own path
imdsTrain = imageDatastore(dataFolderTrain, ...
    IncludeSubfolders=true, ...
    LabelSource="none");

% Build labels of the form alphabet-character from the last two folder names,
% e.g. a file under ...\Greek\character05\ gets the label Greek-character05
files = imdsTrain.Files;
parts = split(files,filesep);
labels = join(parts(:,(end-2):(end-1)),"-");
imdsTrain.Labels = categorical(labels);

%% Display a few random training images
idx = randperm(numel(imdsTrain.Files),8);

figure
for i = 1:numel(idx)
    subplot(4,2,i)
    imshow(readimage(imdsTrain,idx(i)))
    title(imdsTrain.Labels(idx(i)),Interpreter="none");
end

% Get a batch of image pairs (label 1 = same character, 0 = different characters)
batchSize = 10;
[pairImage1,pairImage2,pairLabel] = getTwinBatch(imdsTrain,batchSize);

figure   % new figure so the pair montage does not overwrite the grid above
for i = 1:batchSize
    if pairLabel(i) == 1
        s = "similar";
    else
        s = "dissimilar";
    end
    subplot(2,5,i)
    imshow([pairImage1(:,:,:,i) pairImage2(:,:,:,i)]);
    title(s)
end
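The script relies on a helper function getTwinBatch that is not listed in the post. Below is a minimal sketch of what such a helper could look like, assuming it returns two 105x105x1xN image arrays plus a 1xN label vector in which 1 marks pairs drawn from the same character class and 0 marks pairs from different classes; the actual helper may balance and sample the pairs differently. Place it at the end of the script file or in its own getTwinBatch.m.

function [X1,X2,pairLabels] = getTwinBatch(imds,miniBatchSize)
% Sketch of the pair-batch helper: alternate similar and dissimilar pairs.
X1 = zeros(105,105,1,miniBatchSize,"single");
X2 = zeros(105,105,1,miniBatchSize,"single");
pairLabels = zeros(1,miniBatchSize);
classes = unique(imds.Labels);
for i = 1:miniBatchSize
    pairLabels(i) = mod(i,2);                        % odd i -> similar, even i -> dissimilar
    if pairLabels(i) == 1
        c = classes(randi(numel(classes)));          % one class, two different images
        idx = find(imds.Labels == c);
        pair = idx(randperm(numel(idx),2));
    else
        c = classes(randperm(numel(classes),2));     % two different classes
        idxA = find(imds.Labels == c(1));
        idxB = find(imds.Labels == c(2));
        pair = [idxA(randi(numel(idxA))) idxB(randi(numel(idxB)))];
    end
    X1(:,:,:,i) = single(readimage(imds,pair(1)));
    X2(:,:,:,i) = single(readimage(imds,pair(2)));
end
end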
%% Define the twin subnetwork architecture
layers = [
    imageInputLayer([105 105 1],Normalization="none")
    convolution2dLayer(10,64,WeightsInitializer="narrow-normal",BiasInitializer="narrow-normal")
    reluLayer
    maxPooling2dLayer(2,Stride=2)
    convolution2dLayer(7,128,WeightsInitializer="narrow-normal",BiasInitializer="narrow-normal")
    reluLayer
    maxPooling2dLayer(2,Stride=2)
    convolution2dLayer(4,128,WeightsInitializer="narrow-normal",BiasInitializer="narrow-normal")
    reluLayer
    maxPooling2dLayer(2,Stride=2)
    convolution2dLayer(5,256,WeightsInitializer="narrow-normal",BiasInitializer="narrow-normal")
    reluLayer
    fullyConnectedLayer(4096,WeightsInitializer="narrow-normal",BiasInitializer="narrow-normal")];

net = dlnetwork(layers);   % shared subnetwork: the same weights are applied to both inputs

% Weights and bias of the final fully connected layer that maps the feature
% difference of the two subnetwork outputs to a single similarity score
fcWeights = dlarray(0.01*randn(1,4096));
fcBias = dlarray(0.01*randn(1,1));

fcParams = struct(...
    "FcWeights",fcWeights,...
    "FcBias",fcBias);
%% Specify training options and hyperparameters
numIterations = 10000;
miniBatchSize = 180;
learningRate = 6e-5;
gradDecay = 0.9;
gradDecaySq = 0.99;
executionEnvironment = "auto";   % use a GPU if one is available, otherwise the CPU

if canUseGPU
    gpu = gpuDevice;
    disp(gpu.Name + " GPU detected and available for training.")
end

% Initialize the Adam trailing averages for the subnetwork and the fully connected parameters
trailingAvgSubnet = [];
trailingAvgSqSubnet = [];
trailingAvgParams = [];
trailingAvgSqParams = [];

% Track the training loss with a progress monitor
monitor = trainingProgressMonitor(Metrics="Loss",XLabel="Iteration",Info="ExecutionEnvironment");
if canUseGPU
    updateInfo(monitor,ExecutionEnvironment=gpu.Name + " GPU")
else
    updateInfo(monitor,ExecutionEnvironment="CPU")
end

start = tic;
iteration = 0;
%% Train the model with a custom training loop: loop over the training data and update the network parameters at each iteration
while iteration < numIterations && ~monitor.Stop

    iteration = iteration + 1;

    % Extract a mini-batch of image pairs and pair labels
    [X1,X2,pairLabels] = getTwinBatch(imdsTrain,miniBatchSize);

    % Convert to dlarray with format "SSCB" (spatial, spatial, channel, batch)
    X1 = dlarray(X1,"SSCB");
    X2 = dlarray(X2,"SSCB");

    % If training on a GPU, convert the data to gpuArray
    if (executionEnvironment == "auto" && canUseGPU) || executionEnvironment == "gpu"
        X1 = gpuArray(X1);
        X2 = gpuArray(X2);
    end

    % Evaluate the model loss and gradients using dlfeval and modelLoss
    [loss,gradientsSubnet,gradientsParams] = dlfeval(@modelLoss,net,fcParams,X1,X2,pairLabels);

    % Update the twin subnetwork parameters
    [net,trailingAvgSubnet,trailingAvgSqSubnet] = adamupdate(net,gradientsSubnet, ...
        trailingAvgSubnet,trailingAvgSqSubnet,iteration,learningRate,gradDecay,gradDecaySq);

    % Update the fully connected layer parameters
    [fcParams,trailingAvgParams,trailingAvgSqParams] = adamupdate(fcParams,gradientsParams, ...
        trailingAvgParams,trailingAvgSqParams,iteration,learningRate,gradDecay,gradDecaySq);

    % Record the loss and update the progress bar
    recordMetrics(monitor,iteration,Loss=loss);
    monitor.Progress = 100 * iteration/numIterations;

end
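The modelLoss function evaluated through dlfeval is also not listed in the post. Below is a sketch consistent with how it is called in the loop above: it runs the twin forward pass (the forwardTwin sketch shown earlier), computes a binary cross-entropy loss against the pair labels, and returns the gradients for both the subnetwork and the fully connected parameters.

function [loss,gradientsSubnet,gradientsParams] = modelLoss(net,fcParams,X1,X2,pairLabels)
% Sketch of the custom loss used by dlfeval/adamupdate above.
Y = forwardTwin(net,fcParams,X1,X2);            % predicted similarity scores in [0,1]
Y = max(min(Y,1-1e-7),1e-7);                    % clamp to avoid log(0)
loss = -mean(pairLabels.*log(Y) + (1-pairLabels).*log(1-Y));   % binary cross-entropy
[gradientsSubnet,gradientsParams] = dlgradient(loss,net.Learnables,fcParams);
end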
%% Load the test set
dataFolderTest = "D:\S\孪生神经网络\images_evaluation"; % change to your own path
imdsTest = imageDatastore(dataFolderTest, ...
    IncludeSubfolders=true, ...
    LabelSource="none");

% Build test labels from the last two folder names, as for the training set
files = imdsTest.Files;
parts = split(files,filesep);
labels = join(parts(:,(end-2):(end-1)),"_");
imdsTest.Labels = categorical(labels);

% Evaluate accuracy over five random batches of 150 test pairs
numClasses = numel(unique(imdsTest.Labels));
accuracy = zeros(1,5);
accuracyBatchSize = 150;

for i = 1:5
    % Extract a batch of test image pairs and pair labels
    [X1,X2,pairLabelsAcc] = getTwinBatch(imdsTest,accuracyBatchSize);

    % Convert to dlarray with format "SSCB"
    X1 = dlarray(X1,"SSCB");
    X2 = dlarray(X2,"SSCB");

    % If using a GPU, convert the data to gpuArray
    if (executionEnvironment == "auto" && canUseGPU) || executionEnvironment == "gpu"
        X1 = gpuArray(X1);
        X2 = gpuArray(X2);
    end

    % Predict similarity scores and threshold them at 0.5
    Y = predictTwin(net,fcParams,X1,X2);
    Y = gather(extractdata(Y));
    Y = round(Y);

    % Fraction of correctly predicted pairs in this batch
    accuracy(i) = sum(Y == pairLabelsAcc)/accuracyBatchSize;
end

averageAccuracy = mean(accuracy)*100   % average test accuracy in percent (displayed)

%% Predict a small batch of test pairs for visualization
testBatchSize = 10;

[XTest1,XTest2,pairLabelsTest] = getTwinBatch(imdsTest,testBatchSize);

XTest1 = dlarray(XTest1,"SSCB");
XTest2 = dlarray(XTest2,"SSCB");

if (executionEnvironment == "auto" && canUseGPU) || executionEnvironment == "gpu"
    XTest1 = gpuArray(XTest1);
    XTest2 = gpuArray(XTest2);
end

YScore = predictTwin(net,fcParams,XTest1,XTest2);
YScore = gather(extractdata(YScore));
YPred = round(YScore);            % predicted pair label (1 = similar, 0 = dissimilar)
XTest1 = extractdata(XTest1);
XTest2 = extractdata(XTest2);

f = figure;
tiledlayout(2,5);
f.Position(3) = 2*f.Position(3);  % widen the figure for the side-by-side pairs

predLabels = categorical(YPred,[0 1],["dissimilar" "similar"]);
targetLabels = categorical(pairLabelsTest,[0 1],["dissimilar","similar"]);
%% Plot the test pairs with their target labels, predicted labels, and scores
for i = 1:numel(pairLabelsTest)
    nexttile
    imshow([XTest1(:,:,:,i) XTest2(:,:,:,i)]);

    title( ...
        "Target: " + string(targetLabels(i)) + newline + ...
        "Predicted: " + string(predLabels(i)) + newline + ...
        "Score: " + YScore(i))
end
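Likewise, predictTwin is not listed in the post. A sketch that mirrors the forwardTwin sketch above but uses predict, so that inference-mode behavior is applied when scoring the test pairs:

function Y = predictTwin(net,fcParams,X1,X2)
% Sketch of the prediction helper used for the test set above.
F1 = predict(net,X1);                                   % inference-mode features, first images
F2 = predict(net,X2);                                   % inference-mode features, second images
Y = abs(F1 - F2);
Y = fullyconnect(Y,fcParams.FcWeights,fcParams.FcBias);
Y = sigmoid(Y);
end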

3. Simulation Results

[Result figures omitted: the training-progress loss plot and the test-pair predictions produced by the script above]
