Convolution and Pooling (A Walkthrough of the Stanford Exercise)

Preface

The report was written in LaTeX and I did not know how to upload the PDF, so I converted it to images and posted those here; the .tex source is included below, and anyone who wants to reuse it as a report template is welcome to. By the way, when I pasted the LaTeX some parts got interpreted as markup; if you know how to make a block render as plain text in Markdown, please leave a comment.

Contents

(15 page images of the rendered report: ConvolutionandPooling_1.png through ConvolutionandPooling_15.png.)

The .tex file

\documentclass[a4paper, 11pt]{article}

%%%%%% Packages %%%%%%
\usepackage{CJKutf8}
\usepackage{graphicx}
\usepackage[unicode]{hyperref}
\usepackage{xcolor}
\usepackage{cite}
\usepackage{indentfirst}
\usepackage{listings}
\usepackage[framed,numbered,autolinebreaks,useliterate]{mcode}
\lstset{language=Matlab}        % the code listings are MATLAB
\lstset{breaklines}             % automatically wrap long code lines
\lstset{extendedchars=false}    % avoids CJK characters disappearing from headings/headers when a listing crosses a page


%%%%%% Chinese font-size commands %%%%%%
\newcommand{\chuhao}{\fontsize{42pt}{\baselineskip}\selectfont}
\newcommand{\xiaochuhao}{\fontsize{36pt}{\baselineskip}\selectfont}
\newcommand{\yihao}{\fontsize{28pt}{\baselineskip}\selectfont}
\newcommand{\erhao}{\fontsize{21pt}{\baselineskip}\selectfont}
\newcommand{\xiaoerhao}{\fontsize{18pt}{\baselineskip}\selectfont}
\newcommand{\sanhao}{\fontsize{15.75pt}{\baselineskip}\selectfont}
\newcommand{\sihao}{\fontsize{14pt}{\baselineskip}\selectfont}
\newcommand{\xiaosihao}{\fontsize{12pt}{\baselineskip}\selectfont}
\newcommand{\wuhao}{\fontsize{10.5pt}{\baselineskip}\selectfont}
\newcommand{\xiaowuhao}{\fontsize{9pt}{\baselineskip}\selectfont}
\newcommand{\liuhao}{\fontsize{7.875pt}{\baselineskip}\selectfont}
\newcommand{\qihao}{\fontsize{5.25pt}{\baselineskip}\selectfont}

%%%% section formatting %%%%
\makeatletter
\renewcommand\section{\@startsection{section}{1}{\z@}%
{-1.5ex \@plus -.5ex \@minus -.2ex}%
{.5ex \@plus .1ex}%
{\normalfont\sihao\CJKfamily{hei}}}
\makeatother

%%%% subsection formatting %%%%
\makeatletter
\renewcommand\subsection{\@startsection{subsection}{1}{\z@}%
{-1.25ex \@plus -.5ex \@minus -.2ex}%
{.4ex \@plus .1ex}%
{\normalfont\xiaosihao\CJKfamily{hei}}}
\makeatother

%%%% subsubsection formatting %%%%
\makeatletter
\renewcommand\subsubsection{\@startsection{subsubsection}{1}{\z@}%
{-1ex \@plus -.5ex \@minus -.2ex}%
{.3ex \@plus .1ex}%
{\normalfont\xiaosihao\CJKfamily{hei}}}
\makeatother

%%%% Indent the first line of every paragraph %%%%
\makeatletter
\let\@afterindentfalse\@afterindenttrue
\@afterindenttrue
\makeatother
\setlength{\parindent}{2em}  % two-character indent, as is customary for Chinese text


%%%% Redefine the page margins to match Chinese publication conventions %%%%
\addtolength{\topmargin}{-54pt}
\setlength{\oddsidemargin}{0.63cm}  % 3.17cm - 1 inch
\setlength{\evensidemargin}{\oddsidemargin}
\setlength{\textwidth}{14.66cm}
\setlength{\textheight}{24.00cm}    % 24.62

%%%% Line spacing and paragraph spacing %%%%
\linespread{1.4}
% \setlength{\parskip}{1ex}
\setlength{\parskip}{0.5\baselineskip}


%%%% Begin document %%%%
\begin{document}
\begin{CJK}{UTF8}{gbsn}

%%%% Theorem-like environments %%%%
\newtheorem{example}{例}             % numbered globally
\newtheorem{algorithm}{算法}
\newtheorem{theorem}{定理}[section]  % numbered within each section
\newtheorem{definition}{定义}
\newtheorem{axiom}{公理}
\newtheorem{property}{性质}
\newtheorem{proposition}{命题}
\newtheorem{lemma}{引理}
\newtheorem{corollary}{推论}
\newtheorem{remark}{注解}
\newtheorem{condition}{条件}
\newtheorem{conclusion}{结论}
\newtheorem{assumption}{假设}

%%%% Caption renaming %%%%
\renewcommand{\contentsname}{目录}  % Contents -> 目录
\renewcommand{\abstractname}{摘要}  % Abstract -> 摘要
\renewcommand{\refname}{参考文献}   % References -> 参考文献
\renewcommand{\indexname}{索引}
\renewcommand{\figurename}{图}
\renewcommand{\tablename}{表}
\renewcommand{\appendixname}{附录}
\renewcommand{\algorithm}{算法}


%%%% Title, author, affiliation, email, etc. %%%%
\title{\textbf{卷积与池化解析报告\\Report of Convolution and Pooling}}
\author{梁誉译\footnote{Email: lwcoder@outlook.com, student ID: 2014141463106}\\[2ex]
\xiaosihao 四川大学软件学院\\[2ex]
}
\date{November 2016}





%%%% Main text starts here %%%%
\maketitle

\tableofcontents
\newpage


\section{Question}
In this exercise we first complete the convolution and pooling code, then pass the dataset through the convolution and pooling layers and use the resulting features to train a softmax classifier that distinguishes the four object classes in the dataset (airplane, car, cat, dog). The exercise follows the Stanford tutorial page Exercise: Convolution and Pooling.

\section{Data}
There are two sets of data: a subset of the STL-10 dataset provided by UFLDL, and STL10Features, the features learned in the previous exercise.


\section{Algorithm}
When working with large images (for example $96\times 96$), keeping the old fully connected network architecture makes the number of parameters enormous, and the training time required for that many parameters is essentially unacceptable. A convolutional neural network is a kind of artificial neural network whose weight sharing and local connectivity, reminiscent of biological neural networks, reduce the model's complexity and the number of weights. Two concepts are central to convolutional networks, and they are the core of this report: convolution and pooling.
\subsection{Convolution}
Convolution cannot be discussed without first mentioning local receptive fields. As noted above, a convolutional neural network (CNN) reduces the number of parameters through weight sharing and local connectivity. Human perception is generally believed to proceed from local to global, and the spatial structure of images works the same way: nearby pixels are strongly correlated, while distant pixels are only weakly related. Each neuron therefore does not need to perceive the entire image; it only needs to respond to a local region, and higher layers combine the local responses into global information. Local connectivity is also inspired by the structure of the biological visual system: neurons in the visual cortex receive information locally (they respond only to stimuli in particular regions). Figure \ref{fig:1} shows a fully connected layer (left) versus a locally connected layer (right).
\begin{figure}[htbp]
\centering
\includegraphics[width=5in]{images/1.jpg}
\caption{Fully connected (left) vs. locally connected (right)}
\label{fig:1}
\end{figure}

The second idea is weight sharing, which can be viewed as a position-independent way of extracting features. The underlying assumption is that the statistics of one part of an image are the same as those of any other part, so a feature learned on one patch can also be applied everywhere else in the image; the same learned feature is used at every position. Because the weights are shared, the number of network parameters drops sharply and training becomes far more efficient.
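To get a feel for the savings (illustrative numbers only, not taken from the exercise): with a $96\times 96$ grayscale image and 400 hidden units, a fully connected layer needs
\[
96 \times 96 \times 400 = 3\,686\,400
\]
weights, whereas 400 shared $8\times 8$ kernels need only $400\times 8\times 8 = 25\,600$ weights, independent of the image size.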

\begin{figure}[htbp]
\centering
\includegraphics[width=5in]{images/2.png}
\caption{Illustration of the convolution process}
\label{fig:2}
\end{figure}




\subsection{Pooling}
\indent Pooling is a form of downsampling, illustrated in Figure \ref{fig:3}. After convolution has extracted image features, those features are used for classification. One could train a classifier on all of the extracted features directly, but this usually leads to a huge amount of computation. After obtaining the convolved features we therefore reduce their dimensionality by pooling: the convolved feature map is partitioned into several disjoint $n\times n$ regions, and the maximum (or mean) value of each region represents the pooled feature. A minimal sketch of mean pooling follows the figure.

\begin{figure}[htbp]
\centering
\includegraphics[width=5in]{images/3.png}
\caption{Illustration of the pooling process}
\label{fig:3}
\end{figure}
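A concrete illustration of mean pooling over disjoint blocks (a minimal sketch with a toy $6\times 6$ feature map and $3\times 3$ regions; the exercise's own implementation appears later in cnnPool.m):
\begin{lstlisting}
% Sketch: mean pooling of a feature map over disjoint n-by-n regions.
fm = rand(6, 6);              % toy convolved feature map
n  = 3;                       % pooling region size
pooled = zeros(size(fm) / n); % 2 x 2 result
for r = 1:size(pooled, 1)
    for c = 1:size(pooled, 2)
        block = fm((r-1)*n+1 : r*n, (c-1)*n+1 : c*n); % one n-by-n region
        pooled(r, c) = mean(block(:));                % use max(block(:)) for max pooling instead
    end
end
\end{lstlisting}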


\subsection{Summary}
In short, convolution and pooling combine three structural ideas, local receptive fields, weight sharing, and subsampling, to obtain a degree of invariance to translation, scale, and deformation while greatly reducing the number of parameters that need to be trained.




\section{Analysis of code}

\subsection{cnnConvolve.m}

\begin{lstlisting}
function convolvedFeatures = cnnConvolve(patchDim, numFeatures, images, W, b, ZCAWhite, meanPatch)
%cnnConvolve Returns the convolution of the features given by W and b with
%the given images
%
% Parameters:
%  patchDim - patch (feature) dimension
%  numFeatures - number of features
%  images - large images to convolve with, matrix in the form
%           images(r, c, channel, image number)
%  W, b - W, b for features from the sparse autoencoder
%  ZCAWhite, meanPatch - ZCAWhitening and meanPatch matrices used for
%                        preprocessing
%
% Returns:
%  convolvedFeatures - matrix of convolved features in the form
%                      convolvedFeatures(featureNum, imageNum, imageRow, imageCol)
\end{lstlisting}
The function cnnConvolve takes the patch dimension (i.e., the size of the convolution kernel), the number of kernels (features), the images, the weight matrix W and bias b from the sparse autoencoder, and the two quantities used for preprocessing (the ZCA whitening matrix and the mean patch); it returns the convolved features.
\begin{lstlisting}
patchSize = patchDim*patchDim;
assert(numFeatures == size(W,1), 'W should have numFeatures rows');
numImages = size(images, 4);     % size of dimension 4: number of images
imageDim = size(images, 1);      % size of dimension 1: number of rows per image
imageChannels = size(images, 3); % size of dimension 3: number of channels
assert(patchSize*imageChannels == size(W,2), 'W should have patchSize*imageChannels cols');
\end{lstlisting}
This computes the kernel size and extracts the variables we need (number of images, image dimension, number of channels), with assertions as defensive programming.

\begin{lstlisting}
% Instructions:
%   Convolve every feature with every large image here to produce the 
%   numFeatures x numImages x (imageDim - patchDim + 1) x (imageDim - patchDim + 1) 
%   matrix convolvedFeatures, such that 
%   convolvedFeatures(featureNum, imageNum, imageRow, imageCol) is the
%   value of the convolved featureNum feature for the imageNum image over
%   the region (imageRow, imageCol) to (imageRow + patchDim - 1, imageCol + patchDim - 1)
%
% Expected running times: 
%   Convolving with 100 images should take less than 3 minutes 
%   Convolving with 5000 images should take around an hour
%   (So to save time when testing, you should convolve with less images, as
%   described earlier)

% -------------------- YOUR CODE HERE --------------------
% Precompute the matrices that will be used during the convolution. Recall
% that you need to take into account the whitening and mean subtraction
% steps

WT = W*ZCAWhite;           % equivalent network weights with the whitening folded in
b_mean = b - WT*meanPatch; % bias correction needed because the raw inputs are not mean-subtracted

% --------------------------------------------------------
\end{lstlisting}
Because the inputs are full-size images, every convolution has to account for the whitening and mean-subtraction preprocessing as well as the network weights; each learned hidden-unit feature then yields a slightly smaller feature map for every image. The short derivation below shows why WT and b\_mean take this form.
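Concretely, the two precomputed quantities just fold the preprocessing into the network (this re-derives the two lines above, writing $T_{\mathrm{zca}}$ for ZCAWhite and $\bar{x}$ for meanPatch): for a raw patch $x$ the autoencoder activation is
\[
\sigma\bigl(W\,T_{\mathrm{zca}}(x-\bar{x})+b\bigr)
=\sigma\bigl(\underbrace{W\,T_{\mathrm{zca}}}_{\texttt{WT}}\,x
+\underbrace{b-W\,T_{\mathrm{zca}}\,\bar{x}}_{\texttt{b\_mean}}\bigr),
\]
so precomputing \texttt{WT} and \texttt{b\_mean} lets the convolution operate directly on the raw, unwhitened image patches.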

\begin{lstlisting}
convolvedFeatures = zeros(numFeatures, numImages, imageDim - patchDim + 1, imageDim - patchDim + 1);
for imageNum = 1:numImages
  for featureNum = 1:numFeatures

    % convolution of image with feature matrix for each channel
    convolvedImage = zeros(imageDim - patchDim + 1, imageDim - patchDim + 1);
    for channel = 1:imageChannels

      % Obtain the feature (patchDim x patchDim) needed during the convolution
      % ---- YOUR CODE HERE ----
      offset = (channel-1)*patchSize;
      feature = reshape(WT(featureNum,offset+1:offset+patchSize), patchDim, patchDim); % pull out one weight patch (feature) for this channel

      % Flip the feature matrix because of the definition of convolution, as explained later
      feature = flipud(fliplr(squeeze(feature)));
      
      % Obtain the image
      im = squeeze(images(:, :, channel, imageNum)); % pull out one channel of the current image

      % Convolve "feature" with "im", adding the result to convolvedImage
      % be sure to do a "valid" convolution
      % ---- YOUR CODE HERE ----
      convolvedoneChannel = conv2(im, feature, 'valid');
      convolvedImage = convolvedImage + convolvedoneChannel; % sum the three channels: each channel acts as a separate input feature map, as in later CNN layers
      
      % ------------------------
    end
    % Subtract the bias unit (correcting for the mean subtraction as well)
    % Then, apply the sigmoid function to get the hidden activation
    % ---- YOUR CODE HERE ----
    convolvedImage = sigmoid(convolvedImage+b_mean(featureNum));
    % ------------------------
    % The convolved feature is the sum of the convolved values for all channels
    convolvedFeatures(featureNum, imageNum, :, :) = convolvedImage;
  end
end
\end{lstlisting}
The convolution proceeds as in Figure \ref{fig:2}: every image is filtered with every kernel. Each image has three RGB channels, so one kernel produces three per-channel feature maps, which are summed here into a single map. The conv2 function convolves the kernel with the current image im; passing 'valid' for the shape argument means no zero-padding at the borders (only positions where the kernel fits entirely inside the image are kept). Adding the mean-corrected bias (so the argument is $Wx+b$) and applying the sigmoid activation gives this layer's output.
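A quick way to see why the kernel is flipped (a minimal standalone sketch, not part of the exercise code): conv2 implements true convolution, so flipping the kernel beforehand turns it into the cross-correlation that the feature detector actually needs, which is exactly what filter2 computes:
\begin{lstlisting}
% Sketch: conv2 with a pre-flipped kernel equals cross-correlation (filter2).
im      = rand(10, 10);                    % toy image
feature = rand(3, 3);                      % toy kernel
a = conv2(im, rot90(feature, 2), 'valid'); % flip the kernel, then convolve
b = filter2(feature, im, 'valid');         % cross-correlation directly
disp(max(abs(a(:) - b(:))));               % agrees up to round-off error
\end{lstlisting}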


\subsection{cnnPool.m}
\begin{lstlisting}
function pooledFeatures = cnnPool(poolDim, convolvedFeatures)
%cnnPool Pools the given convolved features
%
% Parameters:
%  poolDim - dimension of pooling region
%  convolvedFeatures - convolved features to pool (as given by cnnConvolve)
%                      convolvedFeatures(featureNum, imageNum, imageRow, imageCol)
%
% Returns:
%  pooledFeatures - matrix of pooled features in the form
%                   pooledFeatures(featureNum, imageNum, poolRow, poolCol)
%     

numImages = size(convolvedFeatures, 2);    % number of images
numFeatures = size(convolvedFeatures, 1);  % number of features
convolvedDim = size(convolvedFeatures, 3); % rows of each convolved feature map
resultDim  = floor(convolvedDim / poolDim);
pooledFeatures = zeros(numFeatures, numImages, resultDim, resultDim);

% -------------------- YOUR CODE HERE --------------------
% Instructions:
%   Now pool the convolved features in regions of poolDim x poolDim,
%   to obtain the 
%   numFeatures x numImages x (convolvedDim/poolDim) x (convolvedDim/poolDim) 
%   matrix pooledFeatures, such that
%   pooledFeatures(featureNum, imageNum, poolRow, poolCol) is the 
%   value of the featureNum feature for the imageNum image pooled over the
%   corresponding (poolRow, poolCol) pooling region 
%   (see http://ufldl/wiki/index.php/Pooling )
%   
%   Use mean pooling here.
% -------------------- YOUR CODE HERE --------------------
for imageNum = 1:numImages
    for featureNum = 1:numFeatures
        for poolRow = 1:resultDim
            offsetRow = 1+(poolRow-1)*poolDim;
            for poolCol = 1:resultDim
                offsetCol = 1+(poolCol-1)*poolDim;
                patch = convolvedFeatures(featureNum,imageNum,offsetRow:offsetRow+poolDim-1,...
                    offsetCol:offsetCol+poolDim-1); % extract one pooling region
                pooledFeatures(featureNum,imageNum,poolRow,poolCol) = mean(patch(:)); % mean pooling
            end
        end
    end
end

end
\end{lstlisting}
Pooling proceeds as in Figure \ref{fig:3}: the function takes the pooling region size and the convolved feature maps, and returns the pooled features. Inside the loops each region is reduced by mean pooling via mean(patch(:)), which yields the pooled features.
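For reference, a vectorized sketch of the same mean pooling (my own alternative, not the exercise's reference solution): average with a poolDim-by-poolDim box filter, then subsample with stride poolDim. It fills pooledFeatures with the same values as the loop above.
\begin{lstlisting}
% Sketch: mean pooling via a box filter followed by strided subsampling.
avgKernel = ones(poolDim) / poolDim^2; % averaging kernel
for imageNum = 1:numImages
    for featureNum = 1:numFeatures
        fm = squeeze(convolvedFeatures(featureNum, imageNum, :, :));
        blurred = conv2(fm, avgKernel, 'valid'); % mean of every poolDim x poolDim window
        pooledFeatures(featureNum, imageNum, :, :) = ...
            blurred(1:poolDim:end, 1:poolDim:end); % keep one window per stride of poolDim
    end
end
\end{lstlisting}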

\subsection{cnnExercise.m}
\begin{lstlisting}
%%======================================================================
%% STEP 0: Initialization
%  Here we initialize some parameters used for the exercise.

imageDim = 64;         % image dimension
imageChannels = 3;     % number of channels (rgb, so 3)

patchDim = 8;          % patch dimension
numPatches = 50000;    % number of patches

visibleSize = patchDim * patchDim * imageChannels;  % number of input units ,8*8*3=192
outputSize = visibleSize;   % number of output units
hiddenSize = 400;           % number of hidden units 

epsilon = 0.1;           % epsilon for ZCA whitening

poolDim = 19;          % dimension of pooling region
\end{lstlisting}
First the parameters are initialized: data dimensions, number of channels, kernel size, number of hidden units, pooling region size, and so on.
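With these numbers (just arithmetic on the parameters above), a valid convolution of an $8\times 8$ kernel over a $64\times 64$ image yields a $57\times 57$ feature map, and pooling over $19\times 19$ regions yields a $3\times 3$ pooled map per feature:
\[
64 - 8 + 1 = 57, \qquad \left\lfloor \frac{57}{19} \right\rfloor = 3,
\]
so each image is ultimately represented by $400\times 3\times 3 = 3600$ pooled features.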

\begin{lstlisting}
%%======================================================================
%% STEP 1: Train a sparse autoencoder (with a linear decoder) to learn 
%  features from color patches. If you have completed the linear decoder
%  execise, use the features that you have obtained from that exercise, 
%  loading them into optTheta. Recall that we have to keep around the 
%  parameters used in whitening (i.e., the ZCA whitening matrix and the
%  meanPatch)

% --------------------------- YOUR CODE HERE --------------------------
% Train the sparse autoencoder and fill the following variables with 
% the optimal parameters:

optTheta =  zeros(2*hiddenSize*visibleSize+hiddenSize+visibleSize, 1); % total number of parameters of the patch-level autoencoder
ZCAWhite =  zeros(visibleSize, visibleSize);
meanPatch = zeros(visibleSize, 1);
load STL10Features.mat;
% Display and check to see that the features look good
W = reshape(optTheta(1:visibleSize * hiddenSize), hiddenSize, visibleSize);
b = optTheta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);

\end{lstlisting}
This loads the features learned earlier with the sparse autoencoder (linear decoder) and unpacks W and b from optTheta.
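The unpacking indices follow from the sizes above (again just arithmetic): optTheta holds
\[
2\times 400\times 192 + 400 + 192 = 154192
\]
parameters in total; the first $400\times 192 = 76800$ entries are reshaped into the $400\times 192$ matrix $W$, and entries $153601$ through $154000$ form the bias vector $b$.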
\begin{lstlisting}
load stlTrainSubset.mat % loads numTrainImages, trainImages, trainLabels

%% Use only the first 8 images for testing
convImages = trainImages(:, :, :, 1:8); 

% NOTE: Implement cnnConvolve in cnnConvolve.m first! (W and b are already in matrix/vector form here)
convolvedFeatures = cnnConvolve(patchDim, hiddenSize, convImages, W, b, ZCAWhite, meanPatch);
\end{lstlisting}
This loads the STL training subset, keeps only the first 8 images for the convolution check, and runs the cnnConvolve function implemented above.


\begin{lstlisting}
%% STEP 2b: Checking your convolution
%  To ensure that you have convolved the features correctly, we have
%  provided some code to compare the results of your convolution with
%  activations from the sparse autoencoder

% For 1000 random points
for i = 1:1000    
    featureNum = randi([1, hiddenSize]);            % pick a random feature
    imageNum = randi([1, 8]);                       % pick a random image
    imageRow = randi([1, imageDim - patchDim + 1]); % pick a random top-left corner
    imageCol = randi([1, imageDim - patchDim + 1]);    
   
    % Take the patch whose top-left corner is (imageRow, imageCol) from the chosen image
    patch = convImages(imageRow:imageRow + patchDim - 1, imageCol:imageCol + patchDim - 1, :, imageNum);
    patch = patch(:);            % vectorize in column-major order
    patch = patch - meanPatch;
    patch = ZCAWhite * patch;    % whiten the patch with the same preprocessing parameters
    
    features = feedForwardAutoencoder(optTheta, hiddenSize, visibleSize, patch); % autoencoder activations for this patch

    if abs(features(featureNum, 1) - convolvedFeatures(featureNum, imageNum, imageRow, imageCol)) > 1e-9
        fprintf("Convolved feature does not match activation from autoencoder\n");
        fprintf("Feature Number    : %d\n", featureNum);
        fprintf("Image Number      : %d\n", imageNum);
        fprintf("Image Row         : %d\n", imageRow);
        fprintf("Image Column      : %d\n", imageCol);
        fprintf("Convolved feature : %0.5f\n", convolvedFeatures(featureNum, imageNum, imageRow, imageCol));
        fprintf("parse AE feature : %0.5f\n", features(featureNum, 1));       
        error("Convolved feature does not match activation from autoencoder");
    end 
end

disp("Congratulations! Your convolution code passed the test.");
\end{lstlisting}
This uses the previously implemented sparse autoencoder to verify that the convolution is correct: for 1000 random (feature, image, position) triples, the convolved value must match the autoencoder activation on the corresponding whitened patch.

\begin{lstlisting}
%% STEP 2c: Implement pooling
%  Implement pooling in the function cnnPool in cnnPool.m

% NOTE: Implement cnnPool in cnnPool.m first!
pooledFeatures = cnnPool(poolDim, convolvedFeatures);

%% STEP 2d: Checking your pooling
%  To ensure that you have implemented pooling, we will use your pooling
%  function to pool over a test matrix and check the results.

testMatrix = reshape(1:64, 8, 8); % fill an 8x8 matrix with 1..64 in column-major order
% compute the expected mean-pooling result directly
expectedMatrix = [mean(mean(testMatrix(1:4, 1:4))) mean(mean(testMatrix(1:4, 5:8))); ...
                  mean(mean(testMatrix(5:8, 1:4))) mean(mean(testMatrix(5:8, 5:8))); ];
            
testMatrix = reshape(testMatrix, 1, 1, 8, 8);

% squeeze drops the singleton dimensions
pooledFeatures = squeeze(cnnPool(4, testMatrix)); % the first argument 4 means pooling over 4x4 regions

if ~isequal(pooledFeatures, expectedMatrix)
    disp('Pooling incorrect');
    disp('Expected');
    disp(expectedMatrix);
    disp('Got');
    disp(pooledFeatures);
else
    disp('Congratulations! Your pooling code passed the test.');
end
\end{lstlisting}
The pooling implementation is tested directly by building an $8\times 8$ matrix containing 1 to 64 and pooling it over $4\times 4$ regions.
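Because testMatrix is filled in column-major order, entry $(i,j)$ equals $i + 8(j-1)$, so the four block means work out to
\[
\texttt{expectedMatrix} =
\left(\begin{array}{cc}
14.5 & 46.5 \\
18.5 & 50.5
\end{array}\right),
\]
which is exactly what cnnPool has to reproduce.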

\begin{lstlisting}
%%======================================================================
%% STEP 3: Convolve and pool with the dataset
%  In this step, you will convolve each of the features you learned with
%  the full large images to obtain the convolved features. You will then
%  pool the convolved features to obtain the pooled features for
%  classification.
%
%  Because the convolved features matrix is very large, we will do the
%  convolution and pooling 50 features at a time to avoid running out of
%  memory. Reduce this number if necessary

stepSize = 50;
assert(mod(hiddenSize, stepSize) == 0, 'stepSize should divide hiddenSize'); % hiddenSize/stepSize must be an integer; here the work is split into 8 passes

load stlTrainSubset.mat % loads numTrainImages, trainImages, trainLabels
load stlTestSubset.mat  % loads numTestImages,  testImages,  testLabels

pooledFeaturesTrain = zeros(hiddenSize, numTrainImages, ... % imageDim is the large-image size, 64 here
    floor((imageDim - patchDim + 1) / poolDim), ...         % poolDim = 19: each 19x19 region is pooled once
    floor((imageDim - patchDim + 1) / poolDim) );           % pooledFeaturesTrain ends up 400 x 2000 x 3 x 3
pooledFeaturesTest = zeros(hiddenSize, numTestImages, ...
    floor((imageDim - patchDim + 1) / poolDim), ...
    floor((imageDim - patchDim + 1) / poolDim) );           % pooledFeaturesTest ends up 400 x 3200 x 3 x 3

tic();

\end{lstlisting}
This loads the training and test subsets and preallocates the arrays that will hold the pooled features used later for classification.

\begin{lstlisting}
for convPart = 1:(hiddenSize / stepSize) % extract features from the images in batches of stepSize hidden units at a time
    
    featureStart = (convPart - 1) * stepSize + 1; % first feature of this batch
    featureEnd = convPart * stepSize;             % last feature of this batch
    
    fprintf('Step %d: features %d to %d\n', convPart, featureStart, featureEnd);  
    Wt = W(featureStart:featureEnd, :);
    bt = b(featureStart:featureEnd);    
    
    fprintf('Convolving and pooling train images\n');
    convolvedFeaturesThis = cnnConvolve(patchDim, stepSize, ... % the second argument is the number of features ("hidden units") in this batch
        trainImages, Wt, bt, ZCAWhite, meanPatch);
    pooledFeaturesThis = cnnPool(poolDim, convolvedFeaturesThis);
    pooledFeaturesTrain(featureStart:featureEnd, :, :, :) = pooledFeaturesThis;   
    toc();
    clear convolvedFeaturesThis pooledFeaturesThis; % free these large intermediates before processing the test images
    
    fprintf('Convolving and pooling test images\n');
    convolvedFeaturesThis = cnnConvolve(patchDim, stepSize, ...
        testImages, Wt, bt, ZCAWhite, meanPatch);
    pooledFeaturesThis = cnnPool(poolDim, convolvedFeaturesThis);
    pooledFeaturesTest(featureStart:featureEnd, :, :, :) = pooledFeaturesThis;   
    toc();

    clear convolvedFeaturesThis pooledFeaturesThis;

end
% You might want to save the pooled features since convolution and pooling takes a long time
save("cnnPooledFeatures.mat", "pooledFeaturesTrain", "pooledFeaturesTest");
toc();
\end{lstlisting}
Both the training and test images are passed through the convolution and pooling layers, batch by batch, and the resulting pooled features are stored in the corresponding arrays.
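The batching exists purely for memory (a rough estimate, assuming double precision): convolving all 400 features with the 2000 training images at once would need roughly
\[
400\times 2000\times 57\times 57\times 8\ \mathrm{bytes} \approx 2.1\times 10^{10}\ \mathrm{bytes} \approx 20\ \mathrm{GB},
\]
whereas a batch of 50 features needs about an eighth of that, around 2.6\,GB.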

\begin{lstlisting}
%% STEP 4: Use pooled features for classification
%  Now, you will use your pooled features to train a softmax classifier,
%  using softmaxTrain from the softmax exercise.
%  Training the softmax classifer for 1000 iterations should take less than
%  10 minutes.

% Add the path to your softmax solution, if necessary
% addpath /path/to/solution/

% Setup parameters for softmax
softmaxLambda = 1e-4; % weight decay (regularization) coefficient
numClasses = 4;
% Reshape the pooledFeatures to form an input vector for softmax
softmaxX = permute(pooledFeaturesTrain, [1 3 4 2]); % reorder dimensions so that the image index comes last
softmaxX = reshape(softmaxX, numel(pooledFeaturesTrain) / numTrainImages,... % numel(pooledFeaturesTrain) / numTrainImages
                        numTrainImages);                                     % is the length of each image's feature vector
    
softmaxY = trainLabels;

options = struct;
options.maxIter = 200;
softmaxModel = softmaxTrain(numel(pooledFeaturesTrain) / numTrainImages,... % the first argument is inputSize
    numClasses, softmaxLambda, softmaxX, softmaxY, options);
\end{lstlisting}
Softmax is a multi-class classifier; the pooled training-set features obtained above, together with the training labels, are fed into softmaxTrain to fit it.
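The permute/reshape above is pure bookkeeping; a minimal shape check (assuming the subset sizes noted in the comments earlier, i.e.\ numTrainImages = 2000):
\begin{lstlisting}
% Sketch: shape bookkeeping for the softmax input (not part of the exercise script).
X = permute(pooledFeaturesTrain, [1 3 4 2]); % 400 x 3 x 3 x 2000
X = reshape(X, [], numTrainImages);          % 3600 x 2000: one column per image
assert(isequal(size(X), [3600, numTrainImages]));
\end{lstlisting}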
\begin{lstlisting}
%% STEP 5: Test classifer
%  Now you will test your trained classifer against the test images

softmaxX = permute(pooledFeaturesTest, [1 3 4 2]);
softmaxX = reshape(softmaxX, numel(pooledFeaturesTest) / numTestImages, numTestImages);
softmaxY = testLabels;

[pred] = softmaxPredict(softmaxModel, softmaxX);
acc = (pred(:) == softmaxY(:));
acc = sum(acc) / size(acc, 1);
fprintf("Accuracy: %2.3f%%\n", acc * 100);%计算预测准确度

% You should expect to get an accuracy of around 80% on the test images.
\end{lstlisting}
Finally, the trained softmax classifier is applied to the test set, and the predictions are compared against the test labels to compute the accuracy.

Final result (I did not have an adequate environment to run the full training myself; the figure below is the one reported by the code's author):

Accuracy: 80.406\%

This is essentially the same as the roughly 80\% reported in the tutorial.



\section{Summary}
Thanks to local connectivity and weight sharing, convolution and pooling drastically reduce the number of parameters in the network, which matters most when processing large images. They also give the features a degree of translation invariance: when the input shifts, the convolved features shift with it, and taking the maximum or mean within each pooling region makes the pooled features insensitive to small shifts.



\end{CJK}
\end{document}