The LeNet Model

LeNet is Caffe's introductory network model: a classic convolutional neural network (CNN) for recognizing handwritten digits, designed and proposed by Yann LeCun in 1998.

1. Download and extract the data

$ cd ~/caffe/data/mnist
$ ./get_mnist.sh

This downloads:
Training images: train-images-idx3-ubyte.gz
Training labels: train-labels-idx1-ubyte.gz
Test images: t10k-images-idx3-ubyte.gz
Test labels: t10k-labels-idx1-ubyte.gz
and extracts them.
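The extracted files are in the simple IDX binary format: a 4-byte big-endian magic number whose third byte encodes the element type (0x08 = unsigned byte) and whose fourth byte gives the number of dimensions, followed by one big-endian 32-bit size per dimension, then the raw data. A minimal header parser sketch (the helper name `parse_idx_header` is my own, not part of Caffe):

```python
import struct

def parse_idx_header(data: bytes):
    """Parse an IDX file header: returns (element type code, dimension sizes)."""
    dtype_code = data[2]           # 0x08 means unsigned byte for MNIST
    ndim = data[3]                 # 3 for image files, 1 for label files
    dims = struct.unpack(">" + "I" * ndim, data[4:4 + 4 * ndim])
    return dtype_code, dims

# train-images-idx3-ubyte starts with magic 0x00000803,
# then 60000 images of 28x28 pixels:
header = struct.pack(">IIII", 0x00000803, 60000, 28, 28)
print(parse_idx_header(header))  # (8, (60000, 28, 28))
```

Reading the pixel data is then just a matter of consuming `60000 * 28 * 28` bytes after the 16-byte header.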

The stock get_mnist.sh has a small problem; modify it as follows:

#!/usr/bin/env sh
# This script downloads the MNIST data and unzips it.

DIR="$( cd "$(dirname "$0")" ; pwd -P )"
cd "$DIR"

echo "Downloading..."
wget --no-check-certificate http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
wget --no-check-certificate http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz
wget --no-check-certificate http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz
wget --no-check-certificate http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz

echo "Unzipping..."
gunzip train-images-idx3-ubyte.gz
gunzip train-labels-idx1-ubyte.gz
gunzip t10k-images-idx3-ubyte.gz
gunzip t10k-labels-idx1-ubyte.gz

echo "Done.."

2. Generate the LMDB files

LMDB is an extremely fast, lightweight key-value store. Because it uses memory-mapped files, its read performance rivals that of a pure in-memory database.

$ cd ~/caffe
$ ./examples/mnist/create_mnist.sh

create_mnist.sh uses the caffe/build/examples/mnist/convert_mnist_data.bin tool to convert the MNIST data into the LMDB format Caffe expects, then places mnist_train_lmdb and mnist_test_lmdb under caffe/examples/mnist.

3. Network configuration

The LeNet network is defined in caffe/examples/mnist/lenet_train_test.prototxt.
It normally needs no modification.
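The feature-map sizes that Caffe prints during network setup follow directly from the layer parameters in that file: a 5x5 stride-1 convolution shrinks each spatial side by 4, and 2x2 stride-2 max pooling halves it. A quick check of the LeNet shapes (using the standard output-size formula; pooling with these sizes gives the same result whether rounded up or down):

```python
def out_size(n, kernel, stride=1, pad=0):
    """Spatial output size of a convolution/pooling layer."""
    return (n + 2 * pad - kernel) // stride + 1

n = 28                    # MNIST input: 1 x 28 x 28
n = out_size(n, 5)        # conv1: 20 maps of 24 x 24
n = out_size(n, 2, 2)     # pool1: 20 maps of 12 x 12
n = out_size(n, 5)        # conv2: 50 maps of  8 x  8
n = out_size(n, 2, 2)     # pool2: 50 maps of  4 x  4
print(n)                  # 4 -> ip1 sees 50 * 4 * 4 = 800 inputs
```

These match the "Top shape" lines in the training log below (64 20 24 24, 64 20 12 12, 64 50 8 8, 64 50 4 4).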

4. Train the network

$ ./examples/mnist/train_lenet.sh

This script simply runs the caffe train tool with the solver settings in ./examples/mnist/lenet_solver.prototxt.
Training takes quite a while; the full log follows.
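The `lr = ...` lines in the log follow the solver's "inv" learning-rate policy, lr = base_lr * (1 + gamma * iter)^(-power), with base_lr=0.01, gamma=0.0001, power=0.75 from lenet_solver.prototxt. A sketch reproducing the printed values:

```python
def inv_lr(base_lr, gamma, power, iteration):
    """Caffe's 'inv' learning-rate policy."""
    return base_lr * (1.0 + gamma * iteration) ** (-power)

# Matches the sgd_solver.cpp lines in the log:
print(round(inv_lr(0.01, 0.0001, 0.75, 0), 8))    # 0.01
print(round(inv_lr(0.01, 0.0001, 0.75, 100), 8))  # 0.00992565
print(round(inv_lr(0.01, 0.0001, 0.75, 500), 8))  # 0.00964069
```

So the learning rate decays smoothly from 0.01 over the 10000 iterations, which is why the later log lines show steadily shrinking lr values.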


fc@fc-pc:~/caffe$ ./examples/mnist/train_lenet.sh 
I0112 13:56:57.294195  4605 caffe.cpp:218] Using GPUs 0
I0112 13:56:57.440893  4605 caffe.cpp:223] GPU 0: GeForce GT 635M
I0112 13:56:58.157379  4605 solver.cpp:44] Initializing solver from parameters: 
test_iter: 100
test_interval: 500
base_lr: 0.01
display: 100
max_iter: 10000
lr_policy: "inv"
gamma: 0.0001
power: 0.75
momentum: 0.9
weight_decay: 0.0005
snapshot: 5000
snapshot_prefix: "examples/mnist/lenet"
solver_mode: GPU
device_id: 0
net: "examples/mnist/lenet_train_test.prototxt"
train_state {
  level: 0
  stage: ""
}
I0112 13:56:58.157559  4605 solver.cpp:87] Creating training net from net file: examples/mnist/lenet_train_test.prototxt
I0112 13:56:58.164427  4605 net.cpp:294] The NetState phase (0) differed from the phase (1) specified by a rule in layer mnist
I0112 13:56:58.164482  4605 net.cpp:294] The NetState phase (0) differed from the phase (1) specified by a rule in layer accuracy
I0112 13:56:58.164703  4605 net.cpp:51] Initializing net from parameters: 
name: "LeNet"
state {
  phase: TRAIN
  level: 0
  stage: ""
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
I0112 13:56:58.164865  4605 layer_factory.hpp:77] Creating layer mnist
I0112 13:56:58.165019  4605 db_lmdb.cpp:35] Opened lmdb examples/mnist/mnist_train_lmdb
I0112 13:56:58.165051  4605 net.cpp:84] Creating Layer mnist
I0112 13:56:58.165066  4605 net.cpp:380] mnist -> data
I0112 13:56:58.165098  4605 net.cpp:380] mnist -> label
I0112 13:56:58.165763  4605 data_layer.cpp:45] output data size: 64,1,28,28
I0112 13:56:58.166887  4605 net.cpp:122] Setting up mnist
I0112 13:56:58.166927  4605 net.cpp:129] Top shape: 64 1 28 28 (50176)
I0112 13:56:58.166939  4605 net.cpp:129] Top shape: 64 (64)
I0112 13:56:58.166949  4605 net.cpp:137] Memory required for data: 200960
I0112 13:56:58.166966  4605 layer_factory.hpp:77] Creating layer conv1
I0112 13:56:58.167002  4605 net.cpp:84] Creating Layer conv1
I0112 13:56:58.167016  4605 net.cpp:406] conv1 <- data
I0112 13:56:58.167037  4605 net.cpp:380] conv1 -> conv1
I0112 13:56:58.168071  4605 net.cpp:122] Setting up conv1
I0112 13:56:58.168107  4605 net.cpp:129] Top shape: 64 20 24 24 (737280)
I0112 13:56:58.168115  4605 net.cpp:137] Memory required for data: 3150080
I0112 13:56:58.168153  4605 layer_factory.hpp:77] Creating layer pool1
I0112 13:56:58.168172  4605 net.cpp:84] Creating Layer pool1
I0112 13:56:58.168211  4605 net.cpp:406] pool1 <- conv1
I0112 13:56:58.168227  4605 net.cpp:380] pool1 -> pool1
I0112 13:56:58.168300  4605 net.cpp:122] Setting up pool1
I0112 13:56:58.168318  4605 net.cpp:129] Top shape: 64 20 12 12 (184320)
I0112 13:56:58.168326  4605 net.cpp:137] Memory required for data: 3887360
I0112 13:56:58.168340  4605 layer_factory.hpp:77] Creating layer conv2
I0112 13:56:58.168362  4605 net.cpp:84] Creating Layer conv2
I0112 13:56:58.168375  4605 net.cpp:406] conv2 <- pool1
I0112 13:56:58.168390  4605 net.cpp:380] conv2 -> conv2
I0112 13:56:58.169091  4605 net.cpp:122] Setting up conv2
I0112 13:56:58.169117  4605 net.cpp:129] Top shape: 64 50 8 8 (204800)
I0112 13:56:58.169127  4605 net.cpp:137] Memory required for data: 4706560
I0112 13:56:58.169147  4605 layer_factory.hpp:77] Creating layer pool2
I0112 13:56:58.169167  4605 net.cpp:84] Creating Layer pool2
I0112 13:56:58.169178  4605 net.cpp:406] pool2 <- conv2
I0112 13:56:58.169191  4605 net.cpp:380] pool2 -> pool2
I0112 13:56:58.169292  4605 net.cpp:122] Setting up pool2
I0112 13:56:58.169311  4605 net.cpp:129] Top shape: 64 50 4 4 (51200)
I0112 13:56:58.169318  4605 net.cpp:137] Memory required for data: 4911360
I0112 13:56:58.169327  4605 layer_factory.hpp:77] Creating layer ip1
I0112 13:56:58.169340  4605 net.cpp:84] Creating Layer ip1
I0112 13:56:58.169349  4605 net.cpp:406] ip1 <- pool2
I0112 13:56:58.169364  4605 net.cpp:380] ip1 -> ip1
I0112 13:56:58.174051  4605 net.cpp:122] Setting up ip1
I0112 13:56:58.174082  4605 net.cpp:129] Top shape: 64 500 (32000)
I0112 13:56:58.174096  4605 net.cpp:137] Memory required for data: 5039360
I0112 13:56:58.174114  4605 layer_factory.hpp:77] Creating layer relu1
I0112 13:56:58.174127  4605 net.cpp:84] Creating Layer relu1
I0112 13:56:58.174134  4605 net.cpp:406] relu1 <- ip1
I0112 13:56:58.174146  4605 net.cpp:367] relu1 -> ip1 (in-place)
I0112 13:56:58.174160  4605 net.cpp:122] Setting up relu1
I0112 13:56:58.174166  4605 net.cpp:129] Top shape: 64 500 (32000)
I0112 13:56:58.174171  4605 net.cpp:137] Memory required for data: 5167360
I0112 13:56:58.174177  4605 layer_factory.hpp:77] Creating layer ip2
I0112 13:56:58.174187  4605 net.cpp:84] Creating Layer ip2
I0112 13:56:58.174192  4605 net.cpp:406] ip2 <- ip1
I0112 13:56:58.174202  4605 net.cpp:380] ip2 -> ip2
I0112 13:56:58.174839  4605 net.cpp:122] Setting up ip2
I0112 13:56:58.174861  4605 net.cpp:129] Top shape: 64 10 (640)
I0112 13:56:58.174867  4605 net.cpp:137] Memory required for data: 5169920
I0112 13:56:58.174880  4605 layer_factory.hpp:77] Creating layer loss
I0112 13:56:58.174892  4605 net.cpp:84] Creating Layer loss
I0112 13:56:58.174898  4605 net.cpp:406] loss <- ip2
I0112 13:56:58.174906  4605 net.cpp:406] loss <- label
I0112 13:56:58.174917  4605 net.cpp:380] loss -> loss
I0112 13:56:58.174937  4605 layer_factory.hpp:77] Creating layer loss
I0112 13:56:58.175021  4605 net.cpp:122] Setting up loss
I0112 13:56:58.175031  4605 net.cpp:129] Top shape: (1)
I0112 13:56:58.175037  4605 net.cpp:132]     with loss weight 1
I0112 13:56:58.175063  4605 net.cpp:137] Memory required for data: 5169924
I0112 13:56:58.175070  4605 net.cpp:198] loss needs backward computation.
I0112 13:56:58.175079  4605 net.cpp:198] ip2 needs backward computation.
I0112 13:56:58.175086  4605 net.cpp:198] relu1 needs backward computation.
I0112 13:56:58.175092  4605 net.cpp:198] ip1 needs backward computation.
I0112 13:56:58.175098  4605 net.cpp:198] pool2 needs backward computation.
I0112 13:56:58.175104  4605 net.cpp:198] conv2 needs backward computation.
I0112 13:56:58.175110  4605 net.cpp:198] pool1 needs backward computation.
I0112 13:56:58.175117  4605 net.cpp:198] conv1 needs backward computation.
I0112 13:56:58.175123  4605 net.cpp:200] mnist does not need backward computation.
I0112 13:56:58.175128  4605 net.cpp:242] This network produces output loss
I0112 13:56:58.175140  4605 net.cpp:255] Network initialization done.
I0112 13:56:58.175314  4605 solver.cpp:172] Creating test net (#0) specified by net file: examples/mnist/lenet_train_test.prototxt
I0112 13:56:58.175365  4605 net.cpp:294] The NetState phase (1) differed from the phase (0) specified by a rule in layer mnist
I0112 13:56:58.175464  4605 net.cpp:51] Initializing net from parameters: 
name: "LeNet"
state {
  phase: TEST
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
I0112 13:56:58.175591  4605 layer_factory.hpp:77] Creating layer mnist
I0112 13:56:58.175654  4605 db_lmdb.cpp:35] Opened lmdb examples/mnist/mnist_test_lmdb
I0112 13:56:58.175673  4605 net.cpp:84] Creating Layer mnist
I0112 13:56:58.175683  4605 net.cpp:380] mnist -> data
I0112 13:56:58.175694  4605 net.cpp:380] mnist -> label
I0112 13:56:58.175781  4605 data_layer.cpp:45] output data size: 100,1,28,28
I0112 13:56:58.177809  4605 net.cpp:122] Setting up mnist
I0112 13:56:58.177858  4605 net.cpp:129] Top shape: 100 1 28 28 (78400)
I0112 13:56:58.177866  4605 net.cpp:129] Top shape: 100 (100)
I0112 13:56:58.177871  4605 net.cpp:137] Memory required for data: 314000
I0112 13:56:58.177881  4605 layer_factory.hpp:77] Creating layer label_mnist_1_split
I0112 13:56:58.177896  4605 net.cpp:84] Creating Layer label_mnist_1_split
I0112 13:56:58.177904  4605 net.cpp:406] label_mnist_1_split <- label
I0112 13:56:58.177917  4605 net.cpp:380] label_mnist_1_split -> label_mnist_1_split_0
I0112 13:56:58.177932  4605 net.cpp:380] label_mnist_1_split -> label_mnist_1_split_1
I0112 13:56:58.178033  4605 net.cpp:122] Setting up label_mnist_1_split
I0112 13:56:58.178045  4605 net.cpp:129] Top shape: 100 (100)
I0112 13:56:58.178051  4605 net.cpp:129] Top shape: 100 (100)
I0112 13:56:58.178057  4605 net.cpp:137] Memory required for data: 314800
I0112 13:56:58.178062  4605 layer_factory.hpp:77] Creating layer conv1
I0112 13:56:58.178081  4605 net.cpp:84] Creating Layer conv1
I0112 13:56:58.178087  4605 net.cpp:406] conv1 <- data
I0112 13:56:58.178097  4605 net.cpp:380] conv1 -> conv1
I0112 13:56:58.178328  4605 net.cpp:122] Setting up conv1
I0112 13:56:58.178339  4605 net.cpp:129] Top shape: 100 20 24 24 (1152000)
I0112 13:56:58.178345  4605 net.cpp:137] Memory required for data: 4922800
I0112 13:56:58.178359  4605 layer_factory.hpp:77] Creating layer pool1
I0112 13:56:58.178395  4605 net.cpp:84] Creating Layer pool1
I0112 13:56:58.178401  4605 net.cpp:406] pool1 <- conv1
I0112 13:56:58.178411  4605 net.cpp:380] pool1 -> pool1
I0112 13:56:58.178453  4605 net.cpp:122] Setting up pool1
I0112 13:56:58.178463  4605 net.cpp:129] Top shape: 100 20 12 12 (288000)
I0112 13:56:58.178469  4605 net.cpp:137] Memory required for data: 6074800
I0112 13:56:58.178474  4605 layer_factory.hpp:77] Creating layer conv2
I0112 13:56:58.178488  4605 net.cpp:84] Creating Layer conv2
I0112 13:56:58.178495  4605 net.cpp:406] conv2 <- pool1
I0112 13:56:58.178505  4605 net.cpp:380] conv2 -> conv2
I0112 13:56:58.178928  4605 net.cpp:122] Setting up conv2
I0112 13:56:58.178944  4605 net.cpp:129] Top shape: 100 50 8 8 (320000)
I0112 13:56:58.178951  4605 net.cpp:137] Memory required for data: 7354800
I0112 13:56:58.178967  4605 layer_factory.hpp:77] Creating layer pool2
I0112 13:56:58.178977  4605 net.cpp:84] Creating Layer pool2
I0112 13:56:58.178983  4605 net.cpp:406] pool2 <- conv2
I0112 13:56:58.178998  4605 net.cpp:380] pool2 -> pool2
I0112 13:56:58.179042  4605 net.cpp:122] Setting up pool2
I0112 13:56:58.179051  4605 net.cpp:129] Top shape: 100 50 4 4 (80000)
I0112 13:56:58.179060  4605 net.cpp:137] Memory required for data: 7674800
I0112 13:56:58.179067  4605 layer_factory.hpp:77] Creating layer ip1
I0112 13:56:58.179078  4605 net.cpp:84] Creating Layer ip1
I0112 13:56:58.179085  4605 net.cpp:406] ip1 <- pool2
I0112 13:56:58.179095  4605 net.cpp:380] ip1 -> ip1
I0112 13:56:58.182580  4605 net.cpp:122] Setting up ip1
I0112 13:56:58.182610  4605 net.cpp:129] Top shape: 100 500 (50000)
I0112 13:56:58.182616  4605 net.cpp:137] Memory required for data: 7874800
I0112 13:56:58.182636  4605 layer_factory.hpp:77] Creating layer relu1
I0112 13:56:58.182648  4605 net.cpp:84] Creating Layer relu1
I0112 13:56:58.182656  4605 net.cpp:406] relu1 <- ip1
I0112 13:56:58.182665  4605 net.cpp:367] relu1 -> ip1 (in-place)
I0112 13:56:58.182677  4605 net.cpp:122] Setting up relu1
I0112 13:56:58.182685  4605 net.cpp:129] Top shape: 100 500 (50000)
I0112 13:56:58.182690  4605 net.cpp:137] Memory required for data: 8074800
I0112 13:56:58.182695  4605 layer_factory.hpp:77] Creating layer ip2
I0112 13:56:58.182708  4605 net.cpp:84] Creating Layer ip2
I0112 13:56:58.182714  4605 net.cpp:406] ip2 <- ip1
I0112 13:56:58.182724  4605 net.cpp:380] ip2 -> ip2
I0112 13:56:58.182857  4605 net.cpp:122] Setting up ip2
I0112 13:56:58.182868  4605 net.cpp:129] Top shape: 100 10 (1000)
I0112 13:56:58.182873  4605 net.cpp:137] Memory required for data: 8078800
I0112 13:56:58.182883  4605 layer_factory.hpp:77] Creating layer ip2_ip2_0_split
I0112 13:56:58.182890  4605 net.cpp:84] Creating Layer ip2_ip2_0_split
I0112 13:56:58.182896  4605 net.cpp:406] ip2_ip2_0_split <- ip2
I0112 13:56:58.182905  4605 net.cpp:380] ip2_ip2_0_split -> ip2_ip2_0_split_0
I0112 13:56:58.182915  4605 net.cpp:380] ip2_ip2_0_split -> ip2_ip2_0_split_1
I0112 13:56:58.182950  4605 net.cpp:122] Setting up ip2_ip2_0_split
I0112 13:56:58.182961  4605 net.cpp:129] Top shape: 100 10 (1000)
I0112 13:56:58.182968  4605 net.cpp:129] Top shape: 100 10 (1000)
I0112 13:56:58.182973  4605 net.cpp:137] Memory required for data: 8086800
I0112 13:56:58.182979  4605 layer_factory.hpp:77] Creating layer accuracy
I0112 13:56:58.182988  4605 net.cpp:84] Creating Layer accuracy
I0112 13:56:58.182994  4605 net.cpp:406] accuracy <- ip2_ip2_0_split_0
I0112 13:56:58.183001  4605 net.cpp:406] accuracy <- label_mnist_1_split_0
I0112 13:56:58.183009  4605 net.cpp:380] accuracy -> accuracy
I0112 13:56:58.183022  4605 net.cpp:122] Setting up accuracy
I0112 13:56:58.183029  4605 net.cpp:129] Top shape: (1)
I0112 13:56:58.183035  4605 net.cpp:137] Memory required for data: 8086804
I0112 13:56:58.183040  4605 layer_factory.hpp:77] Creating layer loss
I0112 13:56:58.183048  4605 net.cpp:84] Creating Layer loss
I0112 13:56:58.183054  4605 net.cpp:406] loss <- ip2_ip2_0_split_1
I0112 13:56:58.183061  4605 net.cpp:406] loss <- label_mnist_1_split_1
I0112 13:56:58.183095  4605 net.cpp:380] loss -> loss
I0112 13:56:58.183106  4605 layer_factory.hpp:77] Creating layer loss
I0112 13:56:58.183189  4605 net.cpp:122] Setting up loss
I0112 13:56:58.183199  4605 net.cpp:129] Top shape: (1)
I0112 13:56:58.183204  4605 net.cpp:132]     with loss weight 1
I0112 13:56:58.183219  4605 net.cpp:137] Memory required for data: 8086808
I0112 13:56:58.183226  4605 net.cpp:198] loss needs backward computation.
I0112 13:56:58.183233  4605 net.cpp:200] accuracy does not need backward computation.
I0112 13:56:58.183239  4605 net.cpp:198] ip2_ip2_0_split needs backward computation.
I0112 13:56:58.183245  4605 net.cpp:198] ip2 needs backward computation.
I0112 13:56:58.183251  4605 net.cpp:198] relu1 needs backward computation.
I0112 13:56:58.183257  4605 net.cpp:198] ip1 needs backward computation.
I0112 13:56:58.183262  4605 net.cpp:198] pool2 needs backward computation.
I0112 13:56:58.183269  4605 net.cpp:198] conv2 needs backward computation.
I0112 13:56:58.183274  4605 net.cpp:198] pool1 needs backward computation.
I0112 13:56:58.183281  4605 net.cpp:198] conv1 needs backward computation.
I0112 13:56:58.183290  4605 net.cpp:200] label_mnist_1_split does not need backward computation.
I0112 13:56:58.183296  4605 net.cpp:200] mnist does not need backward computation.
I0112 13:56:58.183301  4605 net.cpp:242] This network produces output accuracy
I0112 13:56:58.183307  4605 net.cpp:242] This network produces output loss
I0112 13:56:58.183326  4605 net.cpp:255] Network initialization done.
I0112 13:56:58.183372  4605 solver.cpp:56] Solver scaffolding done.
I0112 13:56:58.183619  4605 caffe.cpp:248] Starting Optimization
I0112 13:56:58.183629  4605 solver.cpp:272] Solving LeNet
I0112 13:56:58.183634  4605 solver.cpp:273] Learning Rate Policy: inv
I0112 13:56:58.184096  4605 solver.cpp:330] Iteration 0, Testing net (#0)
I0112 13:57:00.461726  4617 data_layer.cpp:73] Restarting data prefetching from start.
I0112 13:57:00.555826  4605 solver.cpp:397]     Test net output #0: accuracy = 0.1078
I0112 13:57:00.555896  4605 solver.cpp:397]     Test net output #1: loss = 2.30511 (* 1 = 2.30511 loss)
I0112 13:57:00.607538  4605 solver.cpp:218] Iteration 0 (-4.39738e+28 iter/s, 2.42383s/100 iters), loss = 2.28738
I0112 13:57:00.607615  4605 solver.cpp:237]     Train net output #0: loss = 2.28738 (* 1 = 2.28738 loss)
I0112 13:57:00.607636  4605 sgd_solver.cpp:105] Iteration 0, lr = 0.01
I0112 13:57:05.810878  4605 solver.cpp:218] Iteration 100 (19.219 iter/s, 5.2032s/100 iters), loss = 0.243285
I0112 13:57:05.810941  4605 solver.cpp:237]     Train net output #0: loss = 0.243285 (* 1 = 0.243285 loss)
I0112 13:57:05.810958  4605 sgd_solver.cpp:105] Iteration 100, lr = 0.00992565
I0112 13:57:11.014245  4605 solver.cpp:218] Iteration 200 (19.2188 iter/s, 5.20325s/100 iters), loss = 0.141302
I0112 13:57:11.014315  4605 solver.cpp:237]     Train net output #0: loss = 0.141302 (* 1 = 0.141302 loss)
I0112 13:57:11.014328  4605 sgd_solver.cpp:105] Iteration 200, lr = 0.00985258
I0112 13:57:16.216323  4605 solver.cpp:218] Iteration 300 (19.2236 iter/s, 5.20195s/100 iters), loss = 0.177265
I0112 13:57:16.216384  4605 solver.cpp:237]     Train net output #0: loss = 0.177265 (* 1 = 0.177265 loss)
I0112 13:57:16.216398  4605 sgd_solver.cpp:105] Iteration 300, lr = 0.00978075
I0112 13:57:21.422519  4605 solver.cpp:218] Iteration 400 (19.2083 iter/s, 5.20607s/100 iters), loss = 0.0997172
I0112 13:57:21.422600  4605 solver.cpp:237]     Train net output #0: loss = 0.0997172 (* 1 = 0.0997172 loss)
I0112 13:57:21.422614  4605 sgd_solver.cpp:105] Iteration 400, lr = 0.00971013
I0112 13:57:26.540989  4605 solver.cpp:330] Iteration 500, Testing net (#0)
I0112 13:57:28.847373  4617 data_layer.cpp:73] Restarting data prefetching from start.
I0112 13:57:28.941399  4605 solver.cpp:397]     Test net output #0: accuracy = 0.9714
I0112 13:57:28.941442  4605 solver.cpp:397]     Test net output #1: loss = 0.0865218 (* 1 = 0.0865218 loss)
I0112 13:57:28.991828  4605 solver.cpp:218] Iteration 500 (13.2115 iter/s, 7.56914s/100 iters), loss = 0.112185
I0112 13:57:28.991883  4605 solver.cpp:237]     Train net output #0: loss = 0.112185 (* 1 = 0.112185 loss)
I0112 13:57:28.991895  4605 sgd_solver.cpp:105] Iteration 500, lr = 0.00964069
I0112 13:57:34.193409  4605 solver.cpp:218] Iteration 600 (19.2253 iter/s, 5.20147s/100 iters), loss = 0.12664
I0112 13:57:34.193473  4605 solver.cpp:237]     Train net output #0: loss = 0.12664 (* 1 = 0.12664 loss)
I0112 13:57:34.193487  4605 sgd_solver.cpp:105] Iteration 600, lr = 0.0095724
I0112 13:57:39.398228  4605 solver.cpp:218] Iteration 700 (19.2134 iter/s, 5.2047s/100 iters), loss = 0.0915156
I0112 13:57:39.398288  4605 solver.cpp:237]     Train net output #0: loss = 0.0915156 (* 1 = 0.0915156 loss)
I0112 13:57:39.398303  4605 sgd_solver.cpp:105] Iteration 700, lr = 0.00950522
I0112 13:57:44.600630  4605 solver.cpp:218] Iteration 800 (19.2223 iter/s, 5.20229s/100 iters), loss = 0.177842
I0112 13:57:44.600690  4605 solver.cpp:237]     Train net output #0: loss = 0.177842 (* 1 = 0.177842 loss)
I0112 13:57:44.600700  4605 sgd_solver.cpp:105] Iteration 800, lr = 0.00943913
I0112 13:57:49.804349  4605 solver.cpp:218] Iteration 900 (19.2175 iter/s, 5.2036s/100 iters), loss = 0.163748
I0112 13:57:49.804414  4605 solver.cpp:237]     Train net output #0: loss = 0.163748 (* 1 = 0.163748 loss)
I0112 13:57:49.804431  4605 sgd_solver.cpp:105] Iteration 900, lr = 0.00937411
I0112 13:57:51.522743  4616 data_layer.cpp:73] Restarting data prefetching from start.
I0112 13:57:54.921766  4605 solver.cpp:330] Iteration 1000, Testing net (#0)
I0112 13:57:57.231189  4617 data_layer.cpp:73] Restarting data prefetching from start.
I0112 13:57:57.325402  4605 solver.cpp:397]     Test net output #0: accuracy = 0.9824
I0112 13:57:57.325449  4605 solver.cpp:397]     Test net output #1: loss = 0.0566883 (* 1 = 0.0566883 loss)
I0112 13:57:57.375991  4605 solver.cpp:218] Iteration 1000 (13.2075 iter/s, 7.57145s/100 iters), loss = 0.0991981
I0112 13:57:57.376060  4605 solver.cpp:237]     Train net output #0: loss = 0.099198 (* 1 = 0.099198 loss)
I0112 13:57:57.376073  4605 sgd_solver.cpp:105] Iteration 1000, lr = 0.00931012
I0112 13:58:02.578084  4605 solver.cpp:218] Iteration 1100 (19.2235 iter/s, 5.20196s/100 iters), loss = 0.0074663
I0112 13:58:02.578302  4605 solver.cpp:237]     Train net output #0: loss = 0.00746628 (* 1 = 0.00746628 loss)
I0112 13:58:02.578320  4605 sgd_solver.cpp:105] Iteration 1100, lr = 0.00924715
I0112 13:58:07.777390  4605 solver.cpp:218] Iteration 1200 (19.2344 iter/s, 5.19903s/100 iters), loss = 0.0371387
I0112 13:58:07.777452  4605 solver.cpp:237]     Train net output #0: loss = 0.0371387 (* 1 = 0.0371387 loss)
I0112 13:58:07.777467  4605 sgd_solver.cpp:105] Iteration 1200, lr = 0.00918515
I0112 13:58:12.977054  4605 solver.cpp:218] Iteration 1300 (19.2325 iter/s, 5.19954s/100 iters), loss = 0.0171492
I0112 13:58:12.977105  4605 solver.cpp:237]     Train net output #0: loss = 0.0171492 (* 1 = 0.0171492 loss)
I0112 13:58:12.977118  4605 sgd_solver.cpp:105] Iteration 1300, lr = 0.00912412
I0112 13:58:18.175503  4605 solver.cpp:218] Iteration 1400 (19.237 iter/s, 5.19833s/100 iters), loss = 0.00672234
I0112 13:58:18.175563  4605 solver.cpp:237]     Train net output #0: loss = 0.0067223 (* 1 = 0.0067223 loss)
I0112 13:58:18.175576  4605 sgd_solver.cpp:105] Iteration 1400, lr = 0.00906403
I0112 13:58:23.290197  4605 solver.cpp:330] Iteration 1500, Testing net (#0)
I0112 13:58:25.597092  4617 data_layer.cpp:73] Restarting data prefetching from start.
I0112 13:58:25.691124  4605 solver.cpp:397]     Test net output #0: accuracy = 0.984
I0112 13:58:25.691167  4605 solver.cpp:397]     Test net output #1: loss = 0.0488746 (* 1 = 0.0488746 loss)
I0112 13:58:25.741626  4605 solver.cpp:218] Iteration 1500 (13.2171 iter/s, 7.56596s/100 iters), loss = 0.101924
I0112 13:58:25.741715  4605 solver.cpp:237]     Train net output #0: loss = 0.101924 (* 1 = 0.101924 loss)
I0112 13:58:25.741730  4605 sgd_solver.cpp:105] Iteration 1500, lr = 0.00900485
I0112 13:58:30.940945  4605 solver.cpp:218] Iteration 1600 (19.2339 iter/s, 5.19916s/100 iters), loss = 0.119907
I0112 13:58:30.940995  4605 solver.cpp:237]     Train net output #0: loss = 0.119907 (* 1 = 0.119907 loss)
I0112 13:58:30.941007  4605 sgd_solver.cpp:105] Iteration 1600, lr = 0.00894657
I0112 13:58:36.140640  4605 solver.cpp:218] Iteration 1700 (19.2323 iter/s, 5.19957s/100 iters), loss = 0.0359353
I0112 13:58:36.140813  4605 solver.cpp:237]     Train net output #0: loss = 0.0359352 (* 1 = 0.0359352 loss)
I0112 13:58:36.140833  4605 sgd_solver.cpp:105] Iteration 1700, lr = 0.00888916
I0112 13:58:41.339848  4605 solver.cpp:218] Iteration 1800 (19.2346 iter/s, 5.19897s/100 iters), loss = 0.016428
I0112 13:58:41.339906  4605 solver.cpp:237]     Train net output #0: loss = 0.0164279 (* 1 = 0.0164279 loss)
I0112 13:58:41.339920  4605 sgd_solver.cpp:105] Iteration 1800, lr = 0.0088326
I0112 13:58:44.980896  4616 data_layer.cpp:73] Restarting data prefetching from start.
I0112 13:58:46.538303  4605 solver.cpp:218] Iteration 1900 (19.2369 iter/s, 5.19834s/100 iters), loss = 0.127803
I0112 13:58:46.538360  4605 solver.cpp:237]     Train net output #0: loss = 0.127803 (* 1 = 0.127803 loss)
I0112 13:58:46.538373  4605 sgd_solver.cpp:105] Iteration 1900, lr = 0.00877687
I0112 13:58:51.650697  4605 solver.cpp:330] Iteration 2000, Testing net (#0)
I0112 13:58:53.958813  4617 data_layer.cpp:73] Restarting data prefetching from start.
I0112 13:58:54.052958  4605 solver.cpp:397]     Test net output #0: accuracy = 0.9861
I0112 13:58:54.052999  4605 solver.cpp:397]     Test net output #1: loss = 0.0423396 (* 1 = 0.0423396 loss)
I0112 13:58:54.103466  4605 solver.cpp:218] Iteration 2000 (13.2188 iter/s, 7.56501s/100 iters), loss = 0.0270834
I0112 13:58:54.103543  4605 solver.cpp:237]     Train net output #0: loss = 0.0270833 (* 1 = 0.0270833 loss)
I0112 13:58:54.103557  4605 sgd_solver.cpp:105] Iteration 2000, lr = 0.00872196
I0112 13:58:59.303009  4605 solver.cpp:218] Iteration 2100 (19.233 iter/s, 5.1994s/100 iters), loss = 0.013996
I0112 13:58:59.303068  4605 solver.cpp:237]     Train net output #0: loss = 0.0139959 (* 1 = 0.0139959 loss)
I0112 13:58:59.303081  4605 sgd_solver.cpp:105] Iteration 2100, lr = 0.00866784
I0112 13:59:04.504382  4605 solver.cpp:218] Iteration 2200 (19.2261 iter/s, 5.20126s/100 iters), loss = 0.0149617
I0112 13:59:04.504441  4605 solver.cpp:237]     Train net output #0: loss = 0.0149617 (* 1 = 0.0149617 loss)
I0112 13:59:04.504456  4605 sgd_solver.cpp:105] Iteration 2200, lr = 0.0086145
I0112 13:59:09.704370  4605 solver.cpp:218] Iteration 2300 (19.2312 iter/s, 5.19987s/100 iters), loss = 0.107121
I0112 13:59:09.704478  4605 solver.cpp:237]     Train net output #0: loss = 0.107121 (* 1 = 0.107121 loss)
I0112 13:59:09.704491  4605 sgd_solver.cpp:105] Iteration 2300, lr = 0.00856192
I0112 13:59:14.906342  4605 solver.cpp:218] Iteration 2400 (19.2241 iter/s, 5.20181s/100 iters), loss = 0.0108024
I0112 13:59:14.906400  4605 solver.cpp:237]     Train net output #0: loss = 0.0108024 (* 1 = 0.0108024 loss)
I0112 13:59:14.906414  4605 sgd_solver.cpp:105] Iteration 2400, lr = 0.00851008
I0112 13:59:20.023326  4605 solver.cpp:330] Iteration 2500, Testing net (#0)
I0112 13:59:22.333452  4617 data_layer.cpp:73] Restarting data prefetching from start.
I0112 13:59:22.427494  4605 solver.cpp:397]     Test net output #0: accuracy = 0.9836
I0112 13:59:22.427541  4605 solver.cpp:397]     Test net output #1: loss = 0.050619 (* 1 = 0.050619 loss)
I0112 13:59:22.478123  4605 solver.cpp:218] Iteration 2500 (13.2072 iter/s, 7.57164s/100 iters), loss = 0.0319451
I0112 13:59:22.478199  4605 solver.cpp:237]     Train net output #0: loss = 0.0319451 (* 1 = 0.0319451 loss)
I0112 13:59:22.478215  4605 sgd_solver.cpp:105] Iteration 2500, lr = 0.00845897
I0112 13:59:27.693603  4605 solver.cpp:218] Iteration 2600 (19.1742 iter/s, 5.21535s/100 iters), loss = 0.0723398
I0112 13:59:27.693655  4605 solver.cpp:237]     Train net output #0: loss = 0.0723398 (* 1 = 0.0723398 loss)
I0112 13:59:27.693667  4605 sgd_solver.cpp:105] Iteration 2600, lr = 0.00840857
I0112 13:59:32.893954  4605 solver.cpp:218] Iteration 2700 (19.2299 iter/s, 5.20024s/100 iters), loss = 0.0818715
I0112 13:59:32.894017  4605 solver.cpp:237]     Train net output #0: loss = 0.0818715 (* 1 = 0.0818715 loss)
I0112 13:59:32.894032  4605 sgd_solver.cpp:105] Iteration 2700, lr = 0.00835886
I0112 13:59:38.094995  4605 solver.cpp:218] Iteration 2800 (19.2273 iter/s, 5.20093s/100 iters), loss = 0.00166001
I0112 13:59:38.095055  4605 solver.cpp:237]     Train net output #0: loss = 0.00165998 (* 1 = 0.00165998 loss)
I0112 13:59:38.095067  4605 sgd_solver.cpp:105] Iteration 2800, lr = 0.00830984
I0112 13:59:38.512676  4616 data_layer.cpp:73] Restarting data prefetching from start.
I0112 13:59:43.295424  4605 solver.cpp:218] Iteration 2900 (19.2296 iter/s, 5.20033s/100 iters), loss = 0.0198968
I0112 13:59:43.295598  4605 solver.cpp:237]     Train net output #0: loss = 0.0198968 (* 1 = 0.0198968 loss)
I0112 13:59:43.295616  4605 sgd_solver.cpp:105] Iteration 2900, lr = 0.00826148
I0112 13:59:48.410835  4605 solver.cpp:330] Iteration 3000, Testing net (#0)
I0112 13:59:50.717015  4617 data_layer.cpp:73] Restarting data prefetching from start.
I0112 13:59:50.810612  4605 solver.cpp:397]     Test net output #0: accuracy = 0.9867
I0112 13:59:50.810650  4605 solver.cpp:397]     Test net output #1: loss = 0.0373926 (* 1 = 0.0373926 loss)
I0112 13:59:50.861148  4605 solver.cpp:218] Iteration 3000 (13.218 iter/s, 7.56547s/100 iters), loss = 0.00747836
I0112 13:59:50.861227  4605 solver.cpp:237]     Train net output #0: loss = 0.00747831 (* 1 = 0.00747831 loss)
I0112 13:59:50.861239  4605 sgd_solver.cpp:105] Iteration 3000, lr = 0.00821377
I0112 13:59:56.060351  4605 solver.cpp:218] Iteration 3100 (19.2342 iter/s, 5.19908s/100 iters), loss = 0.00772881
I0112 13:59:56.060405  4605 solver.cpp:237]     Train net output #0: loss = 0.00772875 (* 1 = 0.00772875 loss)
I0112 13:59:56.060427  4605 sgd_solver.cpp:105] Iteration 3100, lr = 0.0081667
I0112 14:00:01.275904  4605 solver.cpp:218] Iteration 3200 (19.1739 iter/s, 5.21542s/100 iters), loss = 0.00692212
I0112 14:00:01.276000  4605 solver.cpp:237]     Train net output #0: loss = 0.00692206 (* 1 = 0.00692206 loss)
I0112 14:00:01.276022  4605 sgd_solver.cpp:105] Iteration 3200, lr = 0.00812025
I0112 14:00:06.483846  4605 solver.cpp:218] Iteration 3300 (19.202 iter/s, 5.20779s/100 iters), loss = 0.0332235
I0112 14:00:06.483925  4605 solver.cpp:237]     Train net output #0: loss = 0.0332235 (* 1 = 0.0332235 loss)
I0112 14:00:06.483939  4605 sgd_solver.cpp:105] Iteration 3300, lr = 0.00807442
I0112 14:00:11.688371  4605 solver.cpp:218] Iteration 3400 (19.2145 iter/s, 5.20439s/100 iters), loss = 0.0107628
I0112 14:00:11.688437  4605 solver.cpp:237]     Train net output #0: loss = 0.0107627 (* 1 = 0.0107627 loss)
I0112 14:00:11.688452  4605 sgd_solver.cpp:105] Iteration 3400, lr = 0.00802918
I0112 14:00:16.806285  4605 solver.cpp:330] Iteration 3500, Testing net (#0)
I0112 14:00:19.114704  4617 data_layer.cpp:73] Restarting data prefetching from start.
I0112 14:00:19.208638  4605 solver.cpp:397]     Test net output #0: accuracy = 0.9853
I0112 14:00:19.208679  4605 solver.cpp:397]     Test net output #1: loss = 0.0436789 (* 1 = 0.0436789 loss)
I0112 14:00:19.259240  4605 solver.cpp:218] Iteration 3500 (13.2088 iter/s, 7.57073s/100 iters), loss = 0.00571759
I0112 14:00:19.259318  4605 solver.cpp:237]     Train net output #0: loss = 0.00571754 (* 1 = 0.00571754 loss)
I0112 14:00:19.259337  4605 sgd_solver.cpp:105] Iteration 3500, lr = 0.00798454
I0112 14:00:24.462276  4605 solver.cpp:218] Iteration 3600 (19.2201 iter/s, 5.2029s/100 iters), loss = 0.0379283
I0112 14:00:24.462355  4605 solver.cpp:237]     Train net output #0: loss = 0.0379283 (* 1 = 0.0379283 loss)
I0112 14:00:24.462369  4605 sgd_solver.cpp:105] Iteration 3600, lr = 0.00794046
I0112 14:00:29.667263  4605 solver.cpp:218] Iteration 3700 (19.2128 iter/s, 5.20486s/100 iters), loss = 0.0133145
I0112 14:00:29.667327  4605 solver.cpp:237]     Train net output #0: loss = 0.0133144 (* 1 = 0.0133144 loss)
I0112 14:00:29.667342  4605 sgd_solver.cpp:105] Iteration 3700, lr = 0.00789695
I0112 14:00:32.008819  4616 data_layer.cpp:73] Restarting data prefetching from start.
I0112 14:00:34.870044  4605 solver.cpp:218] Iteration 3800 (19.2209 iter/s, 5.20267s/100 iters), loss = 0.00914176
I0112 14:00:34.870115  4605 solver.cpp:237]     Train net output #0: loss = 0.00914171 (* 1 = 0.00914171 loss)
I0112 14:00:34.870128  4605 sgd_solver.cpp:105] Iteration 3800, lr = 0.007854
I0112 14:00:40.072796  4605 solver.cpp:218] Iteration 3900 (19.221 iter/s, 5.20264s/100 iters), loss = 0.015734
I0112 14:00:40.072865  4605 solver.cpp:237]     Train net output #0: loss = 0.0157339 (* 1 = 0.0157339 loss)
I0112 14:00:40.072880  4605 sgd_solver.cpp:105] Iteration 3900, lr = 0.00781158
I0112 14:00:45.190907  4605 solver.cpp:330] Iteration 4000, Testing net (#0)
I0112 14:00:47.501956  4617 data_layer.cpp:73] Restarting data prefetching from start.
I0112 14:00:47.595811  4605 solver.cpp:397]     Test net output #0: accuracy = 0.9899
I0112 14:00:47.595860  4605 solver.cpp:397]     Test net output #1: loss = 0.029029 (* 1 = 0.029029 loss)
I0112 14:00:47.646351  4605 solver.cpp:218] Iteration 4000 (13.2041 iter/s, 7.57341s/100 iters), loss = 0.0225254
I0112 14:00:47.646433  4605 solver.cpp:237]     Train net output #0: loss = 0.0225254 (* 1 = 0.0225254 loss)
I0112 14:00:47.646450  4605 sgd_solver.cpp:105] Iteration 4000, lr = 0.0077697
I0112 14:00:52.850426  4605 solver.cpp:218] Iteration 4100 (19.2162 iter/s, 5.20394s/100 iters), loss = 0.017278
I0112 14:00:52.850503  4605 solver.cpp:237]     Train net output #0: loss = 0.017278 (* 1 = 0.017278 loss)
I0112 14:00:52.850517  4605 sgd_solver.cpp:105] Iteration 4100, lr = 0.00772833
I0112 14:00:58.057740  4605 solver.cpp:218] Iteration 4200 (19.2042 iter/s, 5.20719s/100 iters), loss = 0.0154973
I0112 14:00:58.057804  4605 solver.cpp:237]     Train net output #0: loss = 0.0154973 (* 1 = 0.0154973 loss)
I0112 14:00:58.057817  4605 sgd_solver.cpp:105] Iteration 4200, lr = 0.00768748
I0112 14:01:03.262459  4605 solver.cpp:218] Iteration 4300 (19.2137 iter/s, 5.20461s/100 iters), loss = 0.0672259
I0112 14:01:03.262518  4605 solver.cpp:237]     Train net output #0: loss = 0.0672259 (* 1 = 0.0672259 loss)
I0112 14:01:03.262534  4605 sgd_solver.cpp:105] Iteration 4300, lr = 0.00764712
I0112 14:01:08.466617  4605 solver.cpp:218] Iteration 4400 (19.2158 iter/s, 5.20404s/100 iters), loss = 0.0157733
I0112 14:01:08.466697  4605 solver.cpp:237]     Train net output #0: loss = 0.0157734 (* 1 = 0.0157734 loss)
I0112 14:01:08.466714  4605 sgd_solver.cpp:105] Iteration 4400, lr = 0.00760726
I0112 14:01:13.588131  4605 solver.cpp:330] Iteration 4500, Testing net (#0)
I0112 14:01:15.898298  4617 data_layer.cpp:73] Restarting data prefetching from start.
I0112 14:01:15.991850  4605 solver.cpp:397]     Test net output #0: accuracy = 0.9867
I0112 14:01:15.991889  4605 solver.cpp:397]     Test net output #1: loss = 0.0401761 (* 1 = 0.0401761 loss)
I0112 14:01:16.042362  4605 solver.cpp:218] Iteration 4500 (13.2003 iter/s, 7.57556s/100 iters), loss = 0.00598018
I0112 14:01:16.042477  4605 solver.cpp:237]     Train net output #0: loss = 0.00598019 (* 1 = 0.00598019 loss)
I0112 14:01:16.042500  4605 sgd_solver.cpp:105] Iteration 4500, lr = 0.00756788
I0112 14:01:21.251485  4605 solver.cpp:218] Iteration 4600 (19.1977 iter/s, 5.20895s/100 iters), loss = 0.014779
I0112 14:01:21.251601  4605 solver.cpp:237]     Train net output #0: loss = 0.014779 (* 1 = 0.014779 loss)
I0112 14:01:21.251623  4605 sgd_solver.cpp:105] Iteration 4600, lr = 0.00752897
I0112 14:01:25.575083  4616 data_layer.cpp:73] Restarting data prefetching from start.
I0112 14:01:26.457192  4605 solver.cpp:218] Iteration 4700 (19.2103 iter/s, 5.20553s/100 iters), loss = 0.00893127
I0112 14:01:26.457307  4605 solver.cpp:237]     Train net output #0: loss = 0.00893128 (* 1 = 0.00893128 loss)
I0112 14:01:26.457329  4605 sgd_solver.cpp:105] Iteration 4700, lr = 0.00749052
I0112 14:01:31.663359  4605 solver.cpp:218] Iteration 4800 (19.2086 iter/s, 5.206s/100 iters), loss = 0.0135461
I0112 14:01:31.663432  4605 solver.cpp:237]     Train net output #0: loss = 0.0135461 (* 1 = 0.0135461 loss)
I0112 14:01:31.663452  4605 sgd_solver.cpp:105] Iteration 4800, lr = 0.00745253
I0112 14:01:36.866758  4605 solver.cpp:218] Iteration 4900 (19.2187 iter/s, 5.20327s/100 iters), loss = 0.00427028
I0112 14:01:36.866837  4605 solver.cpp:237]     Train net output #0: loss = 0.00427027 (* 1 = 0.00427027 loss)
I0112 14:01:36.866849  4605 sgd_solver.cpp:105] Iteration 4900, lr = 0.00741498
I0112 14:01:41.992313  4605 solver.cpp:447] Snapshotting to binary proto file examples/mnist/lenet_iter_5000.caffemodel
I0112 14:01:42.035707  4605 sgd_solver.cpp:273] Snapshotting solver state to binary proto file examples/mnist/lenet_iter_5000.solverstate
I0112 14:01:42.039849  4605 solver.cpp:330] Iteration 5000, Testing net (#0)
I0112 14:01:44.312611  4617 data_layer.cpp:73] Restarting data prefetching from start.
I0112 14:01:44.406785  4605 solver.cpp:397]     Test net output #0: accuracy = 0.9902
I0112 14:01:44.406824  4605 solver.cpp:397]     Test net output #1: loss = 0.0296651 (* 1 = 0.0296651 loss)
I0112 14:01:44.457312  4605 solver.cpp:218] Iteration 5000 (13.1745 iter/s, 7.5904s/100 iters), loss = 0.0190112
I0112 14:01:44.457389  4605 solver.cpp:237]     Train net output #0: loss = 0.0190112 (* 1 = 0.0190112 loss)
I0112 14:01:44.457402  4605 sgd_solver.cpp:105] Iteration 5000, lr = 0.00737788
I0112 14:01:49.656071  4605 solver.cpp:218] Iteration 5100 (19.2358 iter/s, 5.19864s/100 iters), loss = 0.0170202
I0112 14:01:49.656131  4605 solver.cpp:237]     Train net output #0: loss = 0.0170202 (* 1 = 0.0170202 loss)
I0112 14:01:49.656143  4605 sgd_solver.cpp:105] Iteration 5100, lr = 0.0073412
I0112 14:01:54.855412  4605 solver.cpp:218] Iteration 5200 (19.2336 iter/s, 5.19924s/100 iters), loss = 0.00894152
I0112 14:01:54.855540  4605 solver.cpp:237]     Train net output #0: loss = 0.00894146 (* 1 = 0.00894146 loss)
I0112 14:01:54.855554  4605 sgd_solver.cpp:105] Iteration 5200, lr = 0.00730495
I0112 14:02:00.054531  4605 solver.cpp:218] Iteration 5300 (19.2346 iter/s, 5.19895s/100 iters), loss = 0.000942099
I0112 14:02:00.054582  4605 solver.cpp:237]     Train net output #0: loss = 0.000942038 (* 1 = 0.000942038 loss)
I0112 14:02:00.054595  4605 sgd_solver.cpp:105] Iteration 5300, lr = 0.00726911
I0112 14:02:05.254628  4605 solver.cpp:218] Iteration 5400 (19.2308 iter/s, 5.2s/100 iters), loss = 0.0121031
I0112 14:02:05.254685  4605 solver.cpp:237]     Train net output #0: loss = 0.0121031 (* 1 = 0.0121031 loss)
I0112 14:02:05.254699  4605 sgd_solver.cpp:105] Iteration 5400, lr = 0.00723368
I0112 14:02:10.368352  4605 solver.cpp:330] Iteration 5500, Testing net (#0)
I0112 14:02:12.675007  4617 data_layer.cpp:73] Restarting data prefetching from start.
I0112 14:02:12.768986  4605 solver.cpp:397]     Test net output #0: accuracy = 0.9892
I0112 14:02:12.769026  4605 solver.cpp:397]     Test net output #1: loss = 0.031965 (* 1 = 0.031965 loss)
I0112 14:02:12.819501  4605 solver.cpp:218] Iteration 5500 (13.2192 iter/s, 7.56475s/100 iters), loss = 0.01058
I0112 14:02:12.819579  4605 solver.cpp:237]     Train net output #0: loss = 0.01058 (* 1 = 0.01058 loss)
I0112 14:02:12.819593  4605 sgd_solver.cpp:105] Iteration 5500, lr = 0.00719865
I0112 14:02:18.017997  4605 solver.cpp:218] Iteration 5600 (19.2368 iter/s, 5.19837s/100 iters), loss = 0.000863581
I0112 14:02:18.018057  4605 solver.cpp:237]     Train net output #0: loss = 0.000863529 (* 1 = 0.000863529 loss)
I0112 14:02:18.018071  4605 sgd_solver.cpp:105] Iteration 5600, lr = 0.00716402
I0112 14:02:19.059535  4616 data_layer.cpp:73] Restarting data prefetching from start.
I0112 14:02:23.215595  4605 solver.cpp:218] Iteration 5700 (19.24 iter/s, 5.1975s/100 iters), loss = 0.00275054
I0112 14:02:23.215651  4605 solver.cpp:237]     Train net output #0: loss = 0.0027505 (* 1 = 0.0027505 loss)
I0112 14:02:23.215662  4605 sgd_solver.cpp:105] Iteration 5700, lr = 0.00712977
I0112 14:02:28.416524  4605 solver.cpp:218] Iteration 5800 (19.2277 iter/s, 5.20083s/100 iters), loss = 0.0391812
I0112 14:02:28.416708  4605 solver.cpp:237]     Train net output #0: loss = 0.0391811 (* 1 = 0.0391811 loss)
I0112 14:02:28.416728  4605 sgd_solver.cpp:105] Iteration 5800, lr = 0.0070959
I0112 14:02:33.617794  4605 solver.cpp:218] Iteration 5900 (19.2269 iter/s, 5.20105s/100 iters), loss = 0.00609078
I0112 14:02:33.617852  4605 solver.cpp:237]     Train net output #0: loss = 0.00609072 (* 1 = 0.00609072 loss)
I0112 14:02:33.617866  4605 sgd_solver.cpp:105] Iteration 5900, lr = 0.0070624
I0112 14:02:38.732347  4605 solver.cpp:330] Iteration 6000, Testing net (#0)
I0112 14:02:41.039494  4617 data_layer.cpp:73] Restarting data prefetching from start.
I0112 14:02:41.133369  4605 solver.cpp:397]     Test net output #0: accuracy = 0.9904
I0112 14:02:41.133410  4605 solver.cpp:397]     Test net output #1: loss = 0.0272586 (* 1 = 0.0272586 loss)
I0112 14:02:41.183835  4605 solver.cpp:218] Iteration 6000 (13.2172 iter/s, 7.56592s/100 iters), loss = 0.00282902
I0112 14:02:41.183910  4605 solver.cpp:237]     Train net output #0: loss = 0.00282896 (* 1 = 0.00282896 loss)
I0112 14:02:41.183924  4605 sgd_solver.cpp:105] Iteration 6000, lr = 0.00702927
I0112 14:02:46.383450  4605 solver.cpp:218] Iteration 6100 (19.2327 iter/s, 5.19949s/100 iters), loss = 0.00361349
I0112 14:02:46.383522  4605 solver.cpp:237]     Train net output #0: loss = 0.00361343 (* 1 = 0.00361343 loss)
I0112 14:02:46.383538  4605 sgd_solver.cpp:105] Iteration 6100, lr = 0.0069965
I0112 14:02:51.583528  4605 solver.cpp:218] Iteration 6200 (19.2309 iter/s, 5.19997s/100 iters), loss = 0.00890321
I0112 14:02:51.583590  4605 solver.cpp:237]     Train net output #0: loss = 0.00890316 (* 1 = 0.00890316 loss)
I0112 14:02:51.583602  4605 sgd_solver.cpp:105] Iteration 6200, lr = 0.00696408
I0112 14:02:56.782480  4605 solver.cpp:218] Iteration 6300 (19.235 iter/s, 5.19885s/100 iters), loss = 0.00851892
I0112 14:02:56.782539  4605 solver.cpp:237]     Train net output #0: loss = 0.00851887 (* 1 = 0.00851887 loss)
I0112 14:02:56.782553  4605 sgd_solver.cpp:105] Iteration 6300, lr = 0.00693201
I0112 14:03:01.982347  4605 solver.cpp:218] Iteration 6400 (19.2316 iter/s, 5.19977s/100 iters), loss = 0.00671306
I0112 14:03:01.982455  4605 solver.cpp:237]     Train net output #0: loss = 0.00671302 (* 1 = 0.00671302 loss)
I0112 14:03:01.982470  4605 sgd_solver.cpp:105] Iteration 6400, lr = 0.00690029
I0112 14:03:07.094683  4605 solver.cpp:330] Iteration 6500, Testing net (#0)
I0112 14:03:09.401098  4617 data_layer.cpp:73] Restarting data prefetching from start.
I0112 14:03:09.494998  4605 solver.cpp:397]     Test net output #0: accuracy = 0.9898
I0112 14:03:09.495038  4605 solver.cpp:397]     Test net output #1: loss = 0.0315893 (* 1 = 0.0315893 loss)
I0112 14:03:09.545534  4605 solver.cpp:218] Iteration 6500 (13.2222 iter/s, 7.56302s/100 iters), loss = 0.0187188
I0112 14:03:09.545611  4605 solver.cpp:237]     Train net output #0: loss = 0.0187188 (* 1 = 0.0187188 loss)
I0112 14:03:09.545625  4605 sgd_solver.cpp:105] Iteration 6500, lr = 0.0068689
I0112 14:03:12.562772  4616 data_layer.cpp:73] Restarting data prefetching from start.
I0112 14:03:14.745219  4605 solver.cpp:218] Iteration 6600 (19.2324 iter/s, 5.19956s/100 iters), loss = 0.0217157
I0112 14:03:14.745277  4605 solver.cpp:237]     Train net output #0: loss = 0.0217157 (* 1 = 0.0217157 loss)
I0112 14:03:14.745290  4605 sgd_solver.cpp:105] Iteration 6600, lr = 0.00683784
I0112 14:03:19.943637  4605 solver.cpp:218] Iteration 6700 (19.237 iter/s, 5.19833s/100 iters), loss = 0.00719072
I0112 14:03:19.943696  4605 solver.cpp:237]     Train net output #0: loss = 0.0071907 (* 1 = 0.0071907 loss)
I0112 14:03:19.943711  4605 sgd_solver.cpp:105] Iteration 6700, lr = 0.00680711
I0112 14:03:25.152591  4605 solver.cpp:218] Iteration 6800 (19.1981 iter/s, 5.20884s/100 iters), loss = 0.00298378
I0112 14:03:25.152668  4605 solver.cpp:237]     Train net output #0: loss = 0.00298375 (* 1 = 0.00298375 loss)
I0112 14:03:25.152689  4605 sgd_solver.cpp:105] Iteration 6800, lr = 0.0067767
I0112 14:03:30.365628  4605 solver.cpp:218] Iteration 6900 (19.1831 iter/s, 5.21291s/100 iters), loss = 0.00810112
I0112 14:03:30.365710  4605 solver.cpp:237]     Train net output #0: loss = 0.00810109 (* 1 = 0.00810109 loss)
I0112 14:03:30.365727  4605 sgd_solver.cpp:105] Iteration 6900, lr = 0.0067466
I0112 14:03:35.493791  4605 solver.cpp:330] Iteration 7000, Testing net (#0)
I0112 14:03:37.809173  4617 data_layer.cpp:73] Restarting data prefetching from start.
I0112 14:03:37.903611  4605 solver.cpp:397]     Test net output #0: accuracy = 0.9908
I0112 14:03:37.903656  4605 solver.cpp:397]     Test net output #1: loss = 0.0282443 (* 1 = 0.0282443 loss)
I0112 14:03:37.954181  4605 solver.cpp:218] Iteration 7000 (13.178 iter/s, 7.58842s/100 iters), loss = 0.0118699
I0112 14:03:37.954258  4605 solver.cpp:237]     Train net output #0: loss = 0.0118699 (* 1 = 0.0118699 loss)
I0112 14:03:37.954277  4605 sgd_solver.cpp:105] Iteration 7000, lr = 0.00671681
I0112 14:03:43.163439  4605 solver.cpp:218] Iteration 7100 (19.197 iter/s, 5.20914s/100 iters), loss = 0.00973831
I0112 14:03:43.163498  4605 solver.cpp:237]     Train net output #0: loss = 0.00973828 (* 1 = 0.00973828 loss)
I0112 14:03:43.163511  4605 sgd_solver.cpp:105] Iteration 7100, lr = 0.00668733
I0112 14:03:48.370353  4605 solver.cpp:218] Iteration 7200 (19.2056 iter/s, 5.20682s/100 iters), loss = 0.00335247
I0112 14:03:48.370426  4605 solver.cpp:237]     Train net output #0: loss = 0.00335244 (* 1 = 0.00335244 loss)
I0112 14:03:48.370446  4605 sgd_solver.cpp:105] Iteration 7200, lr = 0.00665815
I0112 14:03:53.576642  4605 solver.cpp:218] Iteration 7300 (19.2079 iter/s, 5.20618s/100 iters), loss = 0.0219852
I0112 14:03:53.576716  4605 solver.cpp:237]     Train net output #0: loss = 0.0219852 (* 1 = 0.0219852 loss)
I0112 14:03:53.576733  4605 sgd_solver.cpp:105] Iteration 7300, lr = 0.00662927
I0112 14:03:58.785342  4605 solver.cpp:218] Iteration 7400 (19.199 iter/s, 5.2086s/100 iters), loss = 0.00455252
I0112 14:03:58.785408  4605 solver.cpp:237]     Train net output #0: loss = 0.00455249 (* 1 = 0.00455249 loss)
I0112 14:03:58.785421  4605 sgd_solver.cpp:105] Iteration 7400, lr = 0.00660067
I0112 14:04:03.736567  4616 data_layer.cpp:73] Restarting data prefetching from start.
I0112 14:04:03.910609  4605 solver.cpp:330] Iteration 7500, Testing net (#0)
I0112 14:04:06.219141  4617 data_layer.cpp:73] Restarting data prefetching from start.
I0112 14:04:06.312794  4605 solver.cpp:397]     Test net output #0: accuracy = 0.9891
I0112 14:04:06.312847  4605 solver.cpp:397]     Test net output #1: loss = 0.0323302 (* 1 = 0.0323302 loss)
I0112 14:04:06.363332  4605 solver.cpp:218] Iteration 7500 (13.1963 iter/s, 7.57787s/100 iters), loss = 0.00170278
I0112 14:04:06.363407  4605 solver.cpp:237]     Train net output #0: loss = 0.00170275 (* 1 = 0.00170275 loss)
I0112 14:04:06.363432  4605 sgd_solver.cpp:105] Iteration 7500, lr = 0.00657236
I0112 14:04:11.567973  4605 solver.cpp:218] Iteration 7600 (19.2141 iter/s, 5.20452s/100 iters), loss = 0.00527642
I0112 14:04:11.568038  4605 solver.cpp:237]     Train net output #0: loss = 0.00527638 (* 1 = 0.00527638 loss)
I0112 14:04:11.568053  4605 sgd_solver.cpp:105] Iteration 7600, lr = 0.00654433
I0112 14:04:16.778379  4605 solver.cpp:218] Iteration 7700 (19.1928 iter/s, 5.21029s/100 iters), loss = 0.0344356
I0112 14:04:16.778457  4605 solver.cpp:237]     Train net output #0: loss = 0.0344355 (* 1 = 0.0344355 loss)
I0112 14:04:16.778471  4605 sgd_solver.cpp:105] Iteration 7700, lr = 0.00651658
I0112 14:04:21.987366  4605 solver.cpp:218] Iteration 7800 (19.1981 iter/s, 5.20886s/100 iters), loss = 0.00238596
I0112 14:04:21.987444  4605 solver.cpp:237]     Train net output #0: loss = 0.0023859 (* 1 = 0.0023859 loss)
I0112 14:04:21.987462  4605 sgd_solver.cpp:105] Iteration 7800, lr = 0.00648911
I0112 14:04:27.199033  4605 solver.cpp:218] Iteration 7900 (19.1882 iter/s, 5.21154s/100 iters), loss = 0.00611803
I0112 14:04:27.199168  4605 solver.cpp:237]     Train net output #0: loss = 0.00611798 (* 1 = 0.00611798 loss)
I0112 14:04:27.199187  4605 sgd_solver.cpp:105] Iteration 7900, lr = 0.0064619
I0112 14:04:32.325093  4605 solver.cpp:330] Iteration 8000, Testing net (#0)
I0112 14:04:34.645023  4617 data_layer.cpp:73] Restarting data prefetching from start.
I0112 14:04:34.738813  4605 solver.cpp:397]     Test net output #0: accuracy = 0.9903
I0112 14:04:34.738867  4605 solver.cpp:397]     Test net output #1: loss = 0.0289319 (* 1 = 0.0289319 loss)
I0112 14:04:34.789491  4605 solver.cpp:218] Iteration 8000 (13.1748 iter/s, 7.59026s/100 iters), loss = 0.00690512
I0112 14:04:34.789590  4605 solver.cpp:237]     Train net output #0: loss = 0.00690506 (* 1 = 0.00690506 loss)
I0112 14:04:34.789618  4605 sgd_solver.cpp:105] Iteration 8000, lr = 0.00643496
I0112 14:04:39.997692  4605 solver.cpp:218] Iteration 8100 (19.201 iter/s, 5.20807s/100 iters), loss = 0.0260982
I0112 14:04:39.997867  4605 solver.cpp:237]     Train net output #0: loss = 0.0260982 (* 1 = 0.0260982 loss)
I0112 14:04:39.997889  4605 sgd_solver.cpp:105] Iteration 8100, lr = 0.00640827
I0112 14:04:45.208073  4605 solver.cpp:218] Iteration 8200 (19.1932 iter/s, 5.21017s/100 iters), loss = 0.00465579
I0112 14:04:45.208140  4605 solver.cpp:237]     Train net output #0: loss = 0.00465573 (* 1 = 0.00465573 loss)
I0112 14:04:45.208156  4605 sgd_solver.cpp:105] Iteration 8200, lr = 0.00638185
I0112 14:04:50.419154  4605 solver.cpp:218] Iteration 8300 (19.1903 iter/s, 5.21097s/100 iters), loss = 0.0238528
I0112 14:04:50.419225  4605 solver.cpp:237]     Train net output #0: loss = 0.0238527 (* 1 = 0.0238527 loss)
I0112 14:04:50.419245  4605 sgd_solver.cpp:105] Iteration 8300, lr = 0.00635567
I0112 14:04:55.623467  4605 solver.cpp:218] Iteration 8400 (19.2152 iter/s, 5.20421s/100 iters), loss = 0.011262
I0112 14:04:55.623533  4605 solver.cpp:237]     Train net output #0: loss = 0.011262 (* 1 = 0.011262 loss)
I0112 14:04:55.623548  4605 sgd_solver.cpp:105] Iteration 8400, lr = 0.00632975
I0112 14:04:57.341358  4616 data_layer.cpp:73] Restarting data prefetching from start.
I0112 14:05:00.740424  4605 solver.cpp:330] Iteration 8500, Testing net (#0)
I0112 14:05:03.046005  4617 data_layer.cpp:73] Restarting data prefetching from start.
I0112 14:05:03.140305  4605 solver.cpp:397]     Test net output #0: accuracy = 0.9902
I0112 14:05:03.140347  4605 solver.cpp:397]     Test net output #1: loss = 0.0298106 (* 1 = 0.0298106 loss)
I0112 14:05:03.190874  4605 solver.cpp:218] Iteration 8500 (13.2148 iter/s, 7.56729s/100 iters), loss = 0.00609464
I0112 14:05:03.190951  4605 solver.cpp:237]     Train net output #0: loss = 0.00609459 (* 1 = 0.00609459 loss)
I0112 14:05:03.190965  4605 sgd_solver.cpp:105] Iteration 8500, lr = 0.00630407
I0112 14:05:08.389690  4605 solver.cpp:218] Iteration 8600 (19.2356 iter/s, 5.1987s/100 iters), loss = 0.000793173
I0112 14:05:08.389744  4605 solver.cpp:237]     Train net output #0: loss = 0.000793116 (* 1 = 0.000793116 loss)
I0112 14:05:08.389755  4605 sgd_solver.cpp:105] Iteration 8600, lr = 0.00627864
I0112 14:05:13.589684  4605 solver.cpp:218] Iteration 8700 (19.2311 iter/s, 5.1999s/100 iters), loss = 0.0026464
I0112 14:05:13.589817  4605 solver.cpp:237]     Train net output #0: loss = 0.00264634 (* 1 = 0.00264634 loss)
I0112 14:05:13.589830  4605 sgd_solver.cpp:105] Iteration 8700, lr = 0.00625344
I0112 14:05:18.790073  4605 solver.cpp:218] Iteration 8800 (19.23 iter/s, 5.20021s/100 iters), loss = 0.00182293
I0112 14:05:18.790135  4605 solver.cpp:237]     Train net output #0: loss = 0.00182288 (* 1 = 0.00182288 loss)
I0112 14:05:18.790148  4605 sgd_solver.cpp:105] Iteration 8800, lr = 0.00622847
I0112 14:05:23.994278  4605 solver.cpp:218] Iteration 8900 (19.2156 iter/s, 5.2041s/100 iters), loss = 0.000394624
I0112 14:05:23.994349  4605 solver.cpp:237]     Train net output #0: loss = 0.00039457 (* 1 = 0.00039457 loss)
I0112 14:05:23.994364  4605 sgd_solver.cpp:105] Iteration 8900, lr = 0.00620374
I0112 14:05:29.116060  4605 solver.cpp:330] Iteration 9000, Testing net (#0)
I0112 14:05:31.432765  4617 data_layer.cpp:73] Restarting data prefetching from start.
I0112 14:05:31.526996  4605 solver.cpp:397]     Test net output #0: accuracy = 0.9903
I0112 14:05:31.527047  4605 solver.cpp:397]     Test net output #1: loss = 0.0287546 (* 1 = 0.0287546 loss)
I0112 14:05:31.577587  4605 solver.cpp:218] Iteration 9000 (13.1871 iter/s, 7.58318s/100 iters), loss = 0.018936
I0112 14:05:31.577656  4605 solver.cpp:237]     Train net output #0: loss = 0.0189359 (* 1 = 0.0189359 loss)
I0112 14:05:31.577672  4605 sgd_solver.cpp:105] Iteration 9000, lr = 0.00617924
I0112 14:05:36.784974  4605 solver.cpp:218] Iteration 9100 (19.2039 iter/s, 5.20727s/100 iters), loss = 0.00699512
I0112 14:05:36.785063  4605 solver.cpp:237]     Train net output #0: loss = 0.00699507 (* 1 = 0.00699507 loss)
I0112 14:05:36.785086  4605 sgd_solver.cpp:105] Iteration 9100, lr = 0.00615496
I0112 14:05:41.987784  4605 solver.cpp:218] Iteration 9200 (19.2209 iter/s, 5.20268s/100 iters), loss = 0.00425531
I0112 14:05:41.987859  4605 solver.cpp:237]     Train net output #0: loss = 0.00425526 (* 1 = 0.00425526 loss)
I0112 14:05:41.987876  4605 sgd_solver.cpp:105] Iteration 9200, lr = 0.0061309
I0112 14:05:47.193492  4605 solver.cpp:218] Iteration 9300 (19.2101 iter/s, 5.2056s/100 iters), loss = 0.00811363
I0112 14:05:47.193666  4605 solver.cpp:237]     Train net output #0: loss = 0.00811357 (* 1 = 0.00811357 loss)
I0112 14:05:47.193682  4605 sgd_solver.cpp:105] Iteration 9300, lr = 0.00610706
I0112 14:05:50.841812  4616 data_layer.cpp:73] Restarting data prefetching from start.
I0112 14:05:52.401444  4605 solver.cpp:218] Iteration 9400 (19.2022 iter/s, 5.20774s/100 iters), loss = 0.0287375
I0112 14:05:52.401510  4605 solver.cpp:237]     Train net output #0: loss = 0.0287374 (* 1 = 0.0287374 loss)
I0112 14:05:52.401528  4605 sgd_solver.cpp:105] Iteration 9400, lr = 0.00608343
I0112 14:05:57.522619  4605 solver.cpp:330] Iteration 9500, Testing net (#0)
I0112 14:05:59.834517  4617 data_layer.cpp:73] Restarting data prefetching from start.
I0112 14:05:59.928700  4605 solver.cpp:397]     Test net output #0: accuracy = 0.9882
I0112 14:05:59.928741  4605 solver.cpp:397]     Test net output #1: loss = 0.0368194 (* 1 = 0.0368194 loss)
I0112 14:05:59.979233  4605 solver.cpp:218] Iteration 9500 (13.1967 iter/s, 7.57766s/100 iters), loss = 0.0047314
I0112 14:05:59.979310  4605 solver.cpp:237]     Train net output #0: loss = 0.00473134 (* 1 = 0.00473134 loss)
I0112 14:05:59.979342  4605 sgd_solver.cpp:105] Iteration 9500, lr = 0.00606002
I0112 14:06:05.184339  4605 solver.cpp:218] Iteration 9600 (19.2123 iter/s, 5.20499s/100 iters), loss = 0.00195558
I0112 14:06:05.184397  4605 solver.cpp:237]     Train net output #0: loss = 0.00195552 (* 1 = 0.00195552 loss)
I0112 14:06:05.184412  4605 sgd_solver.cpp:105] Iteration 9600, lr = 0.00603682
I0112 14:06:10.397572  4605 solver.cpp:218] Iteration 9700 (19.1823 iter/s, 5.21314s/100 iters), loss = 0.00291929
I0112 14:06:10.397629  4605 solver.cpp:237]     Train net output #0: loss = 0.00291923 (* 1 = 0.00291923 loss)
I0112 14:06:10.397644  4605 sgd_solver.cpp:105] Iteration 9700, lr = 0.00601382
I0112 14:06:15.601755  4605 solver.cpp:218] Iteration 9800 (19.2157 iter/s, 5.20408s/100 iters), loss = 0.0147848
I0112 14:06:15.601830  4605 solver.cpp:237]     Train net output #0: loss = 0.0147848 (* 1 = 0.0147848 loss)
I0112 14:06:15.601847  4605 sgd_solver.cpp:105] Iteration 9800, lr = 0.00599102
I0112 14:06:20.808406  4605 solver.cpp:218] Iteration 9900 (19.2066 iter/s, 5.20655s/100 iters), loss = 0.00615287
I0112 14:06:20.808533  4605 solver.cpp:237]     Train net output #0: loss = 0.00615281 (* 1 = 0.00615281 loss)
I0112 14:06:20.808552  4605 sgd_solver.cpp:105] Iteration 9900, lr = 0.00596843
I0112 14:06:25.933642  4605 solver.cpp:447] Snapshotting to binary proto file examples/mnist/lenet_iter_10000.caffemodel
I0112 14:06:25.976433  4605 sgd_solver.cpp:273] Snapshotting solver state to binary proto file examples/mnist/lenet_iter_10000.solverstate
I0112 14:06:25.995990  4605 solver.cpp:310] Iteration 10000, loss = 0.00368586
I0112 14:06:25.996039  4605 solver.cpp:330] Iteration 10000, Testing net (#0)
I0112 14:06:28.273962  4617 data_layer.cpp:73] Restarting data prefetching from start.
I0112 14:06:28.368412  4605 solver.cpp:397]     Test net output #0: accuracy = 0.991
I0112 14:06:28.368463  4605 solver.cpp:397]     Test net output #1: loss = 0.0278615 (* 1 = 0.0278615 loss)
I0112 14:06:28.368472  4605 solver.cpp:315] Optimization Done.
I0112 14:06:28.368477  4605 caffe.cpp:259] Optimization Done.

Training is complete.

The final test output reports an accuracy of 0.991.
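The accuracy values scattered through the log can be collected with a short script; a minimal sketch, assuming the training log has been saved to a text file:

```python
import re

def parse_test_accuracy(log_text):
    """Collect (iteration, accuracy) pairs from a Caffe training log."""
    results = []
    current_iter = None
    for line in log_text.splitlines():
        # "Iteration N, Testing net" marks the start of an evaluation
        m = re.search(r'Iteration (\d+), Testing net', line)
        if m:
            current_iter = int(m.group(1))
            continue
        # the accuracy line follows a few lines later
        m = re.search(r'Test net output #0: accuracy = ([\d.]+)', line)
        if m and current_iter is not None:
            results.append((current_iter, float(m.group(1))))
            current_iter = None
    return results

sample = (
    "I0112 14:06:25.996039 4605 solver.cpp:330] Iteration 10000, Testing net (#0)\n"
    "I0112 14:06:28.368412 4605 solver.cpp:397]     Test net output #0: accuracy = 0.991\n"
)
print(parse_test_accuracy(sample))  # [(10000, 0.991)]
```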

5. Extracting images from the MNIST test set

get_images.py:

#coding=utf-8

import struct
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

# Read /home/fc/software/mnist/data/t10k-images-idx3-ubyte in binary mode
filename = '/home/fc/software/mnist/data/t10k-images-idx3-ubyte'
binfile = open(filename, 'rb')
buf = binfile.read()

# The IDX3 header is four big-endian 32-bit integers:
# magic number, image count, rows, columns
index = 0
magic, numImages, numRows, numColumns = struct.unpack_from('>IIII', buf, index)
index += struct.calcsize('>IIII')

# Save each 28x28 image as a BMP file
for image in range(0, numImages):
    im = struct.unpack_from('>784B', buf, index)
    index += struct.calcsize('>784B')
    # Image.fromarray expects dtype uint8, so convert before reshaping
    im = np.array(im, dtype='uint8')
    im = im.reshape(28, 28)
    # fig = plt.figure()
    # plotwindow = fig.add_subplot(111)
    # plt.imshow(im, cmap='gray')
    # plt.show()
    im = Image.fromarray(im)
    im.save('./test-images/test-image_%s.bmp' % image, 'bmp')
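The ground-truth labels live in a companion file with the simpler IDX1 layout: a two-integer big-endian header (magic number, label count) followed by one byte per label. A minimal sketch of a parser; the helper name is an assumption for illustration, and the synthetic buffer stands in for t10k-labels-idx1-ubyte:

```python
import struct

def read_idx1_labels(buf):
    """Parse an in-memory IDX1 label file: '>II' header, then one byte per label."""
    magic, num_labels = struct.unpack_from('>II', buf, 0)
    offset = struct.calcsize('>II')
    return list(struct.unpack_from('>%dB' % num_labels, buf, offset))

# Synthetic buffer: magic 2049 (0x00000801), three labels 7, 2, 1
fake = struct.pack('>II', 2049, 3) + bytes([7, 2, 1])
print(read_idx1_labels(fake))  # [7, 2, 1]
```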

6. Testing handwritten digit recognition

Copy one of the extracted images into caffe/examples/images/, then run the following test script:

#!/usr/bin/env python
#coding=utf-8

import os
import sys
import numpy as np
import matplotlib.pyplot as plt

# Set caffe_root and prepend /home/fc/caffe/python to the module
# search path before importing caffe
caffe_root = '/home/fc/caffe/'
sys.path.insert(0, caffe_root + 'python')

import caffe

# LeNet deploy network definition
MODEL_FILE = '/home/fc/caffe/examples/mnist/lenet.prototxt'

# Trained model weights
PRETRAINED = '/home/fc/caffe/examples/mnist/lenet_iter_10000.caffemodel'

# Path to the test image
IMAGE_FILE = '/home/fc/caffe/examples/images/test-image_0.bmp'

# Select CPU mode before running inference
caffe.set_mode_cpu()

# Load the image through the caffe I/O helper (grayscale)
input_image = caffe.io.load_image(IMAGE_FILE, color=False)

# Load the LeNet classifier
net = caffe.Classifier(MODEL_FILE, PRETRAINED)

# Run the forward pass and classify the image
prediction = net.predict([input_image], oversample=False)

print 'predicted class:', prediction[0].argmax()
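As a sanity check, the spatial sizes printed during net setup (conv1 24x24, pool1 12x12, conv2 8x8, pool2 4x4) follow from the standard output-size formula for convolution and pooling layers; a minimal sketch:

```python
def out_size(in_size, kernel, stride=1, pad=0):
    # Spatial output size of a convolution or pooling layer
    return (in_size + 2 * pad - kernel) // stride + 1

conv1 = out_size(28, kernel=5)               # 28x28 input, 5x5 kernel -> 24
pool1 = out_size(conv1, kernel=2, stride=2)  # 2x2 max pool, stride 2   -> 12
conv2 = out_size(pool1, kernel=5)            # 5x5 kernel               -> 8
pool2 = out_size(conv2, kernel=2, stride=2)  # 2x2 max pool, stride 2   -> 4
print(conv1, pool1, conv2, pool2)  # 24 12 8 4
```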


Running it:

$ ./mnist_test.py 
WARNING: Logging before InitGoogleLogging() is written to STDERR
W0112 19:51:41.881881 10487 _caffe.cpp:139] DEPRECATION WARNING - deprecated use of Python interface
W0112 19:51:41.881916 10487 _caffe.cpp:140] Use this instead (with the named "weights" parameter):
W0112 19:51:41.881930 10487 _caffe.cpp:142] Net('/home/fc/caffe/examples/mnist/lenet.prototxt', 1, weights='/home/fc/caffe/examples/mnist/lenet_iter_10000.caffemodel')
I0112 19:51:42.112020 10487 net.cpp:51] Initializing net from parameters: 
name: "LeNet"
state {
  phase: TEST
  level: 0
}
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param {
    shape {
      dim: 64
      dim: 1
      dim: 28
      dim: 28
    }
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "prob"
  type: "Softmax"
  bottom: "ip2"
  top: "prob"
}
I0112 19:51:42.112149 10487 layer_factory.hpp:77] Creating layer data
I0112 19:51:42.112165 10487 net.cpp:84] Creating Layer data
I0112 19:51:42.112174 10487 net.cpp:380] data -> data
I0112 19:51:42.112195 10487 net.cpp:122] Setting up data
I0112 19:51:42.112207 10487 net.cpp:129] Top shape: 64 1 28 28 (50176)
I0112 19:51:42.112212 10487 net.cpp:137] Memory required for data: 200704
I0112 19:51:42.112220 10487 layer_factory.hpp:77] Creating layer conv1
I0112 19:51:42.112233 10487 net.cpp:84] Creating Layer conv1
I0112 19:51:42.112239 10487 net.cpp:406] conv1 <- data
I0112 19:51:42.112247 10487 net.cpp:380] conv1 -> conv1
I0112 19:51:42.112313 10487 net.cpp:122] Setting up conv1
I0112 19:51:42.112325 10487 net.cpp:129] Top shape: 64 20 24 24 (737280)
I0112 19:51:42.112331 10487 net.cpp:137] Memory required for data: 3149824
I0112 19:51:42.112347 10487 layer_factory.hpp:77] Creating layer pool1
I0112 19:51:42.112356 10487 net.cpp:84] Creating Layer pool1
I0112 19:51:42.112362 10487 net.cpp:406] pool1 <- conv1
I0112 19:51:42.112370 10487 net.cpp:380] pool1 -> pool1
I0112 19:51:42.112385 10487 net.cpp:122] Setting up pool1
I0112 19:51:42.112392 10487 net.cpp:129] Top shape: 64 20 12 12 (184320)
I0112 19:51:42.112398 10487 net.cpp:137] Memory required for data: 3887104
I0112 19:51:42.112403 10487 layer_factory.hpp:77] Creating layer conv2
I0112 19:51:42.112412 10487 net.cpp:84] Creating Layer conv2
I0112 19:51:42.112418 10487 net.cpp:406] conv2 <- pool1
I0112 19:51:42.112426 10487 net.cpp:380] conv2 -> conv2
I0112 19:51:42.112623 10487 net.cpp:122] Setting up conv2
I0112 19:51:42.112634 10487 net.cpp:129] Top shape: 64 50 8 8 (204800)
I0112 19:51:42.112640 10487 net.cpp:137] Memory required for data: 4706304
I0112 19:51:42.112651 10487 layer_factory.hpp:77] Creating layer pool2
I0112 19:51:42.112661 10487 net.cpp:84] Creating Layer pool2
I0112 19:51:42.112668 10487 net.cpp:406] pool2 <- conv2
I0112 19:51:42.112675 10487 net.cpp:380] pool2 -> pool2
I0112 19:51:42.112686 10487 net.cpp:122] Setting up pool2
I0112 19:51:42.112694 10487 net.cpp:129] Top shape: 64 50 4 4 (51200)
I0112 19:51:42.112699 10487 net.cpp:137] Memory required for data: 4911104
I0112 19:51:42.112705 10487 layer_factory.hpp:77] Creating layer ip1
I0112 19:51:42.112715 10487 net.cpp:84] Creating Layer ip1
I0112 19:51:42.112721 10487 net.cpp:406] ip1 <- pool2
I0112 19:51:42.112730 10487 net.cpp:380] ip1 -> ip1
I0112 19:51:42.115571 10487 net.cpp:122] Setting up ip1
I0112 19:51:42.115591 10487 net.cpp:129] Top shape: 64 500 (32000)
I0112 19:51:42.115597 10487 net.cpp:137] Memory required for data: 5039104
I0112 19:51:42.115612 10487 layer_factory.hpp:77] Creating layer relu1
I0112 19:51:42.115622 10487 net.cpp:84] Creating Layer relu1
I0112 19:51:42.115628 10487 net.cpp:406] relu1 <- ip1
I0112 19:51:42.115635 10487 net.cpp:367] relu1 -> ip1 (in-place)
I0112 19:51:42.115645 10487 net.cpp:122] Setting up relu1
I0112 19:51:42.115653 10487 net.cpp:129] Top shape: 64 500 (32000)
I0112 19:51:42.115658 10487 net.cpp:137] Memory required for data: 5167104
I0112 19:51:42.115664 10487 layer_factory.hpp:77] Creating layer ip2
I0112 19:51:42.115671 10487 net.cpp:84] Creating Layer ip2
I0112 19:51:42.115676 10487 net.cpp:406] ip2 <- ip1
I0112 19:51:42.115685 10487 net.cpp:380] ip2 -> ip2
I0112 19:51:42.115737 10487 net.cpp:122] Setting up ip2
I0112 19:51:42.115746 10487 net.cpp:129] Top shape: 64 10 (640)
I0112 19:51:42.115751 10487 net.cpp:137] Memory required for data: 5169664
I0112 19:51:42.115759 10487 layer_factory.hpp:77] Creating layer prob
I0112 19:51:42.115768 10487 net.cpp:84] Creating Layer prob
I0112 19:51:42.115774 10487 net.cpp:406] prob <- ip2
I0112 19:51:42.115782 10487 net.cpp:380] prob -> prob
I0112 19:51:42.115794 10487 net.cpp:122] Setting up prob
I0112 19:51:42.115802 10487 net.cpp:129] Top shape: 64 10 (640)
I0112 19:51:42.115808 10487 net.cpp:137] Memory required for data: 5172224
I0112 19:51:42.115813 10487 net.cpp:200] prob does not need backward computation.
I0112 19:51:42.115819 10487 net.cpp:200] ip2 does not need backward computation.
I0112 19:51:42.115825 10487 net.cpp:200] relu1 does not need backward computation.
I0112 19:51:42.115830 10487 net.cpp:200] ip1 does not need backward computation.
I0112 19:51:42.115836 10487 net.cpp:200] pool2 does not need backward computation.
I0112 19:51:42.115842 10487 net.cpp:200] conv2 does not need backward computation.
I0112 19:51:42.115847 10487 net.cpp:200] pool1 does not need backward computation.
I0112 19:51:42.115854 10487 net.cpp:200] conv1 does not need backward computation.
I0112 19:51:42.115859 10487 net.cpp:200] data does not need backward computation.
I0112 19:51:42.115865 10487 net.cpp:242] This network produces output prob
I0112 19:51:42.115875 10487 net.cpp:255] Network initialization done.
I0112 19:51:42.117382 10487 net.cpp:744] Ignoring source layer mnist
I0112 19:51:42.117758 10487 net.cpp:744] Ignoring source layer loss
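The "Top shape" lines in the setup log can be re-derived by hand from the layer parameters in the prototxt above. A minimal sketch (using the standard output-size formulas Caffe applies — floor division for convolution, ceiling for pooling):

```python
import math

def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution (floor division)."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel, stride):
    """Spatial output size of pooling (Caffe rounds up for pooling)."""
    return int(math.ceil((size - kernel) / stride)) + 1

h = 28                 # MNIST input:            64 1 28 28
h = conv_out(h, 5)     # conv1, 20 maps, 5x5/1:  64 20 24 24
assert h == 24
h = pool_out(h, 2, 2)  # pool1, MAX 2x2/2:       64 20 12 12
assert h == 12
h = conv_out(h, 5)     # conv2, 50 maps, 5x5/1:  64 50 8 8
assert h == 8
h = pool_out(h, 2, 2)  # pool2, MAX 2x2/2:       64 50 4 4
assert h == 4

# ip1 flattens 50 * 4 * 4 = 800 inputs down to 500 outputs,
# and ip2 maps those 500 to the 10 digit classes.
print(50 * h * h)  # -> 800
```

Each asserted value matches the corresponding `Top shape` line in the log (24, 12, 8, 4).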
predicted class: 7

The recognized digit is 7, so the network successfully identified the handwritten digit.
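The "predicted class" line comes from taking the argmax over the 10-element `prob` blob produced by the final Softmax layer. A minimal sketch of that last step — note the logits below are made up for illustration, not taken from the actual run:

```python
import math

def softmax(logits):
    """Numerically stable softmax, as computed by Caffe's Softmax layer."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical ip2 output (one logit per digit class 0-9) for an
# image of the digit 7; index 7 holds the largest score.
logits = [-2.1, 0.3, 1.0, 0.8, -1.5, -0.7, -3.2, 6.4, 0.1, 1.9]
prob = softmax(logits)
predicted = prob.index(max(prob))
print("predicted class:", predicted)  # -> predicted class: 7
```

Since softmax is monotonic, the argmax of `prob` is the same as the argmax of the raw `ip2` logits; the softmax only converts them into probabilities that sum to 1.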
