Caffe Usage Notes 3: End-to-End Training

1. Train

To train a network we need to create a solver. Taking CaffeNet as an example:

solver = caffe.Solver('./models/bvlc_reference_caffenet/solver.prototxt');
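
Before constructing the solver it is worth choosing the computation backend explicitly. The lines below are a minimal sketch using matcaffe's mode-selection calls; the GPU id 0 is just an assumption:

% Choose the backend before creating any solver or net
% (use caffe.set_mode_cpu() instead if no GPU is available).
caffe.set_mode_gpu();
caffe.set_device(0);    % assumed GPU id

solver = caffe.Solver('./models/bvlc_reference_caffenet/solver.prototxt');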

Let's look at what is defined in such a solver.prototxt:

net: "models/bvlc_reference_caffenet/train_val.prototxt"
test_iter: 1000
test_interval: 1000
base_lr: 0.01
lr_policy: "step"
gamma: 0.1
stepsize: 100000
display: 20
max_iter: 450000
momentum: 0.9
weight_decay: 0.0005
snapshot: 10000
snapshot_prefix: "models/bvlc_reference_caffenet/caffenet_train"
solver_mode: GPU
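
A quick aside on the optimization fields above: with lr_policy "step", Caffe multiplies the learning rate by gamma every stepsize iterations. A small sketch of that rule, plugging in the values from this solver:

base_lr = 0.01; gamma = 0.1; stepsize = 100000;
iter = 250000;                                   % some training iteration
lr = base_lr * gamma ^ floor(iter / stepsize);   % 0.01 * 0.1^2 = 1e-4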

Here comes the key point: the net defined on the first line is not the deploy file we used before, but train_val. The deploy file contains only the bare network structure, whereas train_val adds, on top of that structure, how each layer behaves in the train and test (val) phases, where the data comes from, what preprocessing is applied, and so on. We will not dig deeper into this for now.
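
For contrast, the deploy definition is what you would pair with a trained snapshot for inference. A rough sketch (the .caffemodel file name is only illustrative, derived from the snapshot_prefix above):

% Inference net: bare architecture (deploy) plus trained weights.
net = caffe.Net('./models/bvlc_reference_caffenet/deploy.prototxt', ...
                './models/bvlc_reference_caffenet/caffenet_train_iter_10000.caffemodel', ...
                'test');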

In MATLAB, to train:

solver.solve();    % run the full optimization, up to max_iter

solver.step(1000); % train for only 1000 iterations
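
A few more solver operations that tend to be useful in practice. This is only a sketch, and the snapshot and weight file names are assumptions based on the snapshot_prefix above:

% Train in chunks and report progress between chunks.
for k = 1:10
    solver.step(100);                        % 100 iterations per chunk
    fprintf('iteration %d done\n', solver.iter());
end

% Resume from a saved solver state (file name is illustrative).
% solver.restore('./models/bvlc_reference_caffenet/caffenet_train_iter_10000.solverstate');

% Save the current weights of the training net.
solver.net.save('./models/bvlc_reference_caffenet/my_snapshot.caffemodel');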

For more operations, see Caffe interfaces.

Case study - LeNet

Following the default settings in the Caffe LeNet MNIST tutorial, the solver for training LeNet is defined as:

lenet_solver.prototxt

# The train/test net protocol buffer definition
net: "examples/mnist/lenet_train_test.prototxt"
# test_iter specifies how many forward passes the test should carry out.
# In the case of MNIST, we have test batch size 100 and 100 test iterations,
# covering the full 10,000 testing images.
test_iter: 100
# Carry out testing every 500 training iterations.
test_interval: 500
# The base learning rate, momentum and the weight decay of the network.
base_lr: 0.01
momentum: 0.9
weight_decay: 0.0005
# The learning rate policy
lr_policy: "inv"
gamma: 0.0001
power: 0.75
# Display every 100 iterations
display: 100
# The maximum number of iterations
max_iter: 10000
# snapshot intermediate results
snapshot: 5000
snapshot_prefix: "examples/mnist/lenet"
# solver mode: CPU or GPU
solver_mode: GPU

What we can read from this file:

  • Testing runs once every 500 training iterations (test_interval: 500), and each test pass consists of 100 iterations (test_iter: 100); training itself runs for max_iter = 10,000 iterations.
  • One test pass therefore covers 100 * 100 = 10,000 images, since the test batch size is 100 (one full pass over the test set, i.e. one epoch).
  • Looking at lenet_train_test.prototxt, the training batch size is 64, so training sees 64 * 10,000 = 640,000 images. With a training set of 60,000 images, that is roughly 10.7 epochs (see the sketch after this list). Note:
    1. epoch: one full pass over a dataset (training or test set)
    2. iteration: one pass over a single batch
  • Most of the remaining fields are optimization settings; with lr_policy "inv" the learning rate decays gradually as lr = base_lr * (1 + gamma * iter)^(-power).
  • snapshot saves intermediate results. At the end of this run there will be two weight files, lenet_iter_5000.caffemodel and lenet_iter_10000.caffemodel (each with a matching .solverstate file that can be used to resume training).
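
To make the arithmetic above concrete, here is a small sketch using only the numbers from lenet_solver.prototxt and lenet_train_test.prototxt (the training-set size of 60,000 is the standard MNIST figure):

% Images seen during training and the resulting number of epochs.
train_batch = 64;  max_iter = 10000;  train_size = 60000;
epochs = train_batch * max_iter / train_size;   % 640000 / 60000 ≈ 10.7

% Images covered by one test pass (exactly one epoch of the test set).
test_batch = 100;  test_iter = 100;
test_images = test_batch * test_iter;           % 10000

% The "inv" learning rate policy: lr = base_lr * (1 + gamma*iter)^(-power).
base_lr = 0.01;  gamma = 0.0001;  power = 0.75;
iters = 0:1000:10000;
lr = base_lr * (1 + gamma * iters) .^ (-power); % slowly decaying schedule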