Key points:
1. Like the previous set of notes, these notes work on Kaggle's cats-vs-dogs classification problem. The difference is that the previous notes used a conventional "convolution + pooling + fully connected" network trained from scratch, whereas this time we tackle the problem with transfer learning, using VGG16 as the pretrained model.
2. VGGNet was a top model in ILSVRC 2014, proposed by K. Simonyan and A. Zisserman. Its architecture is simple and classic: a stack of convolutional blocks followed by fully connected layers. Each convolutional block follows the traditional "convolution + pooling" pattern, and the number of filters per block grows as the network gets deeper.
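As a quick sanity check of the architecture just described, VGG16 can be instantiated from Keras and its convolutional blocks inspected. A minimal sketch (`weights=None` is used here only to avoid downloading the pretrained weights; the transfer-learning code below keeps the default `weights='imagenet'`):

```python
import tensorflow as tf

# Build the VGG16 architecture only; weights=None skips the ImageNet download
model = tf.keras.applications.vgg16.VGG16(weights=None,
                                          include_top=False,
                                          input_shape=(224, 224, 3))

# Filters per conv layer double across blocks: 64 -> 128 -> 256 -> 512
conv_filters = [layer.filters for layer in model.layers
                if isinstance(layer, tf.keras.layers.Conv2D)]
print(conv_filters)
```

VGG16 contains 13 convolutional layers in 5 blocks; `include_top=False` drops the fully connected head, which is what makes the base reusable for a new classifier.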
3. In practice, transfer learning falls roughly into two cases:
a. If the pretrained model solves a problem similar or identical in nature to the current one (e.g., cats vs. dogs), the pretrained model's bottom layers can be reused directly for feature extraction: only the fully connected layers are rewritten and trained, and the bottom layers stay frozen during training.
b. Otherwise, in addition to step a, some of the top layers among the pretrained bottom layers are unfrozen and retrained, i.e., fine tuning.
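The two cases above differ only in which layers stay frozen. A minimal sketch of the freezing logic (layer names such as `block5_conv1` are VGG16's own; `weights=None` keeps the example lightweight, whereas real transfer learning would use the default `weights='imagenet'`):

```python
import tensorflow as tf

base = tf.keras.applications.vgg16.VGG16(weights=None,
                                         include_top=False,
                                         input_shape=(224, 224, 3))

# Case a: pure feature extraction -- freeze every pretrained layer
for layer in base.layers:
    layer.trainable = False

# Case b: fine tuning -- additionally unfreeze the top convolutional block
for layer in base.layers:
    if layer.name.startswith('block5'):
        layer.trainable = True

trainable = [layer.name for layer in base.layers if layer.trainable]
print(trainable)  # only the block5 layers remain trainable
```

When fine tuning, a small learning rate is usually preferred so the unfrozen pretrained weights are only nudged, not overwritten.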
4. VGG16 takes 224×224 input images, so all images in the code below are resized to 224×224.
5. VGG16 expects centered input images, i.e., standardized images: for each channel of every image, the mean pixel value of that channel over the ImageNet training set must be subtracted. In code this can be done with Keras's preprocess_input function, or by supplying the mean pixel values manually; when supplying them manually, set featurewise_center=True on the ImageDataGenerator.
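The manual centering described in point 5 is just a per-channel subtraction; a numpy sketch (means in RGB order, matching the RGB images that ImageDataGenerator feeds, with a hypothetical `center_image` helper):

```python
import numpy as np

# ImageNet per-channel mean pixel values (RGB order)
IMAGENET_MEAN = np.array([123.68, 116.779, 103.939], dtype='float32')

def center_image(img):
    """Subtract the ImageNet channel means, mimicking what
    featurewise_center=True plus datagen.mean does per image."""
    return img.astype('float32') - IMAGENET_MEAN

# A dummy 2x2 RGB image where every pixel equals the mean:
# after centering, every value becomes zero
img = np.tile(IMAGENET_MEAN, (2, 2, 1))
centered = center_image(img)
print(centered)
```

Note that `preprocess_input` for VGG16 also reorders the channels internally; the manual-mean route used below is the common simplified variant that subtracts the means in RGB order.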
Code:
# Load libraries
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# Set file paths (base_dir avoids shadowing the built-in dir)
base_dir = os.getcwd()
train_dir = os.path.join(base_dir, 'train')
# Display the dog and cat images from the train folder
fig = plt.gcf()
fig.set_size_inches(10, 10)
for i in range(9):
    plt.subplot(330 + 1 + i)
    file_name = os.path.join(train_dir, 'dog', 'dog.' + str(i) + '.jpg')
    im = plt.imread(file_name)
    plt.imshow(im)
fig = plt.gcf()
fig.set_size_inches(10, 10)
for i in range(9):
    plt.subplot(330 + 1 + i)
    file_name = os.path.join(train_dir, 'cat', 'cat.' + str(i) + '.jpg')
    im = plt.imread(file_name)
    plt.imshow(im)
# Define early stopping: stop fitting once validation accuracy has not improved for 2 epochs
monitor_val_acc = tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=2)
# Define the model
def define_model():
    # Load the VGG16 base (weights default to 'imagenet')
    base_model = tf.keras.applications.vgg16.VGG16(include_top=False, input_shape=(224, 224, 3))
    # Freeze the pretrained bottom layers
    for layer in base_model.layers:
        layer.trainable = False
    # Add new classifier layers
    flat1 = tf.keras.layers.Flatten()(base_model.layers[-1].output)
    class1 = tf.keras.layers.Dense(units=512, activation='relu', kernel_initializer='he_uniform')(flat1)
    output = tf.keras.layers.Dense(units=1, activation='sigmoid')(class1)
    # Assemble the model
    model = tf.keras.models.Model(inputs=base_model.inputs, outputs=output)
    # Compile the model
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
# Define the ImageDataGenerator, train the model, keep the training history, and save the model file
def run_validate():
    # Build the model
    model = define_model()
    # Define the ImageDataGenerator with augmentation, a validation_split,
    # and featurewise_center=True so the supplied mean gets subtracted
    train_datagen = ImageDataGenerator(featurewise_center=True,
                                       rotation_range=40,
                                       width_shift_range=0.2,
                                       height_shift_range=0.2,
                                       shear_range=0.2,
                                       zoom_range=0.2,
                                       horizontal_flip=True,
                                       fill_mode='nearest',
                                       validation_split=0.2)
    # Manually supply the ImageNet per-channel mean pixel values (RGB order)
    train_datagen.mean = np.array([123.68, 116.779, 103.939], dtype='float32')
    # Define train_generator and validate_generator: classes follows the folder labels,
    # class_mode matches the task ('binary' for two classes), and subset selects the split
    train_generator = train_datagen.flow_from_directory(directory=train_dir,
                                                        target_size=(224, 224),
                                                        classes=['cat', 'dog'],
                                                        batch_size=20,
                                                        class_mode='binary',
                                                        subset='training')
    validate_generator = train_datagen.flow_from_directory(directory=train_dir,
                                                           target_size=(224, 224),
                                                           classes=['cat', 'dog'],
                                                           batch_size=20,
                                                           class_mode='binary',
                                                           subset='validation')
    # model.fit accepts generators directly (fit_generator is deprecated in TF2);
    # pass the early-stopping callback defined above
    history = model.fit(train_generator,
                        steps_per_epoch=1000,
                        epochs=5,
                        validation_data=validate_generator,
                        validation_steps=250,
                        callbacks=[monitor_val_acc],
                        verbose=2)
    model.save('final_model.h5')
    return history
# Train the model
history = run_validate()