【SSD】Training Your Own Dataset with the caffe-ssd Framework's Built-in VGG Network


Preface#

Almost all of the tutorials online are either straight translations or train directly on the VOC dataset.

My dataset comes from ILSVRC and ImageNet: the color channels are inconsistent, and so is the content format of the xml files.

I hit a pile of problems along the way and wrote quite a few script tools.

Here I record them all, one by one, for the benefit of humankind!

1. Selecting the Dataset#

I first downloaded every cup-related image from the ImageNet website,

then picked out all the cup-containing data from the ILSVRC2011, ILSVRC2012, ILSVRC2013 and ILSVRC2015 datasets by searching the xml files for the cup synset codes.

Script tool reference: 【Shell】Extracting the images and annotation files of a single class from the ILSVRC_DET dataset
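
For reference, here is a minimal Python sketch of that filtering step. The folder names are hypothetical placeholders, the cup synset codes are the ones used throughout this post, and my real version was the shell tool linked above:

#!/usr/bin/env python2
# -*- coding: utf-8 -*-
# Sketch: copy every image/annotation pair whose xml mentions a cup synset.
import os
import shutil
import xml.etree.ElementTree as ET

CUP_SYNSETS = set(["n03147509", "n03216710", "n03438257",
                   "n03797390", "n04559910", "n07930864"])

ann_dir = 'ILSVRC/Annotations'  # placeholder: source annotation folder
img_dir = 'ILSVRC/Data'         # placeholder: source image folder
out_ann = 'cup/Annotations'     # destination folders, must already exist
out_img = 'cup/Images'

for fname in os.listdir(ann_dir):
    if not fname.endswith('.xml'):
        continue
    xml_path = os.path.join(ann_dir, fname)
    root = ET.parse(xml_path).getroot()
    names = set(obj.find('name').text for obj in root.findall('object'))
    if names & CUP_SYNSETS:  # this file contains at least one cup object
        shutil.copy(xml_path, out_ann)
        jpg_name = os.path.splitext(fname)[0] + '.JPEG'
        shutil.copy(os.path.join(img_dir, jpg_name), out_img)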


2. Processing the xml Files#

I only need the cup information; every other object has to be deleted from the xml files. Otherwise lmdb generation fails with "Unknown name: xxxxxxxx", where xxxx is the synset code of some non-cup object.

I tried a lot of approaches; without rambling, here are the concrete steps:

1. Rename the Annotations folder to: Annos

2. Create a new empty folder named: Annotations

3. Edit the Python tool below, named "delete_by_name.py". You only need to change the condition after if not; the codes in quotes are the classes you want to keep.

4. Run the Python tool.

#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Created on Tue Oct 31 10:03:03 2017

@author: hans

http://blog.csdn.net/renhanchi
"""

import os
import xml.etree.ElementTree as ET

origin_ann_dir = 'Annos/'      # folder holding the original annotations
new_ann_dir = 'Annotations/'   # empty folder that receives the cleaned xml

for dirpaths, dirnames, filenames in os.walk(origin_ann_dir):
  for filename in filenames:
    origin_ann_path = os.path.join(origin_ann_dir, filename)
    new_ann_path = os.path.join(new_ann_dir, filename)
    if not os.path.isfile(origin_ann_path):
      continue
    tree = ET.parse(origin_ann_path)
    root = tree.getroot()
    # drop every <object> whose <name> is not one of the codes to keep
    for obj in root.findall('object'):
      name = str(obj.find('name').text)
      if not (name == "n03147509" or \
              name == "n03216710" or \
              name == "n03438257" or \
              name == "n03797390" or \
              name == "n04559910" or \
              name == "n07930864"):
        root.remove(obj)
    tree.write(new_ann_path)
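
As a quick sanity check (my own addition, not part of the original workflow), the following sketch scans the new Annotations folder and prints any leftover non-cup code; it should print nothing:

#!/usr/bin/env python2
# Sanity check: report any <name> left in Annotations/ that is not a cup code.
from __future__ import print_function
import os
import xml.etree.ElementTree as ET

keep = set(["n03147509", "n03216710", "n03438257",
            "n03797390", "n04559910", "n07930864"])
for f in os.listdir('Annotations/'):
    root = ET.parse(os.path.join('Annotations/', f)).getroot()
    for obj in root.findall('object'):
        if obj.find('name').text not in keep:
            print(f, obj.find('name').text)  # anything printed is a problem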


3. Generating the Train and Val txt Files#

First create a folder named doc.

The "cup_list.sh" code below is not exactly what I ended up using; adapt it to your own situation.

#!/bin/bash

classes=(images anno)
root_dir=$(cd $( dirname ${BASH_SOURCE[0]} ) && pwd )

for dataset in train val
do
        if [ $dataset == "train" ]
        then
                data_dir=(WIDER_train)
        fi
        if [ $dataset == "val" ]
        then
                data_dir=(WIDER_val)
        fi
        for cla in ${data_dir[@]}
        do
                # collect the image paths
                for class in ${classes[@]}
                do
                        find ./$cla/$class/ -name "*.jpg" >> ${class}_${dataset}.txt
                done
                # collect the annotation paths
                for class in ${classes[@]}
                do
                        find ./$cla/$class/ -name "*.xml" >> ${class}_${dataset}.txt
                done
        done
        # pair image and xml paths line by line, then shuffle
        paste -d' ' images_${dataset}.txt anno_${dataset}.txt >> temp_${dataset}.txt
        cat temp_${dataset}.txt | awk 'BEGIN{srand()}{print rand()"\t"$0}' | sort -k1,1 -n | cut -f2- > $dataset.txt
        # the val set additionally needs a name_size file for evaluation
        if [ $dataset == "val" ]
        then
                /home/hans/caffe-ssd/build/tools/get_image_size $root_dir $dataset.txt $dataset"_name_size.txt"
        fi
        rm temp_${dataset}.txt
        rm images_${dataset}.txt
        rm anno_${dataset}.txt
done
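
After it runs, every line of train.txt / val.txt pairs an image with its annotation, shuffled. An illustrative line, following the folder names in the script:

./WIDER_train/images/xxx.jpg ./WIDER_train/anno/xxx.xml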


4. Writing labelmap_cup.prototxt#

Put this file in the doc directory.

A few things to note:

1. label 0 must be background.

2. Although I only detect cups, the cup name codes in the xml files span several synsets.

At first I set every label to 1, and lmdb generation errored out later.

So I had to number them sequentially, which is no big deal; just remember that labels 1 through 6 are all cups.
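
The screenshot of my file is lost, so here is a reconstruction from the codes above (the display_name values are my own labels; the background item must come first with label 0):

item {
  name: "none_of_the_above"
  label: 0
  display_name: "background"
}
item {
  name: "n03147509"
  label: 1
  display_name: "cup"
}
item {
  name: "n03216710"
  label: 2
  display_name: "cup"
}
item {
  name: "n03438257"
  label: 3
  display_name: "cup"
}
item {
  name: "n03797390"
  label: 4
  display_name: "cup"
}
item {
  name: "n04559910"
  label: 5
  display_name: "cup"
}
item {
  name: "n07930864"
  label: 6
  display_name: "cup"
}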


5. Generating the lmdb Files#

This step first hit the Unknown name error mentioned above, which fixing the xml files solved.

Then came a Symbol error when loading the caffe module; just follow my steps and you'll be fine.

First modify the file caffe-ssd/scripts/create_annoset.py
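
The screenshot of that edit is lost. For what it's worth, the change amounted to making the script import caffe from the caffe-ssd python directory, in the same spirit as the sys.path line at the top of ssd_pascal.py below; treat this as an assumption, not a verbatim diff:

# at the top of scripts/create_annoset.py, before caffe is imported
import sys
sys.path.insert(0, "/home/hans/caffe-ssd/python")  # assumed path; point at your caffe-ssd python dir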


Then run cup_data.sh:

#!/bin/bash

cur_dir=$(cd $( dirname ${BASH_SOURCE[0]} ) && pwd )
root_dir=/home/hans/caffe-ssd

redo=1
data_root_dir="${cur_dir}"
dataset_name="doc"
mapfile="${cur_dir}/doc/labelmap_cup.prototxt"
anno_type="detection"
db="lmdb"
min_dim=0
max_dim=0
width=0
height=0

extra_cmd="--encode-type=JPEG --encoded"
if [ $redo ]
then
  extra_cmd="$extra_cmd --redo"
fi
for subset in train val
do
  python $root_dir/scripts/create_annoset.py --anno-type=$anno_type --label-map-file=$mapfile --min-dim=$min_dim \
--max-dim=$max_dim --resize-width=$width --resize-height=$height --check-label $extra_cmd $data_root_dir \
$cur_dir/$dataset_name/$subset.txt $data_root_dir/$dataset_name/$subset"_"$db ln/
done
rm -rf  ln/


6. Training#

First download the pretrained model and put it in the doc directory.

Download link:
cs.unc.edu/~wliu/projects/ParseNet/VGG_ILSVRC_16_layers_fc_reduced.caffemodel

Adapting the training script is a real grind: lots of paths, plenty of pitfalls. Luckily the answers on the GitHub issues are quite helpful.

Here is my ssd_pascal.py:

from __future__ import print_function
import sys
sys.path.append("/home/hans/caffe-ssd/python")  ##### CHANGE
import caffe
from caffe.model_libs import *
from google.protobuf import text_format

import math
import os
import shutil
import stat
import subprocess

# Add extra layers on top of a "base" network (e.g. VGGNet or Inception).
def AddExtraLayers(net, use_batchnorm=True, lr_mult=1):
    use_relu = True

    # Add additional convolutional layers.
    # 19 x 19
    from_layer = net.keys()[-1]

    # TODO(weiliu89): Construct the name using the last layer to avoid duplication.
    # 10 x 10
    out_layer = "conv6_1"
    ConvBNLayer(net, from_layer, out_layer, use_batchnorm, use_relu, 256, 1, 0, 1,
        lr_mult=lr_mult)

    from_layer = out_layer
    out_layer = "conv6_2"
    ConvBNLayer(net, from_layer, out_layer, use_batchnorm, use_relu, 512, 3, 1, 2,
        lr_mult=lr_mult)

    # 5 x 5
    from_layer = out_layer
    out_layer = "conv7_1"
    ConvBNLayer(net, from_layer, out_layer, use_batchnorm, use_relu, 128, 1, 0, 1,
      lr_mult=lr_mult)

    from_layer = out_layer
    out_layer = "conv7_2"
    ConvBNLayer(net, from_layer, out_layer, use_batchnorm, use_relu, 256, 3, 1, 2,
      lr_mult=lr_mult)

    # 3 x 3
    from_layer = out_layer
    out_layer = "conv8_1"
    ConvBNLayer(net, from_layer, out_layer, use_batchnorm, use_relu, 128, 1, 0, 1,
      lr_mult=lr_mult)

    from_layer = out_layer
    out_layer = "conv8_2"
    ConvBNLayer(net, from_layer, out_layer, use_batchnorm, use_relu, 256, 3, 0, 1,
      lr_mult=lr_mult)

    # 1 x 1
    from_layer = out_layer
    out_layer = "conv9_1"
    ConvBNLayer(net, from_layer, out_layer, use_batchnorm, use_relu, 128, 1, 0, 1,
      lr_mult=lr_mult)

    from_layer = out_layer
    out_layer = "conv9_2"
    ConvBNLayer(net, from_layer, out_layer, use_batchnorm, use_relu, 256, 3, 0, 1,
      lr_mult=lr_mult)

    return net


### Modify the following parameters accordingly ###
# The directory which contains the caffe code.
# We assume you are running the script at the CAFFE_ROOT.
caffe_root = "/home/hans/caffe-ssd"    ##### CHANGE

# Set true if you want to start training right after generating all files.
run_soon = True
# Set true if you want to load from most recently saved snapshot.
# Otherwise, we will load from the pretrain_model defined below.
resume_training = True
# If true, Remove old model files.
remove_old_models = False

# The database file for training data. Created by data/VOC0712/create_data.sh
train_data = "/home/hans/data/ImageNet/Detection/cup/doc/train_lmdb"   ######### CHANGE
# The database file for testing data. Created by data/VOC0712/create_data.sh
test_data = "/home/hans/data/ImageNet/Detection/cup/doc/val_lmdb"    ######## CHANGE
# Specify the batch sampler.
resize_width = 300
resize_height = 300
resize = "{}x{}".format(resize_width, resize_height)
batch_sampler = [
        {
                'sampler': {
                        },
                'max_trials': 1,
                'max_sample': 1,
        },
        {
                'sampler': {
                        'min_scale': 0.3,
                        'max_scale': 1.0,
                        'min_aspect_ratio': 0.5,
                        'max_aspect_ratio': 2.0,
                        },
                'sample_constraint': {
                        'min_jaccard_overlap': 0.1,
                        },
                'max_trials': 50,
                'max_sample': 1,
        },
        {
                'sampler': {
                        'min_scale': 0.3,
                        'max_scale': 1.0,
                        'min_aspect_ratio': 0.5,
                        'max_aspect_ratio': 2.0,
                        },
                'sample_constraint': {
                        'min_jaccard_overlap': 0.3,
                        },
                'max_trials': 50,
                'max_sample': 1,
        },
        {
                'sampler': {
                        'min_scale': 0.3,
                        'max_scale': 1.0,
                        'min_aspect_ratio': 0.5,
                        'max_aspect_ratio': 2.0,
                        },
                'sample_constraint': {
                        'min_jaccard_overlap': 0.5,
                        },
                'max_trials': 50,
                'max_sample': 1,
        },
        {
                'sampler': {
                        'min_scale': 0.3,
                        'max_scale': 1.0,
                        'min_aspect_ratio': 0.5,
                        'max_aspect_ratio': 2.0,
                        },
                'sample_constraint': {
                        'min_jaccard_overlap': 0.7,
                        },
                'max_trials': 50,
                'max_sample': 1,
        },
        {
                'sampler': {
                        'min_scale': 0.3,
                        'max_scale': 1.0,
                        'min_aspect_ratio': 0.5,
                        'max_aspect_ratio': 2.0,
                        },
                'sample_constraint': {
                        'min_jaccard_overlap': 0.9,
                        },
                'max_trials': 50,
                'max_sample': 1,
        },
        {
                'sampler': {
                        'min_scale': 0.3,
                        'max_scale': 1.0,
                        'min_aspect_ratio': 0.5,
                        'max_aspect_ratio': 2.0,
                        },
                'sample_constraint': {
                        'max_jaccard_overlap': 1.0,
                        },
                'max_trials': 50,
                'max_sample': 1,
        },
        ]
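# (comment added for clarity) The first sampler keeps the whole image; the
# next five crop patches whose jaccard overlap with some ground-truth box is
# at least 0.1/0.3/0.5/0.7/0.9; the last caps overlap at 1.0. This is the
# standard SSD data-augmentation recipe.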
train_transform_param = {
        'mirror': True,
        'mean_value': [104, 117, 123],
        'force_color': True,  #### CHANGE: force 3-channel input; see the note after the script
        'resize_param': {
                'prob': 1,
                'resize_mode': P.Resize.WARP,
                'height': resize_height,
                'width': resize_width,
                'interp_mode': [
                        P.Resize.LINEAR,
                        P.Resize.AREA,
                        P.Resize.NEAREST,
                        P.Resize.CUBIC,
                        P.Resize.LANCZOS4,
                        ],
                },
        'distort_param': {
                'brightness_prob': 0.5,
                'brightness_delta': 32,
                'contrast_prob': 0.5,
                'contrast_lower': 0.5,
                'contrast_upper': 1.5,
                'hue_prob': 0.5,
                'hue_delta': 18,
                'saturation_prob': 0.5,
                'saturation_lower': 0.5,
                'saturation_upper': 1.5,
                'random_order_prob': 0.0,
                },
        'expand_param': {
                'prob': 0.5,
                'max_expand_ratio': 4.0,
                },
        'emit_constraint': {
            'emit_type': caffe_pb2.EmitConstraint.CENTER,
            }
        }
test_transform_param = {
        'mean_value': [104, 117, 123],
        'force_color': True,    #### CHANGE: force 3-channel input; see the note after the script
        'resize_param': {
                'prob': 1,
                'resize_mode': P.Resize.WARP,
                'height': resize_height,
                'width': resize_width,
                'interp_mode': [P.Resize.LINEAR],
                },
        }

# If true, use batch norm for all newly added layers.
# Currently only the non batch norm version has been tested.
use_batchnorm = False
lr_mult = 1
# Use different initial learning rate.
if use_batchnorm:
    base_lr = 0.0004
else:
    # A learning rate for batch_size = 1, num_gpus = 1.
    base_lr = 0.00004

root = "/home/hans/data/ImageNet/Detection/cup"    #### CHANGE
# Modify the job name if you want.
job_name = "SSD_{}".format(resize)   #### CHANGE
# The name of the model. Modify it if you want.
model_name = "VGG_CUP_{}".format(job_name)    #### CHANGE

# Directory which stores the model .prototxt file.
save_dir = "{}/doc/{}".format(root, job_name)    #### CHANGE
# Directory which stores the snapshot of models.
snapshot_dir = "{}/models/{}".format(root, job_name)    #### CHANGE
# Directory which stores the job script and log file.
job_dir = "{}/jobs/{}".format(root, job_name)    #### CHANGE
# Directory which stores the detection results.
output_result_dir = "{}/results/{}".format(root, job_name)    #### CHANGE

# model definition files.
train_net_file = "{}/train.prototxt".format(save_dir)
test_net_file = "{}/test.prototxt".format(save_dir)
deploy_net_file = "{}/deploy.prototxt".format(save_dir)
solver_file = "{}/solver.prototxt".format(save_dir)
# snapshot prefix.
snapshot_prefix = "{}/{}".format(snapshot_dir, model_name)
# job script path.
job_file = "{}/{}.sh".format(job_dir, model_name)

# Stores the test image names and sizes. Created by data/VOC0712/create_list.sh
name_size_file = "{}/doc/val_name_size.txt".format(root)    #### CHANGE
# The pretrained model. We use the Fully convolutional reduced (atrous) VGGNet.
pretrain_model = "{}/doc/VGG_ILSVRC_16_layers_fc_reduced.caffemodel".format(root)    #### CHANGE
# Stores LabelMapItem.
label_map_file = "{}/doc/labelmap_cup.prototxt".format(root)    #### CHANGE

# MultiBoxLoss parameters.
num_classes = 7    #### CHANGE: your number of classes + 1 (largest label in the labelmap + 1)
share_location = True
background_label_id=0
train_on_diff_gt = True
normalization_mode = P.Loss.VALID
code_type = P.PriorBox.CENTER_SIZE
ignore_cross_boundary_bbox = False
mining_type = P.MultiBoxLoss.MAX_NEGATIVE
neg_pos_ratio = 3.
loc_weight = (neg_pos_ratio + 1.) / 4.
multibox_loss_param = {
    'loc_loss_type': P.MultiBoxLoss.SMOOTH_L1,
    'conf_loss_type': P.MultiBoxLoss.SOFTMAX,
    'loc_weight': loc_weight,
    'num_classes': num_classes,
    'share_location': share_location,
    'match_type': P.MultiBoxLoss.PER_PREDICTION,
    'overlap_threshold': 0.5,
    'use_prior_for_matching': True,
    'background_label_id': background_label_id,
    'use_difficult_gt': train_on_diff_gt,
    'mining_type': mining_type,
    'neg_pos_ratio': neg_pos_ratio,
    'neg_overlap': 0.5,
    'code_type': code_type,
    'ignore_cross_boundary_bbox': ignore_cross_boundary_bbox,
    }
loss_param = {
    'normalization': normalization_mode,
    }

# parameters for generating priors.
# minimum dimension of input image
min_dim = 300
# conv4_3 ==> 38 x 38
# fc7 ==> 19 x 19
# conv6_2 ==> 10 x 10
# conv7_2 ==> 5 x 5
# conv8_2 ==> 3 x 3
# conv9_2 ==> 1 x 1
mbox_source_layers = ['conv4_3', 'fc7', 'conv6_2', 'conv7_2', 'conv8_2', 'conv9_2']
# in percent %
min_ratio = 20
max_ratio = 90
step = int(math.floor((max_ratio - min_ratio) / (len(mbox_source_layers) - 2)))
min_sizes = []
max_sizes = []
for ratio in xrange(min_ratio, max_ratio + 1, step):
  min_sizes.append(min_dim * ratio / 100.)
  max_sizes.append(min_dim * (ratio + step) / 100.)
min_sizes = [min_dim * 10 / 100.] + min_sizes
max_sizes = [min_dim * 20 / 100.] + max_sizes
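# (comment added for clarity) With the settings above: step = floor((90-20)/4)
# = 17, so the loop ratios are 20, 37, 54, 71, 88 (%). After prepending the
# 10%/20% entry for conv4_3:
#   min_sizes = [30, 60, 111, 162, 213, 264]
#   max_sizes = [60, 111, 162, 213, 264, 315]
# These match the stock SSD300 priors.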
steps = [8, 16, 32, 64, 100, 300]
aspect_ratios = [[2], [2, 3], [2, 3], [2, 3], [2], [2]]
# L2 normalize conv4_3.
normalizations = [20, -1, -1, -1, -1, -1]
# variance used to encode/decode prior bboxes.
if code_type == P.PriorBox.CENTER_SIZE:
  prior_variance = [0.1, 0.1, 0.2, 0.2]
else:
  prior_variance = [0.1]
flip = True
clip = False

# Solver parameters.
# Defining which GPUs to use.
gpus = "7"    #### CHANGE
gpulist = gpus.split(",")
num_gpus = len(gpulist)

# Divide the mini-batch to different GPUs.
batch_size = 32
accum_batch_size = 32
iter_size = accum_batch_size / batch_size
solver_mode = P.Solver.CPU
device_id = 0
batch_size_per_device = batch_size
if num_gpus > 0:
  batch_size_per_device = int(math.ceil(float(batch_size) / num_gpus))
  iter_size = int(math.ceil(float(accum_batch_size) / (batch_size_per_device * num_gpus)))
  solver_mode = P.Solver.GPU
  device_id = int(gpulist[0])

if normalization_mode == P.Loss.NONE:
  base_lr /= batch_size_per_device
elif normalization_mode == P.Loss.VALID:
  base_lr *= 25. / loc_weight
elif normalization_mode == P.Loss.FULL:
  # Roughly there are 2000 prior bboxes per image.
  # TODO(weiliu89): Estimate the exact # of priors.
  base_lr *= 2000.
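
# (comment added for clarity) With the defaults here, neg_pos_ratio = 3 gives
# loc_weight = 1.0, and VALID normalization scales base_lr to
# 0.00004 * 25 = 0.001.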

# Evaluate on whole test set.
num_test_image = 2000    #### CHANGE: number of images in your val set
test_batch_size = 8
# Ideally test_batch_size should be divisible by num_test_image,
# otherwise mAP will be slightly off the true value.
test_iter = int(math.ceil(float(num_test_image) / test_batch_size))
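# (comment added for clarity) e.g. ceil(2000 / 8) = 250 test iterations.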

solver_param = {
    # Train parameters
    'base_lr': base_lr,
    'weight_decay': 0.0005,
    'lr_policy': "multistep",
    'stepvalue': [80000, 100000, 120000],
    'gamma': 0.1,
    'momentum': 0.9,
    'iter_size': iter_size,
    'max_iter': 120000,
    'snapshot': 80000,
    'display': 10,
    'average_loss': 10,
    'type': "SGD",
    'solver_mode': solver_mode,
    'device_id': device_id,
    'debug_info': False,
    'snapshot_after_train': True,
    # Test parameters
    'test_iter': [test_iter],
    'test_interval': 100,
    'eval_type': "detection",
    'ap_version': "11point",
    'test_initialization': True,
    }

# parameters for generating detection output.
det_out_param = {
    'num_classes': num_classes,
    'share_location': share_location,
    'background_label_id': background_label_id,
    'nms_param': {'nms_threshold': 0.45, 'top_k': 400},
    'save_output_param': {
        'output_directory': output_result_dir,
        'output_name_prefix': "comp4_det_test_",
        'output_format': "VOC",
        'label_map_file': label_map_file,
        'name_size_file': name_size_file,
        'num_test_image': num_test_image,
        },
    'keep_top_k': 200,
    'confidence_threshold': 0.01,
    'code_type': code_type,
    }

# parameters for evaluating detection results.
det_eval_param = {
    'num_classes': num_classes,
    'background_label_id': background_label_id,
    'overlap_threshold': 0.5,
    'evaluate_difficult_gt': False,
    'name_size_file': name_size_file,
    }

### Hopefully you don't need to change the following ###
# Check file.
check_if_exist(train_data)
check_if_exist(test_data)
check_if_exist(label_map_file)
check_if_exist(pretrain_model)
make_if_not_exist(save_dir)
make_if_not_exist(job_dir)
make_if_not_exist(snapshot_dir)

# Create train net.
net = caffe.NetSpec()
net.data, net.label = CreateAnnotatedDataLayer(train_data, batch_size=batch_size_per_device,
        train=True, output_label=True, label_map_file=label_map_file,
        transform_param=train_transform_param, batch_sampler=batch_sampler)

VGGNetBody(net, from_layer='data', fully_conv=True, reduced=True, dilated=True,
    dropout=False)

AddExtraLayers(net, use_batchnorm, lr_mult=lr_mult)

mbox_layers = CreateMultiBoxHead(net, data_layer='data', from_layers=mbox_source_layers,
        use_batchnorm=use_batchnorm, min_sizes=min_sizes, max_sizes=max_sizes,
        aspect_ratios=aspect_ratios, steps=steps, normalizations=normalizations,
        num_classes=num_classes, share_location=share_location, flip=flip, clip=clip,
        prior_variance=prior_variance, kernel_size=3, pad=1, lr_mult=lr_mult)

# Create the MultiBoxLossLayer.
name = "mbox_loss"
mbox_layers.append(net.label)
net[name] = L.MultiBoxLoss(*mbox_layers, multibox_loss_param=multibox_loss_param,
        loss_param=loss_param, include=dict(phase=caffe_pb2.Phase.Value('TRAIN')),
        propagate_down=[True, True, False, False])

with open(train_net_file, 'w') as f:
    print('name: "{}_train"'.format(model_name), file=f)
    print(net.to_proto(), file=f)
shutil.copy(train_net_file, job_dir)

# Create test net.
net = caffe.NetSpec()
net.data, net.label = CreateAnnotatedDataLayer(test_data, batch_size=test_batch_size,
        train=False, output_label=True, label_map_file=label_map_file,
        transform_param=test_transform_param)

VGGNetBody(net, from_layer='data', fully_conv=True, reduced=True, dilated=True,
    dropout=False)

AddExtraLayers(net, use_batchnorm, lr_mult=lr_mult)

mbox_layers = CreateMultiBoxHead(net, data_layer='data', from_layers=mbox_source_layers,
        use_batchnorm=use_batchnorm, min_sizes=min_sizes, max_sizes=max_sizes,
        aspect_ratios=aspect_ratios, steps=steps, normalizations=normalizations,
        num_classes=num_classes, share_location=share_location, flip=flip, clip=clip,
        prior_variance=prior_variance, kernel_size=3, pad=1, lr_mult=lr_mult)

conf_name = "mbox_conf"
if multibox_loss_param["conf_loss_type"] == P.MultiBoxLoss.SOFTMAX:
  reshape_name = "{}_reshape".format(conf_name)
  net[reshape_name] = L.Reshape(net[conf_name], shape=dict(dim=[0, -1, num_classes]))
  softmax_name = "{}_softmax".format(conf_name)
  net[softmax_name] = L.Softmax(net[reshape_name], axis=2)
  flatten_name = "{}_flatten".format(conf_name)
  net[flatten_name] = L.Flatten(net[softmax_name], axis=1)
  mbox_layers[1] = net[flatten_name]
elif multibox_loss_param["conf_loss_type"] == P.MultiBoxLoss.LOGISTIC:
  sigmoid_name = "{}_sigmoid".format(conf_name)
  net[sigmoid_name] = L.Sigmoid(net[conf_name])
  mbox_layers[1] = net[sigmoid_name]

net.detection_out = L.DetectionOutput(*mbox_layers,
    detection_output_param=det_out_param,
    include=dict(phase=caffe_pb2.Phase.Value('TEST')))
net.detection_eval = L.DetectionEvaluate(net.detection_out, net.label,
    detection_evaluate_param=det_eval_param,
    include=dict(phase=caffe_pb2.Phase.Value('TEST')))

with open(test_net_file, 'w') as f:
    print('name: "{}_test"'.format(model_name), file=f)
    print(net.to_proto(), file=f)
shutil.copy(test_net_file, job_dir)

# Create deploy net.
# Remove the first and last layer from test net.
deploy_net = net
with open(deploy_net_file, 'w') as f:
    net_param = deploy_net.to_proto()
    # Remove the first (AnnotatedData) and last (DetectionEvaluate) layer from test net.
    del net_param.layer[0]
    del net_param.layer[-1]
    net_param.name = '{}_deploy'.format(model_name)
    net_param.input.extend(['data'])
    net_param.input_shape.extend([
        caffe_pb2.BlobShape(dim=[1, 3, resize_height, resize_width])])
    print(net_param, file=f)
shutil.copy(deploy_net_file, job_dir)

# Create solver.
solver = caffe_pb2.SolverParameter(
        train_net=train_net_file,
        test_net=[test_net_file],
        snapshot_prefix=snapshot_prefix,
        **solver_param)

with open(solver_file, 'w') as f:
    print(solver, file=f)
shutil.copy(solver_file, job_dir)

max_iter = 0
# Find most recent snapshot.
for file in os.listdir(snapshot_dir):
  if file.endswith(".solverstate"):
    basename = os.path.splitext(file)[0]
    iter = int(basename.split("{}_iter_".format(model_name))[1])
    if iter > max_iter:
      max_iter = iter

train_src_param = '--weights="{}" \\\n'.format(pretrain_model)
if resume_training:
  if max_iter > 0:
    train_src_param = '--snapshot="{}_iter_{}.solverstate" \\\n'.format(snapshot_prefix, max_iter)

if remove_old_models:
  # Remove any snapshots smaller than max_iter.
  for file in os.listdir(snapshot_dir):
    if file.endswith(".solverstate"):
      basename = os.path.splitext(file)[0]
      iter = int(basename.split("{}_iter_".format(model_name))[1])
      if max_iter > iter:
        os.remove("{}/{}".format(snapshot_dir, file))
    if file.endswith(".caffemodel"):
      basename = os.path.splitext(file)[0]
      iter = int(basename.split("{}_iter_".format(model_name))[1])
      if max_iter > iter:
        os.remove("{}/{}".format(snapshot_dir, file))

# Create job file.
with open(job_file, 'w') as f:
  f.write('cd {}\n'.format(caffe_root))
  f.write('./build/tools/caffe train \\\n')
  f.write('--solver="{}" \\\n'.format(solver_file))
  f.write(train_src_param)
  if solver_param['solver_mode'] == P.Solver.GPU:
    f.write('--gpu {} 2>&1 | tee {}/{}.log\n'.format(gpus, job_dir, model_name))
  else:
    f.write('2>&1 | tee {}/{}.log\n'.format(job_dir, model_name))

# Copy the python script to job_dir.
py_file = os.path.abspath(__file__)
shutil.copy(py_file, job_dir)

# Run the job.
os.chmod(job_file, stat.S_IRWXU)
if run_soon:
  subprocess.call(job_file, shell=True)

The path block from root down to label_map_file is the part you'll have to work through yourself. If you follow my folder layout, you only need to change root.

force_color: True in train_transform_param is there because training crashed with "OpenCV Error: Assertion failed ((scn == 3 || scn == 4) && (depth == CV_8U ||............".

The same force_color: True in test_transform_param is there because validation failed with "Check failed: std::equal(top_shape.begin()+1, top_shape.begin()+4, shape.begin()+1)".

num_classes is your number of classes + 1. Note that this is the largest label index in labelmap_cup.prototxt + 1.

num_test_image is the number of images in your test set.

Everything else to modify is marked CHANGE in the code above.

The remaining hyperparameters you can tune by reading the code yourself; it's not hard.

Finally, run the script (python ssd_pascal.py) to start training.

7. Visualizing the Training Output (2017.11.02)#

I adapted the tool I previously used with vanilla caffe.

One change: I added a multiplier variable, time, because the raw output sometimes fluctuates too much; averaging over a fixed multiple makes the curves smoother.

The first argument (-p) is the path to the log file, e.g. python cup_plot.py -p VGG_CUP_SSD_300x300.log (cup_plot.py being whatever you name the script).

The display and test_interval values in the code must match those in solver.prototxt.

time is the multiplier; set it to 1 to see the raw curves.

Code:

#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Created on Thu Nov  2 14:35:42 2017

@author: hans

http://blog.csdn.net/renhanchi
"""

import matplotlib.pyplot as plt
import numpy as np
import commands
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    '-p','--log_path',
    type = str,
    default = '',
    help = """\
    path to log file\
    """
)

FLAGS = parser.parse_args()

train_log_file = FLAGS.log_path


display = 10         # must match 'display' in solver.prototxt
test_interval = 100  # must match 'test_interval' in solver.prototxt

time = 5             # smoothing multiplier; 1 = raw curves

train_output = commands.getoutput("cat " + train_log_file + " | grep 'Train net output #0' | awk '{print $11}'")  #train mbox_loss
accu_output = commands.getoutput("cat " + train_log_file + " | grep 'Test net output #0' | awk '{print $11}'") #test detection_eval

train_loss = train_output.split("\n")
test_accu = accu_output.split("\n")
  
def reduce_data(data):
  # average every `time` consecutive values to smooth the curve
  iteration = len(data)/time*time
  _data = data[0:iteration]
  if time > 1:
    data_ = []
    for i in np.arange(len(data)/time):
      sum_data = 0
      for j in np.arange(time):
        index = i*time + j
        sum_data += float(_data[index])
      data_.append(sum_data/float(time))
  else:
    data_ = [float(d) for d in _data]  # still convert the grep strings to floats
  return data_

train_loss_ = reduce_data(train_loss)
test_accu_ = reduce_data(test_accu)

_,ax1 = plt.subplots()
ax2 = ax1.twinx()

ax1.plot(time*display*np.arange(len(train_loss_)), train_loss_)
ax2.plot(time*test_interval*np.arange(len(test_accu_)), test_accu_, 'r')

ax1.set_xlabel('Iteration')
ax1.set_ylabel('Train Loss')
ax2.set_ylabel('Test Accuracy')
plt.show()

8. Testing the Model (2017.11.03)#

Once the model is trained, it's time to see how it actually performs.

The original author provides a Python tool for this, but I don't find it convenient. Have a look yourself if you like; it's called "ssd_pascal_webcam.py".

Below are the steps for my manual detection setup:

First prepare three files: deploy.prototxt, labelmap_cup.prototxt, xxxxx.caffemodel

Modify the first and last layers of deploy.prototxt:

name: "VGG_VOC0712_SSD_300x300_test"
layer {
  name: "data"
  type: "VideoData"
  top: "data"
  transform_param {
    mean_value: 104.0
    mean_value: 117.0
    mean_value: 123.0
    resize_param {
      prob: 1.0
      resize_mode: WARP
      height: 300
      width: 300
      interp_mode: LINEAR
    }
  }
  data_param {
    batch_size: 1
  }
  video_data_param {
    video_type: WEBCAM
    device_id: 0 #### camera index
    skip_frames: 0 #### whether to skip frames
  }
}
layer {
  name: "conv1_1"
  type: "Convolution"
  bottom: "data"
  top: "conv1_1"
...
...
...
...
...
...
layer {
  name: "mbox_conf_flatten"
  type: "Flatten"
  bottom: "mbox_conf_softmax"
  top: "mbox_conf_flatten"
  flatten_param {
    axis: 1
  }
}
layer {
  name: "detection_out"
  type: "DetectionOutput"
  bottom: "mbox_loc"
  bottom: "mbox_conf_flatten"
  bottom: "mbox_priorbox"
  bottom: "data"
  top: "detection_out"
  include {
    phase: TEST
  }
  transform_param {
    mean_value: 104.0
    mean_value: 117.0
    mean_value: 123.0
    resize_param {
      prob: 1.0
      resize_mode: WARP
      height: 480  #### camera height/width; set them larger for a bigger display
      width: 640
      interp_mode: LINEAR
    }
  }
  detection_output_param {
    num_classes: 7  #### number of classes + 1
    share_location: true
    background_label_id: 0
    nms_param {
      nms_threshold: 0.449999988079
      top_k: 400
    }
    save_output_param {
      label_map_file: "labelmap_cup.prototxt"  ##### CHANGE
    }
    code_type: CENTER_SIZE
    keep_top_k: 200
    confidence_threshold: 0.899999976158
    visualize: true
    visualize_threshold: 0.600000023842  ### only show detections whose confidence exceeds this
  }
}
layer {
  name: "slience"
  type: "Silence"
  bottom: "detection_out"
  include {
    phase: TEST
  }
}

Below is the test script:

/home/hans/caffe-ssd/build/tools/caffe test \
--model="deploy.prototxt" \
--weights="xxxxx.caffemodel" \
--iterations="536870911" \
--gpu 0

The huge --iterations value (2^29 - 1) simply keeps the test running until you stop it.
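
If you'd rather test single images than the webcam, a minimal Python sketch along these lines should work with the unmodified deploy.prototxt that ssd_pascal.py generates (the image path, model filename and 0.6 threshold are my placeholders):

#!/usr/bin/env python2
# Minimal single-image SSD inference sketch. Assumes the generated
# deploy.prototxt with a plain "data" input, NOT the VideoData version above.
from __future__ import print_function
import sys
sys.path.append("/home/hans/caffe-ssd/python")
import caffe
import numpy as np

caffe.set_mode_gpu()
net = caffe.Net("deploy.prototxt", "xxxxx.caffemodel", caffe.TEST)

transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))                # HWC -> CHW
transformer.set_mean('data', np.array([104., 117., 123.]))  # BGR mean, as in training
transformer.set_raw_scale('data', 255)                      # caffe.io loads images in [0,1]
transformer.set_channel_swap('data', (2, 1, 0))             # RGB -> BGR

image = caffe.io.load_image("test.jpg")                     # placeholder test image
net.blobs['data'].data[...] = transformer.preprocess('data', image)
detections = net.forward()['detection_out']

# each row: [image_id, label, confidence, xmin, ymin, xmax, ymax], coords in [0,1]
for det in detections[0, 0]:
    if det[2] >= 0.6:
        print(det[1], det[2], det[3:7])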


Standard cups are detected quite reliably; occasionally a cylindrical object gets flagged as a cup.

This model isn't final yet; detection_eval on my own validation set is around 0.72.

Postscript#

I'll keep updating this post: output analysis, visualization, swapping in other networks, and so on.

This time I used VGGNet; later I'll also use MobileNet.

One open question is mean computation; I haven't yet tested whether caffe's bundled creat_mean.sh does the job.

---- [2017.11.20: mean values solved] --------------------------------------

The bundled make_mean.sh cannot compute the mean here. It turns out there are two lmdb conversion tools, one with annotations and one without, and SSD uses the annotated one.

For details see the end of: 【SSD】Training Your Own Dataset with the MobileNet Network in the caffe-ssd Framework

---- [2017.11.2 update] ----- multi-GPU ----------------------------------------

This framework can apparently run on multiple GPUs out of the box; I hadn't verified it.

nccl was already installed on my server, but make told me everything was already built.

I went ahead and threw 3 GPUs at it anyway. It works! But it logged: centos kernel: BUG: soft lockup - CPU#3 stuck for 23s! [kworker/3:0:14900]

Scared me silly! Another of my GPUs was busy crunching data at the time.

Later I tried two GPUs: iteration 0 was fine, but the loss went NaN at the first iteration, and several parameter tweaks didn't help.

Back to a single GPU it is~~

