Multi-class and multi-label image classification with Caffe


I am trying to create a single multi-class and multi-label network configuration in Caffe.

Take dog classification, say: is the dog small or large? (a class) What color is it? (a class) Does it have a collar? (a label)

Is this possible in Caffe? What is the proper way to do it?

I am just trying to understand the practical approach. After creating two .txt files (one for training, one for validation) that list all the labels for each image, for example:

/train/img/1.png 0 4 18
/train/img/2.png 1 7 17 33
/train/img/3.png 0 4 17

I run this Python script:

import h5py, os
import caffe
import numpy as np

SIZE = 227 # fixed size to all images
with open( 'train.txt', 'r' ) as T :
    lines = T.readlines()
# If you do not have enough memory split data into
# multiple batches and generate multiple separate h5 files
X = np.zeros( (len(lines), 3, SIZE, SIZE), dtype='f4' ) 
y = np.zeros( (len(lines),1), dtype='f4' )
for i,l in enumerate(lines):
    sp = l.split(' ')
    img = caffe.io.load_image( sp[0] )
    img = caffe.io.resize( img, (SIZE, SIZE, 3) ) # resize to fixed size
    # you may apply other input transformations here...
    # Note that the transformation should take img from size-by-size-by-3 and transpose it to 3-by-size-by-size
    # for example
    transposed_img = img.transpose((2,0,1))[::-1,:,:] # RGB->BGR
    X[i] = transposed_img
    y[i] = float(sp[1])
with h5py.File('train.h5','w') as H:
    H.create_dataset( 'X', data=X ) # note the name X given to the dataset!
    H.create_dataset( 'y', data=y ) # note the name y given to the dataset!
with open('train_h5_list.txt','w') as L:
    L.write( 'train.h5' ) # list all h5 files you are going to use

This creates train.h5 and val.h5 (so the X dataset holds the images, and y holds the labels?).

Then I replace my network's input layers:

layers { 
 name: "data" 
 type: DATA 
 top:  "data" 
 top:  "label" 
 data_param { 
   source: "/home/gal/digits/digits/jobs/20181010-191058-21ab/train_db" 
   backend: LMDB 
   batch_size: 64 
 } 
 transform_param { 
    crop_size: 227 
    mean_file: "/home/gal/digits/digits/jobs/20181010-191058-21ab/mean.binaryproto" 
    mirror: true 
  } 
  include: { phase: TRAIN } 
} 
layers { 
 name: "data" 
 type: DATA 
 top:  "data" 
 top:  "label" 
 data_param { 
   source: "/home/gal/digits/digits/jobs/20181010-191058-21ab/val_db"  
   backend: LMDB 
   batch_size: 64
 } 
 transform_param { 
    crop_size: 227 
    mean_file: "/home/gal/digits/digits/jobs/20181010-191058-21ab/mean.binaryproto" 
    mirror: true 
  } 
  include: { phase: TEST } 
} 

layer {
  type: "HDF5Data"
  top: "X" # same name as given in create_dataset!
  top: "y"
  hdf5_data_param {
    source: "train_h5_list.txt" # do not give the h5 files directly, but the list.
    batch_size: 32
  }
  include { phase:TRAIN }
}

layer {
  type: "HDF5Data"
  top: "X" # same name as given in create_dataset!
  top: "y"
  hdf5_data_param {
    source: "val_h5_list.txt" # do not give the h5 files directly, but the list.
    batch_size: 32
  }
  include { phase:TEST }
}

I guess HDF5 doesn't need the mean.binaryproto?

Next, how should the output layers be changed so that the net outputs multiple label probabilities? I guess I need a cross-entropy layer rather than softmax? This is the current output layer:

layers {
  bottom: "prob"
  bottom: "label"
  top: "loss"
  name: "loss"
  type: SOFTMAX_LOSS
  loss_weight: 1
}
layers {
  name: "accuracy"
  type: ACCURACY
  bottom: "prob"
  bottom: "label"
  top: "accuracy"
  include: { phase: TEST }
}

1 Answer

Mean subtraction

While the lmdb input data layer is able to handle various input transformations for you, the "HDF5Data" layer supports no transformations at all. Therefore, you must take care of all input transformations (most importantly, mean subtraction) yourself when you create your hdf5 files. See the spot in your code that says:

# you may apply other input transformations here...
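
For example, a per-channel mean subtraction could be inserted at that point of the loop. This is only a sketch: the BGR mean values below are placeholders, and you should compute the real per-channel means from your own training set. Note that caffe.io.load_image returns values in [0, 1], so the image is scaled to [0, 255] before subtracting a 0-255-range mean:

mean_bgr = np.array([104.0, 117.0, 123.0], dtype='f4').reshape(3, 1, 1) # placeholder means -- compute your own
transposed_img = img.transpose((2,0,1))[::-1,:,:] * 255 # RGB->BGR, scaled to [0, 255]
transposed_img -= mean_bgr # mean subtraction done here, once, at file-creation time
X[i] = transposed_img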

Multiple labels

While your .txt file lists several labels per image, you only save the first one to the hdf5 file. If you want to use all of these labels, you have to feed them to the net. An issue that immediately arises from your example is that you do not have a fixed number of labels per training image -- why? What does that mean? Assuming you have three labels per image (in the .txt files):

<filename> <dog size> <dog color> <has collar>

Then you can have y_size, y_color, and y_collar (instead of a single y) in your hdf5:

y_size[i] = float(sp[1])
y_color[i] = float(sp[2])
y_collar[i] = float(sp[3])
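
Concretely, a minimal sketch of the whole modified creation script (assuming exactly three labels per line, as in the example above; mean subtraction omitted here, see the previous section; repeat for val):

import h5py, os
import caffe
import numpy as np

SIZE = 227
with open( 'train.txt', 'r' ) as T :
    lines = T.readlines()
X = np.zeros( (len(lines), 3, SIZE, SIZE), dtype='f4' )
y_size = np.zeros( (len(lines),1), dtype='f4' )
y_color = np.zeros( (len(lines),1), dtype='f4' )
y_collar = np.zeros( (len(lines),1), dtype='f4' )
for i,l in enumerate(lines):
    sp = l.strip().split() # strip() so the last label parses cleanly
    img = caffe.io.load_image( sp[0] )
    img = caffe.io.resize( img, (SIZE, SIZE, 3) )
    X[i] = img.transpose((2,0,1))[::-1,:,:] # RGB->BGR; add mean subtraction here
    y_size[i] = float(sp[1])
    y_color[i] = float(sp[2])
    y_collar[i] = float(sp[3])
with h5py.File('train.h5','w') as H:
    H.create_dataset( 'X', data=X )
    H.create_dataset( 'y_size', data=y_size ) # one dataset per label;
    H.create_dataset( 'y_color', data=y_color ) # these names must match the
    H.create_dataset( 'y_collar', data=y_collar ) # "top"s of the HDF5Data layer
with open('train_h5_list.txt','w') as L:
    L.write( 'train.h5' )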

Your input data layer will then have correspondingly more "top"s:

layer {
  type: "HDF5Data"
  top: "X" # same name as given in create_dataset!
  top: "y_size"
  top: "y_color"
  top: "y_collar"
  hdf5_data_param {
    source: "train_h5_list.txt" # do not give the h5 files directly, but the list.
    batch_size: 32
  }
  include { phase:TRAIN }
}

Prediction

Currently, your net only predicts a single label (the layer with top: "prob"). You need the net to predict all three labels, so you need to add layers computing top: "prob_size", top: "prob_color", and top: "prob_collar" (a separate layer for each "prob_*"). Once you have a prediction for each label, you need a loss (again, one loss per label).
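
For example, the tail of the TRAIN net might look like the sketch below. This is only an illustration: the shared feature layer "fc7", the layer names, and the num_output values are assumptions to adapt to your own architecture. At deploy time you would replace each loss with a "Softmax" layer producing the corresponding "prob_*" top:

layer {
  name: "fc_size"
  type: "InnerProduct"
  bottom: "fc7" # hypothetical shared feature layer -- adapt to your net
  top: "fc_size"
  inner_product_param { num_output: 2 } # e.g. small / large
}
layer {
  name: "fc_color"
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc_color"
  inner_product_param { num_output: 10 } # e.g. 10 colors (placeholder)
}
layer {
  name: "fc_collar"
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc_collar"
  inner_product_param { num_output: 2 } # collar / no collar
}
# one loss per label, each fed by its own classifier head
layer {
  name: "loss_size"
  type: "SoftmaxWithLoss"
  bottom: "fc_size"
  bottom: "y_size"
  top: "loss_size"
}
layer {
  name: "loss_color"
  type: "SoftmaxWithLoss"
  bottom: "fc_color"
  bottom: "y_color"
  top: "loss_color"
}
layer {
  name: "loss_collar"
  type: "SoftmaxWithLoss"
  bottom: "fc_collar"
  bottom: "y_collar"
  top: "loss_collar"
}

Similarly, the accuracy layer from your current net can be duplicated once per label for the TEST phase.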
