How can I use transfer learning for a Keras regression problem?


I am trying to build a CNN using transfer learning and fine-tuning. The task is to build a CNN in Keras that takes a dataset of images (house photos) and a CSV file (photo names and prices) and trains on these inputs. But I have run into a problem I cannot solve.

Here is my code:

import pandas as pd
from google.colab import drive
from sklearn.model_selection import train_test_split
from keras import applications
from keras import optimizers
from keras import backend
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Model, load_model
from keras.layers import GlobalAveragePooling2D, Dense, Flatten
from matplotlib import pyplot

drive.mount('/content/gdrive')
!unzip -n '/content/gdrive/My Drive/HOUSEPRICES.zip' >> /dev/null

data_path = 'HOUSEPRICES/'
imgs_path = data_path + "images/"
labels_path = data_path + "prices.csv"

labels = pd.read_csv(labels_path, dtype={"prices": "float64"})

seed = 0
train_data, test_data = train_test_split(labels, test_size=0.25, random_state=seed) 
dev_data, test_data = train_test_split(test_data, test_size=0.5, random_state=seed)  

train_data = train_data.reset_index(drop=True)
dev_data = dev_data.reset_index(drop=True)
test_data = test_data.reset_index(drop=True)

datagen = ImageDataGenerator(rescale=1./255)

img_width = 320
img_height = 240  
x_col = 'image_name'          
y_col = 'prices'


batch_size = 64              
train_dataset = datagen.flow_from_dataframe(dataframe=train_data, directory=imgs_path, x_col=x_col, y_col=y_col, has_ext=True,
                                            class_mode="input", target_size=(img_width,img_height), batch_size=batch_size)
dev_dataset = datagen.flow_from_dataframe(dataframe=dev_data, directory=imgs_path, x_col=x_col, y_col=y_col, has_ext=True,
                                          class_mode="input",target_size=(img_width,img_height), batch_size=batch_size)
test_dataset = datagen.flow_from_dataframe(dataframe=test_data, directory=imgs_path, x_col=x_col, y_col=y_col, has_ext=True,
                                           class_mode="input", target_size=(img_width,img_height), batch_size=batch_size)


base_model = applications.InceptionV3(weights='imagenet', include_top=False, input_shape=(img_width,img_height,3))


for layer in base_model.layers:
    layer.trainable = False   

x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Flatten()(x)
x = Dense(512, activation='relu')(x)

predictions = Dense(1, activation='linear')(x) 

model = Model(inputs=[base_model.input], outputs=[predictions])
model.summary()   

model.compile(loss='mse',     
              optimizer=optimizers.adam(lr=1e-5),  
              metrics=['mse'])


model.fit_generator(train_dataset,
                    epochs=20,  
                    verbose=2,  
                    steps_per_epoch=len(train_data)/batch_size,
                    validation_data=dev_dataset,
                    validation_steps=len(dev_data)/batch_size)

test_loss, test_mse = model.evaluate_generator(test_dataset, steps=len(test_data)/batch_size, verbose=1)

I get this error:

ValueError: Input 0 is incompatible with layer flatten_9: expected min_ndim=3, found ndim=2

What is wrong with my code? Am I building the dataset (images plus numeric labels) incorrectly, or is there a problem with the model architecture? How can I fix the code?

python tensorflow keras transfer-learning
3 Answers
0 votes

Flatten() converts a higher-dimensional tensor into a 2D one. If you already have a 2D tensor, you don't need Flatten().
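
A quick way to see this with the backbone from the question (a sketch assuming standalone Keras 2.x, where Flatten declares min_ndim=3; weights=None is used only to skip the ImageNet download):

from keras import applications
from keras import backend as K
from keras.layers import GlobalAveragePooling2D, Flatten

base = applications.InceptionV3(weights=None, include_top=False, input_shape=(320, 240, 3))

features = base.output
print(K.int_shape(features))   # (None, 8, 6, 2048) for a 320x240 input: a 4D feature map
pooled = GlobalAveragePooling2D()(features)
print(K.int_shape(pooled))     # (None, 2048): already 2D

# Applying Flatten to the pooled tensor reproduces the reported error:
# Flatten()(pooled)            # ValueError: ... expected min_ndim=3, found ndim=2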


0 votes
import pandas as pd
from google.colab import drive
from sklearn.model_selection import train_test_split
from keras import applications
from keras import optimizers
from keras import backend
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Model, load_model
from keras.layers import GlobalAveragePooling2D, Dense, Dropout, Flatten
from matplotlib import pyplot

drive.mount('/content/gdrive')
!unzip -n '/content/gdrive/My Drive/HOUSEPRICES.zip' >> /dev/null

data_path = 'HOUSEPRICES/'
imgs_path = data_path + "images/"
labels_path = data_path + "prices.csv"

labels = pd.read_csv(labels_path, dtype={"prices": "float64"})

seed = 0
train_data, test_data = train_test_split(labels, test_size=0.25, random_state=seed) 
dev_data, test_data = train_test_split(test_data, test_size=0.5, random_state=seed)  

train_data = train_data.reset_index(drop=True)
dev_data = dev_data.reset_index(drop=True)
test_data = test_data.reset_index(drop=True)

datagen = ImageDataGenerator(rescale=1./255)

img_width = 320
img_height = 240  
x_col = 'image_name'          
y_col = 'prices'


batch_size = 64              
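# class_mode="other" passes the float values from y_col through as regression targets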
train_dataset = datagen.flow_from_dataframe(dataframe=train_data, directory=imgs_path, x_col=x_col, y_col=y_col, has_ext=True,
                                            class_mode="other", target_size=(img_width,img_height), batch_size=batch_size)
dev_dataset = datagen.flow_from_dataframe(dataframe=dev_data, directory=imgs_path, x_col=x_col, y_col=y_col, has_ext=True,
                                          class_mode="other",target_size=(img_width,img_height), batch_size=batch_size)
test_dataset = datagen.flow_from_dataframe(dataframe=test_data, directory=imgs_path, x_col=x_col, y_col=y_col, has_ext=True,
                                           class_mode="other", target_size=(img_width,img_height), batch_size=batch_size)


base_model = applications.InceptionV3(weights='imagenet', include_top=False, input_shape=(img_width,img_height,3))


for layer in base_model.layers:
    layer.trainable = False   

x = base_model.output
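# GlobalAveragePooling2D already yields a 2D tensor (batch_size, channels), so no Flatten is needed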
x = GlobalAveragePooling2D()(x)    
x = Dense(256, activation='relu')(x)
x = Dropout(0.4)(x)
x = Dense(256, activation='relu')(x)

predictions = Dense(1, activation='linear')(x) 

model = Model(inputs=[base_model.input], outputs=[predictions])
model.summary()   

model.compile(loss='mse',     
              optimizer=optimizers.adam(lr=1e-5),  
              metrics=['mse'])


model.fit_generator(train_dataset,
                    epochs=20,  
                    verbose=2,  
                    steps_per_epoch=len(train_data)/batch_size,
                    validation_data=dev_dataset,
                    validation_steps=len(dev_data)/batch_size)

test_loss, test_mse = model.evaluate_generator(test_dataset, steps=len(test_data)/batch_size, verbose=1)

0 votes

GlobalAveragePooling2D pools over the spatial dimensions, so its output shape is already (batch_size, channels). It can therefore be fed straight into a Dense layer without a Flatten. To fix the code, remove the following line:

x = Flatten()(x) 

For more examples of how to fine-tune a network, see this link:

https://keras.io/applications/
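
For completeness, the fine-tuning step described on that page looks roughly like this when adapted to this regression setup (a sketch continuing from the question's code after the Flatten line has been removed and the frozen-base model has been trained for a few epochs; the layer-249 cut-off and the SGD settings are the values from the keras.io InceptionV3 example, not tuned for this dataset):

# Unfreeze the top inception blocks while keeping the lower layers frozen.
for layer in model.layers[:249]:
    layer.trainable = False
for layer in model.layers[249:]:
    layer.trainable = True

# Recompile so the new trainable flags take effect, then continue training
# with a small learning rate to avoid destroying the pretrained weights.
from keras import optimizers
model.compile(loss='mse',
              optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
              metrics=['mse'])

model.fit_generator(train_dataset,
                    epochs=10,
                    steps_per_epoch=len(train_data)//batch_size,
                    validation_data=dev_dataset,
                    validation_steps=len(dev_data)//batch_size)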

class_mode="input" is meant for autoencoders; that is why you get an error about the target not having the same shape as the input.

class_mode='other' works because y_col is defined.

https://keras.io/preprocessing/image/#flow_from_dataframe
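
To check what the generator now yields, you can pull a single batch from it (a sketch for the Keras/keras-preprocessing version used here; in later releases class_mode="other" was renamed to "raw"):

# With class_mode="other", each batch is an (images, targets) pair:
#   images  -> float array of shape (batch_size, 320, 240, 3), rescaled to [0, 1]
#   targets -> the float prices read from the y_col column
images, prices = next(train_dataset)
print(images.shape, prices.shape)   # e.g. (64, 320, 240, 3) (64,)

# With class_mode="input" the generator yields (images, images) instead,
# i.e. the images themselves become the targets, which only makes sense for autoencoders.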
