CNN multi-class classifier: zero accuracy and huge loss

Recently I have become interested in learning about CNNs, especially for image classification. I previously tried a two-class classifier on bicycle and car images; I ran into some errors along the way, but thanks to help from others here I finally got that exercise working.

This time, I want to do image classification with 25 classes. I have food pictures in 25 categories (e.g., apple pie, baklava, caesar salad, etc.) and want to build an accurate model from them. After running the code, I found that over the 25 epochs the validation loss stayed around 8 and the validation accuracy was always 0.0000e+00. I then added 2 more conv layers, so I had 4 Conv2D layers in total, but I got the same result.

Here is my code:

import cv2
import numpy as np
import os
import pickle

CATEGORIES = ["apple_pie", "baby_back_ribs", "baklava", "caesar_salad", "chocolate_cake","donuts",
              "french_fries", "fried_calamari", "grilled_salmon", "hamburger", "hot_dog",
              "ice_cream", "lasagna", "macaroni_and_cheese", "nachos", "omelette", "onion_rings", "pancakes", "pizza",
              "risotto", "spaghetti", "steak", "tacos", "tiramisu", "waffles"]
DATALOC = "D:/Foods/Datasets"
IMAGE_SIZE = 50

data_training = []

def create_data_training():
    # Build a list of [image, label] pairs, walking the categories in order.
    for category in CATEGORIES:
        path = os.path.join(DATALOC, category)
        class_num = CATEGORIES.index(category)
        for image in os.listdir(path):
            try:
                image_array = cv2.imread(os.path.join(path, image), cv2.IMREAD_GRAYSCALE)
                new_image_array = cv2.resize(image_array, (IMAGE_SIZE, IMAGE_SIZE))
                data_training.append([new_image_array, class_num])
            except Exception:
                # Skip files that cannot be read or resized.
                pass

create_data_training()

X = []
y = []

for features, label in data_training:
    X.append(features)
    y.append(label)

X = np.array(X).reshape(-1, IMAGE_SIZE, IMAGE_SIZE, 1)  # add the single grayscale channel axis
y = np.array(y)

pickle_out = open("X.pickle", "wb")
pickle.dump(X, pickle_out)
pickle_out.close()

pickle_out = open("y.pickle", "wb")
pickle.dump(y, pickle_out)
pickle_out.close()

pickle_in = open("X.pickle","rb")
X = pickle.load(pickle_in)
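
One detail worth flagging in the script above: data_training is filled one category at a time, so the labels come out grouped by class. Keras takes the validation_split fraction from the end of the arrays without shuffling, which with grouped labels means the held-out samples can all belong to the last class or two. A minimal sketch of an in-place shuffle, done before X and y are built:

import random

# Shuffle the [image, label] pairs in place BEFORE splitting into X and y,
# so a tail-end validation split contains a mix of all 25 classes.
random.shuffle(data_training)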

And here is my model code:

import pickle
import tensorflow as tf
import time
from tensorflow.keras.models import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Activation, Conv2D, Dense, Dropout, Flatten, MaxPooling2D

NAME = "Foods-Model-{}".format(int(time.time()))
tensorboard = TensorBoard(log_dir='logs\{}'.format(NAME))

X = pickle.load(open("X.pickle", "rb"))
y = pickle.load(open("y.pickle", "rb"))

X = X / 255.0  # scale pixel values to [0, 1]

model = Sequential()

# Four conv -> ReLU -> max-pool blocks.
model.add(Conv2D(64, (3, 3), input_shape=X.shape[1:]))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())

model.add(Activation("relu"))
model.add(Dense(64))

model.add(Dense(25))
model.add(Activation("softmax"))

model.compile(loss="sparse_categorical_crossentropy", optimizer="adam", metrics=["accuracy"])

model.fit(X, y, batch_size=16, epochs=25, validation_split=0.1, callbacks=[tensorboard])
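
A side note on the dense head above: Activation("relu") is applied directly to the flattened feature map, and Dense(64) then has no activation of its own. If a ReLU hidden layer was the intent, the conventional ordering puts the Dense layer first. A minimal sketch of that head, with everything before Flatten unchanged:

model.add(Flatten())
model.add(Dense(64, activation="relu"))     # hidden layer with its own ReLU
model.add(Dense(25, activation="softmax"))  # one probability per food class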

After 5 epochs I decided to interrupt the run, because there was no sign of improvement. Here are my results:

Train on 13950 samples, validate on 1550 samples
Epoch 1/25
13950/13950 [==============================] - 705s 51ms/sample - loss: 3.2912 - accuracy: 0.0576 - val_loss: 8.9127 - val_accuracy: 0.0052
Epoch 2/25
13950/13950 [==============================] - 865s 62ms/sample - loss: 3.1145 - accuracy: 0.1075 - val_loss: 10.5061 - val_accuracy: 0.0058
Epoch 3/25
13950/13950 [==============================] - 635s 45ms/sample - loss: 2.9650 - accuracy: 0.1452 - val_loss: 9.3719 - val_accuracy: 6.4516e-04
Epoch 4/25
13950/13950 [==============================] - 624s 45ms/sample - loss: 2.8481 - accuracy: 0.1782 - val_loss: 11.6880 - val_accuracy: 0.0013
Epoch 5/25
 1424/13950 [==>...........................] - ETA: 9:20 - loss: 2.7142 - accuracy: 0.2053
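
The pattern in this log (training accuracy climbing while validation accuracy stays near zero) can be checked against the data order directly. A small diagnostic sketch, assuming the X.pickle and y.pickle files from the question, that prints which classes land in the last 10% that validation_split=0.1 would hold out:

import pickle
import numpy as np

y = pickle.load(open("y.pickle", "rb"))
val_slice = y[int(len(y) * 0.9):]  # Keras takes the validation split from the tail, unshuffled
print(np.unique(val_slice, return_counts=True))  # with grouped labels, only the last class(es) appear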

Thank you for your time.

python tensorflow keras neural-network multiclass-classification
1 Answer

-1 votes

In pre-processing, you need to use the img_to_array method:

from keras.preprocessing.image import img_to_array

new_image_array = cv2.resize(image_array, (IMAGE_SIZE, IMAGE_SIZE))
new_image_array = img_to_array(new_image_array)  # returns a float32 array with an explicit channel axis
data_training.append([new_image_array, class_num])
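
For context, this snippet replaces the resize/append lines inside the question's create_data_training loop. A sketch of the modified loop body, using the tensorflow.keras import path to match the question's other imports:

from tensorflow.keras.preprocessing.image import img_to_array

for image in os.listdir(path):
    try:
        image_array = cv2.imread(os.path.join(path, image), cv2.IMREAD_GRAYSCALE)
        new_image_array = cv2.resize(image_array, (IMAGE_SIZE, IMAGE_SIZE))
        new_image_array = img_to_array(new_image_array)  # shape (IMAGE_SIZE, IMAGE_SIZE, 1), dtype float32
        data_training.append([new_image_array, class_num])
    except Exception:
        pass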