How to solve the "No Algorithm Worked" Keras error?

Problem description · votes: 0 · answers: 9

I am trying to develop an FCN-16 model in Keras. I initialized the weights with the weights of a similar FCN-16 model.

from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, Conv2DTranspose, Add, Activation


def FCN8(nClasses, input_height=256, input_width=256):

    ## input_height and width must be divisible by 32 because max pooling with filter size (2, 2) is applied 5 times,
    ## which makes the input_height and width 2^5 = 32 times smaller
    assert input_height % 32 == 0
    assert input_width % 32 == 0
    IMAGE_ORDERING = "channels_last"

    img_input = Input(shape=(input_height, input_width, 3))  ## e.g. (256, 256, 3)

    ## Block 1
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='conv1_1', data_format=IMAGE_ORDERING)(
        img_input)
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='conv1_2', data_format=IMAGE_ORDERING)(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool', data_format=IMAGE_ORDERING)(x)
    f1 = x

    # Block 2
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='conv2_1', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='conv2_2', data_format=IMAGE_ORDERING)(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool', data_format=IMAGE_ORDERING)(x)
    f2 = x

    # Block 3
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='conv3_1', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='conv3_2', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='conv3_3', data_format=IMAGE_ORDERING)(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool', data_format=IMAGE_ORDERING)(x)
    pool3 = x

    # Block 4
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='conv4_1', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='conv4_2', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='conv4_3', data_format=IMAGE_ORDERING)(x)
    pool4 = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool', data_format=IMAGE_ORDERING)(
        x)  ## (None, 16, 16, 512) for a 256x256 input

    # Block 5
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='conv5_1', data_format=IMAGE_ORDERING)(pool4)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='conv5_2', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='conv5_3', data_format=IMAGE_ORDERING)(x)
    pool5 = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool', data_format=IMAGE_ORDERING)(
        x) 

    n = 4096
    o = (Conv2D(n, (7, 7), activation='relu', padding='same', name="fc6", data_format=IMAGE_ORDERING))(pool5)
    conv7 = (Conv2D(n, (1, 1), activation='relu', padding='same', name="fc7", data_format=IMAGE_ORDERING))(o)

    conv7 = (Conv2D(nClasses, (1, 1), activation='relu', padding='same', name="conv7_1", data_format=IMAGE_ORDERING))(conv7)

    conv7_4 = Conv2DTranspose(nClasses, kernel_size=(2, 2), strides=(2, 2),  data_format=IMAGE_ORDERING)(
        conv7)

    pool411 = (
        Conv2D(nClasses, (1, 1), activation='relu', padding='same', name="pool4_11",use_bias=False, data_format=IMAGE_ORDERING))(pool4)

    o = Add(name="add")([pool411, conv7_4])

    o = Conv2DTranspose(nClasses, kernel_size=(16, 16), strides=(16, 16), use_bias=False, data_format=IMAGE_ORDERING)(o)
    o = (Activation('softmax'))(o)

    GDI = Model(img_input, o)
    GDI.load_weights(Model_Weights_path)  # Model_Weights_path: path to the pretrained FCN weights, assumed to be defined elsewhere

    model = Model(img_input, o)

    return model

Then I did a train/test split and tried to train the model as follows:

from keras import optimizers

sgd = optimizers.SGD(lr=1E-2, momentum=0.91,decay=5**(-4), nesterov=True)

model.compile(optimizer='sgd',loss='categorical_crossentropy',metrics=['accuracy'],)

hist1 = model.fit(X_train,y_train,validation_data=(X_test,y_test),batch_size=32,epochs=1000,verbose=2)

model.save("/content/drive/My Drive/HCI_prep/new.h5")

But this code throws an error in the first epoch:

NotFoundError: 2 root error(s) found.
  (0) Not found: No algorithm worked!
         [[{{node pool4_11_3/Conv2D}}]]
         [[loss_4/mul/_629]]
  (1) Not found: No algorithm worked!
         [[{{node pool4_11_3/Conv2D}}]]
0 successful operations. 0 derived errors ignored.


tensorflow keras deep-learning computer-vision transfer-learning
9 Answers

12 votes

Add the following to your code:

from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession

# Let TensorFlow allocate GPU memory gradually instead of grabbing it all at once,
# which often resolves cuDNN failing to find a working convolution algorithm.
config = ConfigProto()
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)

Then restart the Python kernel.


9 votes

Had the same problem.

padding='same' for MaxPooling did not work for me.

Changing the color_mode parameter in my training and test generators from 'rgb' to 'grayscale' made it work for me.
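For illustration, here is a minimal sketch of what that change looks like when the data is loaded with Keras' ImageDataGenerator; the directory path, image size and batch size are placeholders, not values from the original answer:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1.0 / 255)

# color_mode='grayscale' yields 1-channel batches, so the model input must expect 1 channel as well.
train_gen = datagen.flow_from_directory(
    "data/train",             # placeholder path
    target_size=(224, 224),   # placeholder size
    color_mode="grayscale",   # was 'rgb' before the change
    class_mode="categorical",
    batch_size=32,
)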


8 votes

This worked for me:

import tensorflow as tf

# TF2-style equivalent of allow_growth: let GPU memory grow on demand.
physical_devices = tf.config.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)
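If more than one GPU is visible, the same flag can be set on every device; a small variation of the snippet above:

import tensorflow as tf

# Enable on-demand memory growth on every visible GPU instead of only the first one.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)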

6 votes

My problem was that I called the model with an input_shape of (?, 28, 28, 1) and later called it with (?, 28, 28, 3).
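To make that kind of mismatch concrete, here is a minimal, hypothetical sketch; the layer sizes are arbitrary and not taken from the original answer:

import numpy as np
import tensorflow as tf

# Model built for single-channel 28x28 input.
inp = tf.keras.Input(shape=(28, 28, 1))
x = tf.keras.layers.Conv2D(8, (3, 3), activation="relu")(inp)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
out = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inp, out)

x_rgb = np.random.rand(4, 28, 28, 3).astype("float32")   # 3-channel batch: does not match the model
x_gray = x_rgb.mean(axis=-1, keepdims=True)              # reduce to 1 channel so the shapes agree
preds = model(x_gray)                                     # works; model(x_rgb) would fail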


5 votes

In my case, the problem was solved by killing all processes that were still allocating memory on one of the GPUs. Apparently one of them had not finished (properly). I did not have to change any code.


1 vote

For reference, the complete code that fixes the error is given below:

import tensorflow.keras
from tensorflow.keras.models import *
from tensorflow.keras.layers import *

IMAGE_ORDERING = 'channels_last'

# take vgg-16 pretrained model from "https://github.com/fchollet/deep-learning-models" here
pretrained_url = "https://github.com/fchollet/deep-learning-models/" \
                 "releases/download/v0.1/" \
                 "vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5"

pretrained = 'imagenet'  # 'imagenet' if weights need to be initialized!

"""
Function Name: get_vgg_encoder()
Functionalities: This function defines the VGG encoder part of the FCN network
                 and initialize this encoder part with VGG pretrained weights.
Parameter:input_height=224,  input_width=224, pretrained=pretrained
Returns: final layer of every blocks as f1,f2,f3,f4,f5
"""


def get_vgg_encoder(input_height=224, input_width=224, pretrained=pretrained):
    pad = 1

    # height and width must be divisible by 32 for FCN
    assert input_height % 32 == 0
    assert input_width % 32 == 0

    img_input = Input(shape=(input_height, input_width, 3))

    # Unlike the base paper, stride=1 is not set explicitly here because
    # Keras uses stride=1 by default

    x = (ZeroPadding2D((pad, pad), data_format=IMAGE_ORDERING))(img_input)
    x = Conv2D(64, (3, 3), activation='relu', padding='valid', name='block1_conv1', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2', data_format=IMAGE_ORDERING)(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool', data_format=IMAGE_ORDERING)(x)
    f1 = x
    # Block 2
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2', data_format=IMAGE_ORDERING)(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool', data_format=IMAGE_ORDERING)(x)
    f2 = x

    # Block 3
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3', data_format=IMAGE_ORDERING)(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool', data_format=IMAGE_ORDERING)(x)
    x = Dropout(0.5)(x)
    f3 = x

    # Block 4
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3', data_format=IMAGE_ORDERING)(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool', data_format=IMAGE_ORDERING)(x)
    f4 = x

    # Block 5
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3', data_format=IMAGE_ORDERING)(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool', data_format=IMAGE_ORDERING)(x)
    # x= Dropout(0.5)(x)

    f5 = x

    # If pretrained weights are requested, load the ImageNet VGG weights into the encoder
    if pretrained == 'imagenet':
        VGG_Weights_path = tensorflow.keras.utils.get_file(
            pretrained_url.split("/")[-1], pretrained_url)

        Model(img_input, x).load_weights(VGG_Weights_path)

    return img_input, [f1, f2, f3, f4, f5]


"""
Function Name: fcn_16()
Functionalities: This function defines the Fully Convolutional part of the FCN network
                 and adds skip connections to build FCN-16 network
Parameter:n_classes, encoder=get_vgg_encoder, input_height=224,input_width=224
Returns: model
"""


def fcn_16(n_classes, encoder=get_vgg_encoder, input_height=224, input_width=224):
    # Take levels from the base model, i.e. vgg
    img_input, levels = encoder(input_height=input_height, input_width=input_width)
    [f1, f2, f3, f4, f5] = levels

    o = f5

    # fcn6
    o = (Conv2D(4096, (7, 7), activation='relu', padding='same', data_format=IMAGE_ORDERING))(o)
    o = Dropout(0.5)(o)

    # fc7
    o = (Conv2D(4096, (1, 1), activation='relu', padding='same', data_format=IMAGE_ORDERING))(o)
    o = Dropout(0.3)(o)

    conv7 = (Conv2D(1, (1, 1), activation='relu', padding='same', name="score_sal", data_format=IMAGE_ORDERING))(o)

    conv7_4 = Conv2DTranspose(1, kernel_size=(4, 4), strides=(2, 2), padding='same', name="upscore_sal2",
                              use_bias=False, data_format=IMAGE_ORDERING)(conv7)

    pool411 = (
        Conv2D(1, (1, 1), activation='relu', padding='same', name="score_pool4", data_format=IMAGE_ORDERING))(f4)

    # Crop both tensors to a common spatial size before adding them (crop() helper; see the sketch after this code)
    o, o2 = crop(pool411, conv7_4, img_input)

    # add skip connection
    o = Add()([o, o2])

    # 16 x upsample
    o = Conv2DTranspose(n_classes, kernel_size=(32, 32), strides=(16, 16), use_bias=False, data_format=IMAGE_ORDERING)(
        o)

    # crop layer
    ## Caffe has a crop layer that takes o and img_input as arguments, computes their size difference and crops.
    ## Keras instead takes the cropping as a tuple, so I checked the size difference and set the value manually:
    ## output dim was 240, input dim was 224. 240 - 224 = 16, so 16 / 2 = 8 on each side.

    score = Cropping2D(cropping=((8, 8), (8, 8)), data_format=IMAGE_ORDERING)(o)

    o = (Activation('sigmoid'))(score)
    model = Model(img_input, o)

    model.model_name = "fcn_16"

    return model
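Note that crop() is not defined in the snippet above. It is a small helper that crops the larger of two feature maps so that both can be summed. The following is a sketch of such a helper, modeled on the version commonly used in Keras FCN implementations; the exact helper this answer relied on may differ:

def crop(o1, o2, i):
    # Crop o1 and o2 to the same spatial size so they can be summed.
    # Throwaway models are built only to query the static output shapes.
    output_height1, output_width1 = Model(i, o1).output_shape[1:3]
    output_height2, output_width2 = Model(i, o2).output_shape[1:3]

    cx = abs(output_width1 - output_width2)
    cy = abs(output_height1 - output_height2)

    # Crop the wider tensor on the right.
    if output_width1 > output_width2:
        o1 = Cropping2D(cropping=((0, 0), (0, cx)), data_format=IMAGE_ORDERING)(o1)
    else:
        o2 = Cropping2D(cropping=((0, 0), (0, cx)), data_format=IMAGE_ORDERING)(o2)

    # Crop the taller tensor at the bottom.
    if output_height1 > output_height2:
        o1 = Cropping2D(cropping=((0, cy), (0, 0)), data_format=IMAGE_ORDERING)(o1)
    else:
        o2 = Cropping2D(cropping=((0, cy), (0, 0)), data_format=IMAGE_ORDERING)(o2)

    return o1, o2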




0 votes

This error is quite generic and basically indicates that "something" went wrong. As the variety of answers shows, it can be caused by an incompatibility between your implementation and the underlying keras/tensorflow version, by incorrect filter sizes, or, or, or...

There is no single solution for this. For me, it was also an input-shape problem: I was using rgb instead of grayscale, even though the network expected 1 channel.
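One way to adapt an RGB batch to a network that expects 1 channel (not necessarily what this answer did) is to convert it up front; a small sketch with a placeholder array:

import numpy as np
import tensorflow as tf

x_rgb = np.random.rand(8, 224, 224, 3).astype("float32")  # placeholder RGB batch
x_gray = tf.image.rgb_to_grayscale(x_rgb)                  # shape becomes (8, 224, 224, 1)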


0 votes

As @Soren said, this error depends on all kinds of circumstances. It can happen because of insufficient VRAM, a v1/v2 incompatibility, shape problems when calling conv or pool layers, and so on.

In my case, the error appeared because I was calling a saved model for inference in Python 3.6 (with TF 2.4.2), while the model had been trained in Python 3.10 (with TF 2.8), and some of the flexible shape manipulations done through the Keras/TF functional API are not backward compatible with older versions of TF.

So I would also recommend checking your training and inference environments to make sure there are no errors or mismatches there.
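A quick sanity check along those lines is to print the interpreter and TensorFlow versions in both the training and the inference environment and compare them:

import sys
import tensorflow as tf

print("Python:", sys.version.split()[0])
print("TensorFlow:", tf.__version__)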


0 votes

Not the case here, but I got the same error and none of the above solutions worked. In fact it was because in my last convolution the number of filters was 3 instead of 1 (for grayscale), and that raised the (strangely) same error.

If you run into this error, take a quick look at your last convolution.
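For example, if the targets are single-channel (grayscale) masks, the final convolution should emit 1 filter rather than 3; a minimal, hypothetical sketch:

import tensorflow as tf

inp = tf.keras.Input(shape=(128, 128, 3))
x = tf.keras.layers.Conv2D(16, (3, 3), activation="relu", padding="same")(inp)

# Final layer for single-channel (grayscale) targets: 1 filter, not 3.
out = tf.keras.layers.Conv2D(1, (1, 1), activation="sigmoid", padding="same")(x)
model = tf.keras.Model(inp, out)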
