A Keras model containing an EfficientNet submodel cannot load weights; VGG works fine.

Question (1 vote, 1 answer)

I have an ubermodel that uses a submodel as its feature-extraction layer. My code is modular, so I can easily switch which submodel performs feature extraction simply by changing which submodel I assign:

...

elif FEATURE_EXTRACTOR == "VGG16":

    Features = keras.applications.VGG16(

        weights = "imagenet",
        pooling = FEATURE_POOLING,
        include_top = False

    )
elif FEATURE_EXTRACTOR == "EfficientNetB0":

    Features = keras_applications_master.keras_applications.efficientnet.EfficientNetB0(
    # ^ Local copy of official keras repo: https://github.com/keras-team/keras-applications
    # because pip install --upgrade keras doesn't install version with efficientnet.

        weights = "imagenet",
        include_top = False,
        pooling = FEATURE_POOLING,
        classes = None

    )

...
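As an aside, when the standalone keras-applications builders are called directly (rather than through keras.applications), they generally expect the Keras submodules to be passed in as keyword arguments. Below is a minimal sketch of that wiring, not my exact code, and it assumes the repo is importable as keras_applications:

import keras
from keras_applications import efficientnet

# The standalone builders resolve backend/layers/models/utils from kwargs,
# so they are passed in explicitly here instead of going through keras.applications.
Features = efficientnet.EfficientNetB0(
    weights="imagenet",
    include_top=False,
    pooling="avg",               # stand-in for FEATURE_POOLING
    backend=keras.backend,
    layers=keras.layers,
    models=keras.models,
    utils=keras.utils,
)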

My routines for saving and loading the ubermodel and its weights also know which submodel is being used for feature extraction:

model.load_weights(submodel_specific_path)

With any of the submodels I can do the initial training run and save the ubermodel to disk. If I then try to continue training or fine-tune an ubermodel that contains a VGG16 submodel, loading the weights with load_weights works fine. But when I call load_weights on an ubermodel with an EfficientNet submodel (or, for that matter, keras.applications.xception.Xception), I get the following error:

Traceback (most recent call last):
  File "image_model.py", line 284, in <module>
    model.load_weights(model_path)
  File "C:\Users\Username\Anaconda3\envs\tensorflow1\lib\site-packages\keras\engine\saving.py", line 492, in load_wrapper
    return load_function(*args, **kwargs)
  File "C:\Users\Username\Anaconda3\envs\tensorflow1\lib\site-packages\keras\engine\network.py", line 1227, in load_weights
    reshape=reshape)
  File "C:\Users\Username\Anaconda3\envs\tensorflow1\lib\site-packages\keras\engine\saving.py", line 1294, in load_weights_from_hdf5_group_by_name
    reshape=reshape)
  File "C:\Users\Username\Anaconda3\envs\tensorflow1\lib\site-packages\keras\engine\saving.py", line 861, in preprocess_weights_for_loading
    weights = convert_nested_model(weights)
  File "C:\Users\Username\Anaconda3\envs\tensorflow1\lib\site-packages\keras\engine\saving.py", line 836, in convert_nested_model
    original_backend=original_backend))
  File "C:\Users\Username\Anaconda3\envs\tensorflow1\lib\site-packages\keras\engine\saving.py", line 980, in preprocess_weights_for_loading
    weights[0] = np.transpose(weights[0], (3, 2, 0, 1))
  File "<__array_function__ internals>", line 6, in transpose
  File "C:\Users\Username\Anaconda3\envs\tensorflow1\lib\site-packages\numpy\core\fromnumeric.py", line 651, in transpose
    return _wrapfunc(a, 'transpose', axes)
  File "C:\Users\Username\Anaconda3\envs\tensorflow1\lib\site-packages\numpy\core\fromnumeric.py", line 61, in _wrapfunc
    return bound(*args, **kwds)
ValueError: axes don't match array

What am I doing wrong?
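For reference, here is a minimal, self-contained version of the flow described above; the head layers, input shape, and file path are illustrative placeholders rather than my actual code:

import keras
from keras.layers import Input, Dense, GlobalAveragePooling2D
from keras.models import Model

# Feature-extractor submodel (swapping in EfficientNetB0 or Xception is what triggers the error).
features = keras.applications.VGG16(weights="imagenet", include_top=False, pooling="avg")

# "Ubermodel": the extractor is called as a layer, so it ends up nested inside the outer model.
inputs = Input(shape=(224, 224, 3))
x = features(inputs)
outputs = Dense(10, activation="softmax")(x)
model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Save after the initial training run, then reload to continue training / fine-tune.
weights_path = "ubermodel_VGG16.h5"   # illustrative path
model.save_weights(weights_path)
model.load_weights(weights_path)      # fine with VGG16; fails as above with EfficientNet/Xception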

Tags: python, tensorflow, keras, valueerror, efficientnet
1 Answer

I was not able to reproduce your problem, but I did find a GitHub issue which says this error shows up when a multi-GPU model is used. So the trick should be:

parallel_model = multi_gpu_model(model, gpus=G)
model.save(...)            # save the template model

instead of:

parallel_model = multi_gpu_model(model, gpus=G)
parallel_model.save(...)   # saving the multi-GPU wrapper is what breaks loading
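To spell the recommended pattern out end to end, here is a small sketch assuming a toy model and at least two visible GPUs; parallel_model, gpus=2, and the file path are illustrative names:

from keras.models import Sequential, load_model
from keras.layers import Dense
from keras.utils import multi_gpu_model

# Template model; this is the one whose weights get persisted.
model = Sequential([Dense(10, activation="softmax", input_shape=(100,))])

# Multi-GPU replica used only for training (requires at least 2 visible GPUs).
parallel_model = multi_gpu_model(model, gpus=2)
parallel_model.compile(optimizer="adam", loss="categorical_crossentropy")
# parallel_model.fit(...) would go here.

# Save and reload the template model, not the multi-GPU wrapper.
model.save("template_model.h5")
restored = load_model("template_model.h5")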

That said, I was able to successfully build an EfficientNetB0 model, evaluate it, save it, and load it back.

Code to build, evaluate, and save the model using model.save:

%tensorflow_version 1.x
import keras
from keras.models import Model
from keras.layers import GlobalAveragePooling2D, Dense
from keras_efficientnets import EfficientNetB0

model = EfficientNetB0(input_shape=(224, 224, 3), classes=1000, include_top=False, weights='imagenet')

x = model.output
x = GlobalAveragePooling2D()(x)
x = Dense(20, activation='relu')(x)
x = Dense(17, activation='softmax')(x)
model = Model(inputs = model.input, outputs = x)

# summarize model
#model.summary()

# Get data
import tflearn.datasets.oxflower17 as oxflower17
x, y = oxflower17.load_data(one_hot=True)

# Compile
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

model.fit(x, y, batch_size=64, epochs= 1, verbose=1, validation_split=0.2, shuffle=True)

# evaluate the model
scores = model.evaluate(x, y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))

# save model and architecture to single file
model.save("model.h5py")
print("Saved model to disk")

Output:

TensorFlow 1.x selected.
Using TensorFlow backend.
WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tflearn/helpers/summarizer.py:9: The name tf.summary.merge is deprecated. Please use tf.compat.v1.summary.merge instead.

WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tflearn/helpers/trainer.py:25: The name tf.summary.FileWriter is deprecated. Please use tf.compat.v1.summary.FileWriter instead.

WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tflearn/collections.py:13: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead.

WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tflearn/config.py:123: The name tf.get_collection is deprecated. Please use tf.compat.v1.get_collection instead.

WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tflearn/config.py:129: The name tf.add_to_collection is deprecated. Please use tf.compat.v1.add_to_collection instead.

WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tflearn/config.py:131: The name tf.assign is deprecated. Please use tf.compat.v1.assign instead.

Downloading Oxford 17 category Flower Dataset, Please wait...
100.0% 60276736 / 60270631
('Succesfully downloaded', '17flowers.tgz', 60270631, 'bytes.')
File Extracted
Starting to parse images...
Parsing Done!
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:422: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:431: The name tf.is_variable_initialized is deprecated. Please use tf.compat.v1.is_variable_initialized instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:438: The name tf.variables_initializer is deprecated. Please use tf.compat.v1.variables_initializer instead.

Train on 1088 samples, validate on 272 samples
Epoch 1/1
1088/1088 [==============================] - 203s 187ms/step - loss: 1.6433 - accuracy: 0.5561 - val_loss: 1.9315 - val_accuracy: 0.5074
accuracy: 54.85%

Code to load the saved model with load_model and then evaluate it:

%tensorflow_version 1.x
# load and evaluate a saved model
from numpy import loadtxt
from keras.models import load_model

# load model
model = load_model('model.h5py')

# summarize model
#model.summary()

# Get data
import tflearn.datasets.oxflower17 as oxflower17
x, y = oxflower17.load_data(one_hot=True)

# evaluate the model
score = model.evaluate(x, y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], score[1]*100))

Output: the model's accuracy before saving and after loading should match, and it does here.

accuracy: 54.85%
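Beyond comparing the reported accuracy, a quick way to confirm that the round trip preserved the weights is to compare raw predictions; a small sketch reusing model and x from the snippets above:

import numpy as np
from keras.models import load_model

preds_before = model.predict(x[:8])          # model and x come from the snippets above
reloaded = load_model("model.h5py")
preds_after = reloaded.predict(x[:8])

# Identical weights should give numerically identical predictions.
np.testing.assert_allclose(preds_before, preds_after, rtol=1e-5, atol=1e-6)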

Code to save the model with model.save_weights and load it with model.load_weights:

from keras.models import model_from_json

# serialize model to JSON
model_json = model.to_json()
with open("model.json", "w") as json_file:
    json_file.write(model_json)
# serialize weights to HDF5
model.save_weights("model.h5")
print("Saved model to disk")

# later...

# load json and create model
json_file = open('model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load weights into new model
loaded_model.load_weights("model.h5")
print("Loaded model from disk")

# evaluate loaded model on test data
loaded_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
score = loaded_model.evaluate(x, y, verbose=0)
print("%s: %.2f%%" % (loaded_model.metrics_names[1], score[1]*100))

Output:

Saved model to disk
Loaded model from disk
accuracy: 54.85%

Hope this answers your question. Happy learning.
