Keras model training memory leak


I am new to Keras, TensorFlow, and Python, and I am trying to build a model for personal use / future learning. I have only just started with Python and came up with this code (with the help of videos and tutorials). My problem is that my Python memory usage slowly increases with every epoch, and even after a new model is built. Once memory usage reaches 100%, training simply stops with no error or warning. I don't know much yet, but the problem should be inside the loop (if I'm not mistaken). I know about

K.clear_session()

but it did not remove the problem, or I don't know how to integrate it into my code. I have: Python 3.6.4, TensorFlow 2.0.0rc1 (CPU version), Keras 2.3.0.

Here is my code:

import pandas as pd
import os
import time
import tensorflow as tf
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM, BatchNormalization
from tensorflow.keras.callbacks import TensorBoard, ModelCheckpoint

EPOCHS = 25
BATCH_SIZE = 32           

df = pd.read_csv("EntryData.csv", names=['1SH5', '1SHA', '1SA5', '1SAA', '1WH5', '1WHA',
                                         '2SA5', '2SAA', '2SH5', '2SHA', '2WA5', '2WAA',
                                         '3R1', '3R2', '3R3', '3R4', '3R5', '3R6',
                                         'Target'])

df_val = 14554  # last row index kept for training; everything after it becomes validation data

validation_df = df[df.index > df_val]
df = df[df.index <= df_val]

train_x = df.drop(columns=['Target'])
train_y = df[['Target']]
validation_x = validation_df.drop(columns=['Target'])
validation_y = validation_df[['Target']]

train_x = np.asarray(train_x)
train_y = np.asarray(train_y)
validation_x = np.asarray(validation_x)
validation_y = np.asarray(validation_y)

# reshape to (samples, timesteps=1, features) as expected by the LSTM layers
train_x = train_x.reshape(train_x.shape[0], 1, train_x.shape[1])
validation_x = validation_x.reshape(validation_x.shape[0], 1, validation_x.shape[1])

# hyperparameter grid; note that despite its name, "conv_layers" controls
# the number of stacked LSTM layers built below
dense_layers = [0, 1, 2]
layer_sizes = [32, 64, 128]
conv_layers = [1, 2, 3]

for dense_layer in dense_layers:
    for layer_size in layer_sizes:
        for conv_layer in conv_layers:
            NAME = "{}-conv-{}-nodes-{}-dense-{}".format(conv_layer, layer_size, 
                    dense_layer, int(time.time()))
            tensorboard = TensorBoard(log_dir=os.path.join("logs", NAME))
            print(NAME)

            model = Sequential()
            model.add(LSTM(layer_size, input_shape=(train_x.shape[1:]), 
                                       return_sequences=True))
            model.add(Dropout(0.2))
            model.add(BatchNormalization())

            for _ in range(conv_layer - 1):
                model.add(LSTM(layer_size, return_sequences=True))
                model.add(Dropout(0.1))
                model.add(BatchNormalization())

            for _ in range(dense_layer):
                model.add(Dense(layer_size, activation='relu'))
                model.add(Dropout(0.2))

            model.add(Dense(2, activation='softmax'))

            opt = tf.keras.optimizers.Adam(learning_rate=0.001, decay=1e-6)

            # Compile model
            model.compile(loss='sparse_categorical_crossentropy',
                          optimizer=opt,
                          metrics=['accuracy'])

            # unique file name that will include the epoch
            # and the validation acc for that epoch
            filepath = "RNN_Final.{epoch:02d}-{val_accuracy:.3f}"
            # the keyword arguments must be passed to ModelCheckpoint itself,
            # not to str.format(); the metric is named 'val_accuracy' in Keras 2.3
            checkpoint = ModelCheckpoint(os.path.join("models", "{}.model".format(filepath)),
                                         monitor='val_accuracy', verbose=0,
                                         save_best_only=True, mode='max')  # saves only the best ones

            # Train model
            history = model.fit(
                train_x, train_y,
                batch_size=BATCH_SIZE,
                epochs=EPOCHS,
                validation_data=(validation_x, validation_y),
                callbacks=[tensorboard, checkpoint])

# Score model (note: this runs after the loop, so it only evaluates the last model built)
score = model.evaluate(validation_x, validation_y, verbose=2)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
# Save model
model.save(os.path.join("models", NAME))

And I also don't know whether it is OK to ask two questions within one post (I don't want to spam the site with my questions; anyone with Python experience could probably solve them within a minute), but I also have a problem with checkpoint saving. I want to save only the best-performing model (1 model per NN specification - number of nodes/layers), but currently one is saved after every epoch. If this is not appropriate, I can open a separate question for it.
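A minimal sketch of one common way to get this behaviour (an assumption, not from the original post): give ModelCheckpoint a fixed filepath without the {epoch} placeholder, so that save_best_only=True keeps overwriting a single file and only the best epoch per NN specification survives:

import os
from tensorflow.keras.callbacks import ModelCheckpoint

# fixed file name (no {epoch} placeholder): with save_best_only=True the file
# is overwritten only when val_accuracy improves, so exactly one file per
# NN spec remains; NAME is assumed to come from the loop in the question
best_checkpoint = ModelCheckpoint(os.path.join("models", "best-{}.model".format(NAME)),
                                  monitor='val_accuracy', save_best_only=True,
                                  mode='max', verbose=0)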

Thank you very much for any help.

python tensorflow memory keras checkpoint
1 Answer

One source of the problem is that a new loop iteration with model = Sequential() does not delete the previous model; it is still built within its TensorFlow graph scope, and every new model = Sequential() adds another lingering construct that eventually overflows memory. To make sure a model is properly and completely destroyed, run the following once you are done with it:

import gc
from tensorflow.keras import backend as K  # import needed for clear_session()

del model          # drop the Python reference to the model
gc.collect()       # collect what del left behind
K.clear_session()  # destroy the TensorFlow graph

gc is Python's garbage-collection module, which clears the residual traces of model after del. K.clear_session() is the main call, and it clears the TensorFlow graph.
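Integrated into the training loop from the question, the cleanup runs once per model, after training and saving are finished - a minimal sketch (it assumes the lists, Sequential, gc and K from above are in scope):

for dense_layer in dense_layers:
    for layer_size in layer_sizes:
        for conv_layer in conv_layers:
            model = Sequential()
            # ... build, compile, fit and checkpoint the model as in the question ...

            # tear the model down completely before the next combination is built
            del model
            gc.collect()
            K.clear_session()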

Also, while your idea on model checkpointing, logging, and hyperparameter search is quite sound, it is faultily executed; you will in fact be testing only one hyperparameter combination for the entire nested loop you have set up there. But that should be raised in a separate question.
