Why did training stop? I am using the EarlyStopping callback in my TensorFlow model


I have a question. I applied the EarlyStopping callback to my model to limit the number of epochs. As far as I understand, this callback automatically stops training when the metric I set stops improving, but training stopped at a point where I did not want it to. I would appreciate it if you could tell me why.

[My code]

from keras.callbacks import EarlyStopping, LearningRateScheduler, ModelCheckpoint

def early_stopping(patience=5, monitor="val_loss"):
    # Stop training when `monitor` has not improved for `patience` consecutive epochs.
    callback = EarlyStopping(monitor=monitor, patience=patience)
    return callback

def lr_scheduler(epoch=10, ratio=0.1):
    """
    After `epoch` epochs, multiply the learning rate by `ratio` (1/10 by default)
    on every subsequent epoch.
    """

    def lr_scheduler_func(e, lr):
        if e < epoch:
            return lr
        else:
            return lr * ratio

    callback = LearningRateScheduler(lr_scheduler_func)
    return callback

def checkpoint(
    filepath,
    monitor="val_accuracy",
    save_best_only=True,
    mode="max",
    save_weights_only=True,
):
    callback = ModelCheckpoint(
        filepath=filepath,
        monitor=monitor,  # metric to monitor
        verbose=1,
        save_best_only=save_best_only,  # keep only the best-performing checkpoint
        mode=mode,  # "max" because a higher val_accuracy is better
        save_weights_only=save_weights_only,  # save only the weights, not the full model
    )
    return callback

early_stop_cb = early_stopping()
lr_cb = lr_scheduler(20)
ckpt_cb = checkpoint("./epic_models/DN_TL_230909_2.h5")
history = model.fit(
    train_data,
    epochs=50,
    validation_data=valid_data,
    callbacks=[early_stop_cb, lr_cb, ckpt_cb],
)
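
For reference, a small post-training check (a hypothetical snippet, not part of the training script above) to see how many epochs actually ran and where EarlyStopping decided to stop:

# Hypothetical check after model.fit(): where did EarlyStopping halt training?
print("Epochs completed:", len(history.history["loss"]))
print("Stopped at epoch:", early_stop_cb.stopped_epoch)  # 0 if early stopping never triggered
print("Best monitored val_loss:", early_stop_cb.best)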


[Output]

Epoch 1/50
2023-09-09 22:42:38.446232: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
416/416 [==============================] - ETA: 0s - loss: 1.1175 - accuracy: 0.6637
2023-09-09 22:44:14.590122: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.

Epoch 1: val_accuracy improved from -inf to 0.78931, saving model to ./epic_models/DN_TL_230909_2.h5
416/416 [==============================] - 124s 289ms/step - loss: 1.1175 - accuracy: 0.6637 - val_loss: 0.5924 - val_accuracy: 0.7893 - lr: 0.0010
Epoch 2/50
416/416 [==============================] - ETA: 0s - loss: 0.4517 - accuracy: 0.8430
Epoch 2: val_accuracy improved from 0.78931 to 0.83670, saving model to ./epic_models/DN_TL_230909_2.h5
416/416 [==============================] - 143s 344ms/step - loss: 0.4517 - accuracy: 0.8430 - val_loss: 0.4349 - val_accuracy: 0.8367 - lr: 0.0010
Epoch 3/50
416/416 [==============================] - ETA: 0s - loss: 0.3435 - accuracy: 0.8760
Epoch 3: val_accuracy improved from 0.83670 to 0.83972, saving model to ./epic_models/DN_TL_230909_2.h5
416/416 [==============================] - 164s 394ms/step - loss: 0.3435 - accuracy: 0.8760 - val_loss: 0.3872 - val_accuracy: 0.8397 - lr: 0.0010
Epoch 4/50
416/416 [==============================] - ETA: 0s - loss: 0.2851 - accuracy: 0.8946
Epoch 4: val_accuracy improved from 0.83972 to 0.86115, saving model to ./epic_models/DN_TL_230909_2.h5
416/416 [==============================] - 178s 428ms/step - loss: 0.2851 - accuracy: 0.8946 - val_loss: 0.3451 - val_accuracy: 0.8612 - lr: 0.0010
Epoch 5/50
416/416 [==============================] - ETA: 0s - loss: 0.2453 - accuracy: 0.9057
Epoch 5: val_accuracy improved from 0.86115 to 0.87534, saving model to ./epic_models/DN_TL_230909_2.h5
416/416 [==============================] - 188s 452ms/step - loss: 0.2453 - accuracy: 0.9057 - val_loss: 0.3179 - val_accuracy: 0.8753 - lr: 0.0010
Epoch 6/50
416/416 [==============================] - ETA: 0s - loss: 0.2240 - accuracy: 0.9113
Epoch 6: val_accuracy improved from 0.87534 to 0.88711, saving model to ./epic_models/DN_TL_230909_2.h5
416/416 [==============================] - 184s 444ms/step - loss: 0.2240 - accuracy: 0.9113 - val_loss: 0.2909 - val_accuracy: 0.8871 - lr: 0.0010
Epoch 7/50
416/416 [==============================] - ETA: 0s - loss: 0.2000 - accuracy: 0.9212
Epoch 7: val_accuracy did not improve from 0.88711
416/416 [==============================] - 191s 459ms/step - loss: 0.2000 - accuracy: 0.9212 - val_loss: 0.3114 - val_accuracy: 0.8775 - lr: 0.0010
Epoch 8/50
416/416 [==============================] - ETA: 0s - loss: 0.1830 - accuracy: 0.9280
Epoch 8: val_accuracy did not improve from 0.88711
416/416 [==============================] - 193s 463ms/step - loss: 0.1830 - accuracy: 0.9280 - val_loss: 0.3300 - val_accuracy: 0.8723 - lr: 0.0010
Epoch 9/50
416/416 [==============================] - ETA: 0s - loss: 0.1666 - accuracy: 0.9324
Epoch 9: val_accuracy did not improve from 0.88711
416/416 [==============================] - 198s 476ms/step - loss: 0.1666 - accuracy: 0.9324 - val_loss: 0.3219 - val_accuracy: 0.8787 - lr: 0.0010
Epoch 10/50
416/416 [==============================] - ETA: 0s - loss: 0.1579 - accuracy: 0.9335
Epoch 10: val_accuracy did not improve from 0.88711
416/416 [==============================] - 201s 483ms/step - loss: 0.1579 - accuracy: 0.9335 - val_loss: 0.3707 - val_accuracy: 0.8596 - lr: 0.0010
Epoch 11/50
416/416 [==============================] - ETA: 0s - loss: 0.1477 - accuracy: 0.9401
Epoch 11: val_accuracy did not improve from 0.88711
416/416 [==============================] - 202s 486ms/step - loss: 0.1477 - accuracy: 0.9401 - val_loss: 0.3081 - val_accuracy: 0.8832 - lr: 0.0010
Tags: tensorflow, keras, callback, early-stopping

1 Answer

I'm sorry, I found the answer to my own question. The patience policy is measured against the BEST score seen so far: if val_loss fails to improve on the best value for 5 consecutive epochs, training stops. In my log the best val_loss (0.2909) was reached at epoch 6, and epochs 7 through 11 were all worse, so training ended after epoch 11. Thank you.
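
If the goal is to train longer while still keeping the best model, one common adjustment is to raise patience and enable restore_best_weights, so that fit() ends with the weights from the best val_loss epoch. A minimal sketch (patience=10 is only an illustrative value, not a recommendation for this particular model):

from keras.callbacks import EarlyStopping

# Sketch: wait longer before stopping, then roll the model back to the
# epoch with the best val_loss once training ends.
early_stop_cb = EarlyStopping(
    monitor="val_loss",
    patience=10,
    restore_best_weights=True,
    verbose=1,
)

Note also that in the code above ModelCheckpoint monitors val_accuracy while EarlyStopping monitors val_loss, so the saved .h5 file and the point where training stops are judged by different metrics.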
