Is there a way to stop training in the middle of an epoch with TensorFlow?

Question · votes: 3 · answers: 1

Just wondering if there is a way to save the highest accuracy and lowest loss reached in the middle of an epoch and use that as the score going forward. Normally my data tops out at 43.56% accuracy, but I have seen it climb above 46% partway through an epoch. Is there any way I can stop the epoch at that point and carry that score forward?

Here is the code I am currently running:

import pandas as pd
import numpy as np
import pickle
import random
from skopt import BayesSearchCV
from sklearn.neural_network import MLPRegressor
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM, Bidirectional, SimpleRNN, GRU
from tensorflow.keras.wrappers.scikit_learn import KerasRegressor
from tensorflow.keras import layers
import tensorflow_docs as tfdocs
import tensorflow as tf
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
import tensorflow_docs.modeling
from tensorflow import keras
import warnings
warnings.filterwarnings("ignore")
warnings.filterwarnings('ignore', category=DeprecationWarning)

# Note: `avenues`, `clean_dataset`, `norm`, `build_model`, `param_grid`,
# `es_acc`, and `es_loss` are defined elsewhere in the original script.
train_df = avenues[["LFA's", "Spend"]].sample(frac=0.8, random_state=0)
test_df = avenues[["LFA's", "Spend"]].drop(train_df.index)
train_df = clean_dataset(train_df)
test_df = clean_dataset(test_df)
train_df = train_df.reset_index(drop=True)
test_df = test_df.reset_index(drop=True)
train_stats = train_df.describe()
train_stats = train_stats.pop("LFA's")
train_stats = train_stats.transpose()
train_labels = train_df.pop("LFA's").values
test_labels = test_df.pop("LFA's").values
normed_train_data = np.array(norm(train_df)).reshape((train_df.shape[0], 1, 1))
normed_test_data = np.array(norm(test_df)).reshape((test_df.shape[0], 1, 1))
model = KerasRegressor(build_fn=build_model, epochs=25,
                       batch_size=1, verbose=0)
gs = BayesSearchCV(model, param_grid, cv=3, n_iter=25, n_jobs=1,
                   optimizer_kwargs={'base_estimator': 'RF'},
                   fit_params={"callbacks": [es_acc, es_loss,
                                             tfdocs.modeling.EpochDots()]})
try:
    gs.fit(normed_train_data, train_labels)
except Exception as e:
    print(e)
Tags: python · keras · early-stopping
1 Answer

0 votes

Try using train_on_batch instead of fit. That way you can stop an epoch yourself after whichever batch you like (though from a machine-learning point of view I doubt it is a good idea — you will likely end up with a less general model).
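A minimal sketch of what that manual loop could look like. The toy data, model, and the 0.46 accuracy threshold are placeholders standing in for the asker's setup, not code from the question:

```python
import numpy as np
import tensorflow as tf

# Toy data standing in for the question's normed inputs/labels.
x = np.random.rand(64, 1, 1).astype("float32")
y = (x.reshape(64) > 0.5).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

batch_size = 8
target_acc = 0.46  # assumed threshold; stop as soon as a batch reaches it

stopped = False
for epoch in range(5):
    for start in range(0, len(x), batch_size):
        # train_on_batch runs one gradient step and returns [loss, accuracy]
        loss, acc = model.train_on_batch(
            x[start:start + batch_size], y[start:start + batch_size]
        )
        if acc >= target_acc:
            # keep the weights that produced the good mid-epoch score
            model.save_weights("best_mid_epoch.weights.h5")
            stopped = True
            break
    if stopped:
        break
```

Because you own the loop, "stopping mid-epoch" is just a `break`; the cost is that you give up `fit`'s built-in callbacks and progress handling.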


0 votes

You can write a custom Keras callback to adjust your training in the middle of an epoch.
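A hedged sketch of such a callback: it watches the batch-level metric, checkpoints the best weights seen so far, and sets `model.stop_training` to halt mid-epoch once a target is reached. The metric name `"accuracy"`, the target value, and the file name are assumptions to adapt to your compiled model:

```python
import numpy as np
import tensorflow as tf

class MidEpochCheckpoint(tf.keras.callbacks.Callback):
    """Save best weights and optionally stop training between batches."""

    def __init__(self, monitor="accuracy", target=None,
                 path="mid_epoch.weights.h5"):
        super().__init__()
        self.monitor = monitor
        self.target = target
        self.path = path
        self.best = -np.inf

    def on_train_batch_end(self, batch, logs=None):
        current = (logs or {}).get(self.monitor)
        if current is None:
            return
        if current > self.best:
            self.best = current
            self.model.save_weights(self.path)  # best weights seen so far
        if self.target is not None and current >= self.target:
            self.model.stop_training = True     # halt mid-epoch

# Toy usage standing in for the question's model:
x = np.random.rand(64, 4).astype("float32")
y = (x.sum(axis=1) > 2).astype("float32")
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile("adam", "binary_crossentropy", metrics=["accuracy"])
cb = MidEpochCheckpoint(monitor="accuracy", target=0.46)
model.fit(x, y, epochs=3, batch_size=8, verbose=0, callbacks=[cb])
```

Unlike switching to `train_on_batch`, this keeps `fit` (and so could be passed through `fit_params={"callbacks": [...]}` in the asker's search), while still reacting between batches rather than only at epoch boundaries.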
