I want to monitor the learning rate during training, both in the Keras progress bar and in TensorBoard. I figure there must be a way to specify which variables are logged, but there is no immediate clarification of this on the Keras website.
I guess this has something to do with creating a custom callback, but it should be possible to modify the already-existing progress bar callback, right?
This can be achieved with a custom metric. Take the learning rate as an example:
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model
from keras.optimizers import Adam
from keras.callbacks import LearningRateScheduler, TensorBoard

def get_lr_metric(optimizer):
    # a "metric" that simply reports the optimizer's current lr variable
    def lr(y_true, y_pred):
        return optimizer.lr
    return lr

x = Input((50,))
out = Dense(1, activation='sigmoid')(x)
model = Model(x, out)

optimizer = Adam(lr=0.001)
lr_metric = get_lr_metric(optimizer)
model.compile(loss='binary_crossentropy', optimizer=optimizer,
              metrics=['acc', lr_metric])

# halve the learning rate every 2 epochs
cbks = [LearningRateScheduler(lambda epoch: 0.001 * 0.5 ** (epoch // 2)),
        TensorBoard(write_graph=False)]

X = np.random.rand(1000, 50)
Y = np.random.randint(2, size=1000)
model.fit(X, Y, epochs=10, callbacks=cbks)
The learning rate will be printed in the progress bar:
Epoch 1/10
1000/1000 [==============================] - 0s 103us/step - loss: 0.8228 - acc: 0.4960 - lr: 0.0010
Epoch 2/10
1000/1000 [==============================] - 0s 61us/step - loss: 0.7305 - acc: 0.4970 - lr: 0.0010
Epoch 3/10
1000/1000 [==============================] - 0s 62us/step - loss: 0.7145 - acc: 0.4730 - lr: 5.0000e-04
Epoch 4/10
1000/1000 [==============================] - 0s 58us/step - loss: 0.7129 - acc: 0.4800 - lr: 5.0000e-04
Epoch 5/10
1000/1000 [==============================] - 0s 58us/step - loss: 0.7124 - acc: 0.4810 - lr: 2.5000e-04
Epoch 6/10
1000/1000 [==============================] - 0s 63us/step - loss: 0.7123 - acc: 0.4790 - lr: 2.5000e-04
Epoch 7/10
1000/1000 [==============================] - 0s 61us/step - loss: 0.7119 - acc: 0.4840 - lr: 1.2500e-04
Epoch 8/10
1000/1000 [==============================] - 0s 61us/step - loss: 0.7117 - acc: 0.4880 - lr: 1.2500e-04
Epoch 9/10
1000/1000 [==============================] - 0s 59us/step - loss: 0.7116 - acc: 0.4880 - lr: 6.2500e-05
Epoch 10/10
1000/1000 [==============================] - 0s 63us/step - loss: 0.7115 - acc: 0.4880 - lr: 6.2500e-05
Then, you can visualize the LR curve in TensorBoard.
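To actually see the curve, launch TensorBoard against the callback's log directory. The Keras TensorBoard callback writes to ./logs by default (no custom log_dir was passed above), so, assuming TensorBoard is installed, run from a shell:

tensorboard --logdir=./logs

The lr metric then shows up as a scalar alongside loss and acc.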
Another way (in fact, an encouraged one) of passing custom values to TensorBoard is by subclassing the keras.callbacks.TensorBoard class. This lets you apply custom functions to compute the metrics you want and pass them directly to TensorBoard. Here is an example for the learning rate of the Adam optimizer:
import numpy as np
from keras import backend as K
from keras.callbacks import TensorBoard

class SubTensorBoard(TensorBoard):
    def __init__(self, *args, **kwargs):
        super(SubTensorBoard, self).__init__(*args, **kwargs)

    def lr_getter(self):
        # Get the optimizer's variables
        decay = self.model.optimizer.decay
        lr = self.model.optimizer.lr
        iters = self.model.optimizer.iterations  # only this should not be const
        beta_1 = self.model.optimizer.beta_1
        beta_2 = self.model.optimizer.beta_2
        # Recompute the effective learning rate the same way Adam does internally:
        # apply decay, then the bias-correction factor sqrt(1 - beta_2^t) / (1 - beta_1^t)
        lr = lr * (1. / (1. + decay * K.cast(iters, K.dtype(decay))))
        t = K.cast(iters, K.floatx()) + 1
        lr_t = lr * (K.sqrt(1. - K.pow(beta_2, t)) / (1. - K.pow(beta_1, t)))
        return np.float32(K.eval(lr_t))

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        logs.update({"lr": self.lr_getter()})
        super(SubTensorBoard, self).on_epoch_end(epoch, logs)
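A minimal usage sketch, assuming the same model, optimizer and data as in the first answer: SubTensorBoard is a drop-in replacement for the plain TensorBoard callback.

cbks = [LearningRateScheduler(lambda epoch: 0.001 * 0.5 ** (epoch // 2)),
        SubTensorBoard(write_graph=False)]
model.fit(X, Y, epochs=10, callbacks=cbks)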