How to use tf.train.exponential_decay with a premade estimator?


I'm trying to use tf.train.exponential_decay with one of the premade estimators, and for some reason it is proving very difficult. Am I missing something here?

Here is my old code, with a constant learning rate:

classifier = tf.estimator.DNNRegressor(
    feature_columns=f_columns,
    model_dir='./TF',
    hidden_units=[2, 2],
    optimizer=tf.train.ProximalAdagradOptimizer(
      learning_rate=0.50,
      l1_regularization_strength=0.001,
    ))

Now I tried adding this:

starter_learning_rate = 0.50
global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
                                           10000, 0.96, staircase=True)

But what now?

  • estimator.predict() does not accept global_step, so it will stay at 0?
  • Even if I pass learning_rate to tf.train.ProximalAdagradOptimizer(), I get an error saying:

"ValueError: Tensor("ExponentialDecay:0", shape=(), dtype=float32) must be from the same graph as Tensor("dnn/hiddenlayer_0/kernel/part_0:0", shape=(62, 2), dtype=float32_ref)."

Your help is greatly appreciated. I'm using TF 1.6, btw.

1 Answer

You should create the optimizer inside your own model_fn, in the mode == tf.estimator.ModeKeys.TRAIN branch, so that the decayed learning rate is built in the same graph as the model.

Here is sample code:

import tensorflow as tf

def _model_fn(features, labels, mode, config):

    # ... build the network here and compute `loss`;
    # `learning_rate` is the starting rate (e.g. 0.50) ...

    assert mode == tf.estimator.ModeKeys.TRAIN

    # The Estimator creates and increments the global step for us.
    global_step = tf.train.get_global_step()
    decay_learning_rate = tf.train.exponential_decay(
        learning_rate, global_step, 100, 0.98, staircase=True)
    optimizer = tf.train.AdagradOptimizer(decay_learning_rate)

    # Run any pending update ops (e.g. batch norm) before the train step.
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        train_op = optimizer.minimize(loss, global_step=global_step)
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
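
The model_fn above is then wired in with the generic estimator, e.g. estimator = tf.estimator.Estimator(model_fn=_model_fn, model_dir='./TF').

If you would rather keep the premade DNNRegressor from the question, the canned estimators' optimizer argument also accepts a zero-argument callable. The callable is invoked inside the Estimator's own graph, which sidesteps the "must be from the same graph" error. Below is a minimal sketch under that assumption (the helper name _decaying_optimizer is mine; verify that your TF 1.6 build accepts a callable here):

import tensorflow as tf

def _decaying_optimizer():
    # Runs inside the Estimator's graph, so the decayed rate and the
    # model variables end up in the same graph.
    learning_rate = tf.train.exponential_decay(
        0.50,                         # starter_learning_rate from the question
        tf.train.get_global_step(),   # step variable managed by the Estimator
        10000, 0.96, staircase=True)
    return tf.train.ProximalAdagradOptimizer(
        learning_rate=learning_rate,
        l1_regularization_strength=0.001)

classifier = tf.estimator.DNNRegressor(
    feature_columns=f_columns,
    model_dir='./TF',
    hidden_units=[2, 2],
    optimizer=_decaying_optimizer)  # pass the callable itself, not a call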