TensorFlow hangs at compute_gradients

0 votes · 3 answers

I have a problem where my TensorFlow execution gets stuck at compute_gradients. I initialize my model and then set up the loss function like this. Note that at this point I have not started training yet, so the problem is not my data:

# The model for training
given_model = GivenModel(images_input=images_t)

print("Done setting up the model")

with tf.device('/gpu:0'):
    with tf.variable_scope('prediction_loss'):
        logits = given_model.prediction

        # `labels` is defined earlier in the script
        softmax_loss_per_sample = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits,
                                                           labels=labels))

        total_training_loss = softmax_loss_per_sample

        optimizer = tf.train.AdamOptimizer()
        # `gradient_clip_threshold` is a scalar defined earlier in the script
        gradients, variables = zip(*optimizer.compute_gradients(total_training_loss))
        gradients, _ = tf.clip_by_global_norm(gradients, gradient_clip_threshold)
        optimize = optimizer.apply_gradients(zip(gradients, variables))

    with tf.control_dependencies([optimize]):
        train_op = tf.constant(0)

This code just hangs and does nothing. When I ctrl+c out of it (no matter how long it has been running), it is always stuck at compute_gradients.

Does anyone know why this could happen? I am not doing this in a loop, and my model is not that big. It also seems to be doing this on the CPU (no memory has been allocated on the GPU yet); despite the with tf.device('/gpu:0'): option, I cannot force it to use the GPU.

Thanks

This is what gets printed when I do ctrl+c:

gradients, variables = zip(*optimizer.compute_gradients(total_training_loss))
  File ".local/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 35$, in compute_gradients
    colocate_gradients_with_ops=colocate_gradients_with_ops)
  File ".local/lib/python2.7/site-packages/tensorflow/python/ops/gradients_impl.py", line 48$, in gradients
    in_grads = grad_fn(op, *out_grads)
  File ".local/lib/python2.7/site-packages/tensorflow/python/ops/nn_grad.py", line 269, in _ReluGrad
    return gen_nn_ops._relu_grad(grad, op.outputs[0])
  File ".local/lib/python2.7/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 2212, in _relu_grad
    features=features, name=name)
  File ".local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op
    op_def=op_def)
  File ".local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2395, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File ".local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1268, in __init__
    self._control_flow_context.AddOp(self)
  File ".local/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line $039, in AddOp
    self._AddOpInternal(op)
  File ".local/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line $062, in _AddOpInternal
    real_x = self.AddValue(x)
  File ".local/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line $998, in AddValue
    real_val = grad_ctxt.grad_state.GetRealValue(val)
  File ".local/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line $001, in GetRealValue
    history_value = cur_grad_state.AddForwardAccumulator(cur_value)
  File ".local/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 892, in AddForwardAccumulator
    self.forward_index.op._add_control_input(push.op)
  File ".local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1434, in _add_control_input
    self._add_control_inputs([op])
  File ".local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1422, in _add_control_inputs
    self._recompute_node_def()
  File ".local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1442, in _recompute_node_def
    self._control_inputs])
  File ".local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1317, in name
    return self._node_def.name
KeyboardInterrupt
Tags: python, tensorflow

3 Answers
1 vote

If you have not started training at this point, maybe it is related to the graph structure. Are you sure GivenModel is correct? I adapted this autoencoder example with your definition of the optimizer as follows, and I see no problem when executing this code:

from __future__ import division, print_function, absolute_import

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# Training Parameters
learning_rate = 0.01
num_steps = 10
batch_size = 8

# Network Parameters
num_hidden_1 = 256 # 1st layer num features
num_hidden_2 = 128 # 2nd layer num features (the latent dim)
num_input = 784 # MNIST data input (img shape: 28*28)

# tf Graph input (only pictures)
X = tf.placeholder("float", [None, num_input])

weights = {
    'encoder_h1': tf.Variable(tf.random_normal([num_input, num_hidden_1])),
    'encoder_h2': tf.Variable(tf.random_normal([num_hidden_1, num_hidden_2])),
    'decoder_h1': tf.Variable(tf.random_normal([num_hidden_2, num_hidden_1])),
    'decoder_h2': tf.Variable(tf.random_normal([num_hidden_1, num_input])),
}
biases = {
    'encoder_b1': tf.Variable(tf.random_normal([num_hidden_1])),
    'encoder_b2': tf.Variable(tf.random_normal([num_hidden_2])),
    'decoder_b1': tf.Variable(tf.random_normal([num_hidden_1])),
    'decoder_b2': tf.Variable(tf.random_normal([num_input])),
}

# Building the encoder
def encoder(x):
    # Encoder Hidden layer with sigmoid activation #1
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['encoder_h1']),
                                   biases['encoder_b1']))
    # Encoder Hidden layer with sigmoid activation #2
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['encoder_h2']),
                                   biases['encoder_b2']))
    return layer_2


# Building the decoder
def decoder(x):
    # Decoder Hidden layer with sigmoid activation #1
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['decoder_h1']),
                                   biases['decoder_b1']))
    # Decoder Hidden layer with sigmoid activation #2
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['decoder_h2']),
                                   biases['decoder_b2']))
    return layer_2

# Construct model
encoder_op = encoder(X)
decoder_op = decoder(encoder_op)

# Prediction
y_pred = decoder_op
# Targets (Labels) are the input data.
y_true = X

# Define loss and optimizer, minimize the squared error
### your code with a reconstruction loss
with tf.device('/gpu:0'):
    with tf.variable_scope('prediction_loss'):

        loss = tf.reduce_mean(tf.pow(y_true - y_pred, 2))

        optimizer = tf.train.AdamOptimizer()
        gradients, variables = zip(*optimizer.compute_gradients(loss))
        gradients, _ = tf.clip_by_global_norm(gradients, 5.0)
        optimize = optimizer.apply_gradients(zip(gradients, variables))

    with tf.control_dependencies([optimize]):
        train_op = tf.constant(0)
### end of your code

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()

# Start Training
# Start a new TF session
with tf.Session() as sess:

    # Run the initializer
    sess.run(init)

    # Training
    for i in range(1, num_steps+1):
        # Prepare Data
        # Get the next batch of MNIST data (only images are needed, not labels)
        batch_x, _ = mnist.train.next_batch(batch_size)

        # Run optimization op (backprop) and cost op (to get loss value)
        _, l = sess.run([train_op, loss], feed_dict={X: batch_x})
        # Display logs per step
        print('Step %i: Minibatch Loss: %f' % (i, l))

So I think the problem is probably related to the rest of the model, but to be sure we would need more details about it.

Now, about whether the model is placed on the CPU or the GPU: if you have not explicitly pinned anything to the CPU, a GPU device is selected for you automatically, so in theory the model should end up on the GPU by itself. However, there may be a problem with the graph structure, and execution never reaches the point at which the model is actually allocated in GPU memory.
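
One way to verify where ops actually land (my addition, not part of the original answer) is to enable device placement logging when creating the session; allow_soft_placement additionally lets ops without a GPU kernel fall back to the CPU instead of raising an error:

import tensorflow as tf

# Log the device each op is assigned to, and fall back to CPU for ops
# that have no GPU kernel (hard tf.device('/gpu:0') pinning would
# otherwise fail for such ops).
config = tf.ConfigProto(log_device_placement=True, allow_soft_placement=True)
sess = tf.Session(config=config)
sess.run(tf.global_variables_initializer())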


0 votes

For me the problem was that the model was too big. Making it smaller solved the problem.
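
If you suspect the same cause, a quick sanity check (my addition, not from this answer) is to count the ops in the default graph right before compute_gradients is called, since gradient construction time grows with graph size:

import tensorflow as tf

# A very large op count here suggests that building the gradient graph
# itself is the bottleneck, which can look like a hang.
n_ops = len(tf.get_default_graph().get_operations())
print("ops in graph before compute_gradients: %d" % n_ops)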


0 votes

I have run into this problem for a couple of reasons:

  1. The model was too big, so reduce the batch size.
  2. Some variable has no gradient. I checked it like this (a workaround is sketched after this list):

     clone_grads = optimizer.compute_gradients(total_clone_loss)
     for grad_and_vars in zip(*clone_grads):
         tf.logging.info("after clone_grads " + str(grad_and_vars))

     It prints:

     INFO:tensorflow:after clone_grads ((,),)
     INFO:tensorflow:after clone_grads ((None,),)

     The None in the second line marks a variable that receives no gradient.
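
If a None gradient is indeed the culprit, a minimal workaround sketch (my addition, reusing the variable names from the question) is to drop the (None, var) pairs before clipping, since tf.clip_by_global_norm cannot handle None entries:

grads_and_vars = optimizer.compute_gradients(total_training_loss)
# Skip variables that receive no gradient; clip_by_global_norm fails on None.
grads_and_vars = [(g, v) for g, v in grads_and_vars if g is not None]
gradients, variables = zip(*grads_and_vars)
gradients, _ = tf.clip_by_global_norm(gradients, gradient_clip_threshold)
optimize = optimizer.apply_gradients(zip(gradients, variables))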