TensorFlow: memory error when building graphs in a loop

Problem description

I want to modify an existing model and test its predictions, so I build a graph, test it, and then build the next graph, doing this in a for loop. In more detail: in get_new_graph() I load a pre-trained VGG16 model and add a single layer to the network. Depending on the test I select, the size of this last layer varies.

import tensorflow as tf
import vgg

slim = tf.contrib.slim

def experiment():
    for test in tests:
        tf.reset_default_graph()
        X, new_pred = get_new_graph(test)  # load VGG16 model + add layer
        variables_to_restore = slim.get_variables_to_restore()
        saver = tf.train.Saver(variables_to_restore)
        with tf.Session() as sess:
            saver.restore(sess, './vgg16.ckpt')
            for k in range(100):
                R = sess.run(new_pred, feed_dict={X: images})
                print(R)
            # no sess.close() needed: the with-block closes the session
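For reference, here is a minimal sketch of what get_new_graph might look like, assuming the slim VGG16 model definition from the TF-Slim model library; the helper num_outputs_for(test) is hypothetical and stands in for however the test determines the new layer's size:

def get_new_graph(test):
    # Input placeholder for a batch of 224x224 RGB images.
    X = tf.placeholder(tf.float32, shape=[None, 224, 224, 3])
    # Run VGG16 up to its fc7 features; end_points exposes the
    # intermediate layers by name.
    _, end_points = vgg.vgg_16(X, num_classes=1000, is_training=False)
    features = tf.squeeze(end_points['vgg_16/fc7'], [1, 2])
    # Add one new fully connected layer whose width depends on the
    # current test (num_outputs_for is a hypothetical helper).
    new_pred = slim.fully_connected(features, num_outputs_for(test),
                                    activation_fn=None, scope='new_layer')
    return X, new_pred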

When I run this code, I can get through three tests with 1000 images from ImageNet each. Then I get an out-of-memory error because the GPU memory is full:

W tensorflow/core/common_runtime/bfc_allocator.cc:267]
Allocator (GPU_0_bfc) ran out of memory trying to 
allocate 64.00MiB.  Current allocation summary follows.

How can I modify my code so that it runs?

Tags: python, tensorflow
1 Answer

As mentioned in this TensorFlow GitHub issue: http://github.com/tensorflow/tensorflow/issues/17048

It seems that you can create each session in a separate process, so that the GPU memory is released when the process terminates.

It could look like this:

from multiprocessing import Pool

def _process(test, images):
    # Build the graph inside the worker process: TensorFlow graph
    # objects (placeholders, savers) cannot be pickled and passed
    # across process boundaries.
    tf.reset_default_graph()
    X, new_pred = get_new_graph(test)  # load VGG16 model + add layer
    variables_to_restore = slim.get_variables_to_restore()
    saver = tf.train.Saver(variables_to_restore)
    with tf.Session() as sess:
        saver.restore(sess, './vgg16.ckpt')
        for k in range(100):
            R = sess.run(new_pred, feed_dict={X: images})
            print(R)


def experiment():
    for test in tests:
        # A fresh single-worker pool per test: when the worker process
        # exits, the GPU memory held by its session is released.
        with Pool(1) as p:
            p.apply(_process, (test, images))
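One extra note, as an assumption beyond what the linked issue says: on Linux, multiprocessing defaults to the fork start method, and a forked worker inherits the parent's process state; starting workers with spawn gives each one a clean interpreter and a fresh CUDA context:

import multiprocessing as mp

if __name__ == '__main__':
    # 'spawn' launches workers in a fresh Python interpreter, so no
    # TensorFlow/CUDA state is inherited from the parent process.
    mp.set_start_method('spawn')
    experiment()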