TypeError when computing gradients using GradientTape.gradient


Hello,

I am currently trying to compute gradients in Tensorflow 1.13.1, using the GradientTape class as explained in the official documentation, but I get TypeError: Fetch argument None has invalid type <class 'NoneType'>. Below, I include two simple cases in which I get this error, using only out-of-the-box Tensorflow functionality: the first is the simpler minimal working example, and the second is what I actually need to solve or work around. For completeness, I am using Python 3.6.8.

The simpler one

import tensorflow as tf

tf.reset_default_graph()
x = tf.constant([1., 2., 3.])
with tf.GradientTape(persistent=True) as gg:
    gg.watch(x)
    f1 = tf.map_fn(lambda a: a**2, x)
    f2 = x*x

# Computes gradients
d_fx1 = gg.gradient(f1, x)     #Line that causes the error
d_fx2 = gg.gradient(f2, x)     #No error
del gg #delete persistent GradientTape

with tf.Session() as sess:
    d1, d2 = sess.run((d_fx1, d_fx2))
print(d1, d2)

In this code, f1 and f2 are computed in two different ways, but give the same array. However, when trying to compute the gradients with respect to them, the first line gives the following error, while the second works flawlessly. I report the error's stack trace below.

TypeError                                 Traceback (most recent call last)
<ipython-input-1-9c59a2cf2d9b> in <module>()
     15 
     16 with tf.Session() as sess:
---> 17     d1, d2 = sess.run((d_fx1, d_fx2))
     18 print(d1, d2)

C:\HOMEWARE\Miniconda3-Windows-x86_64\envs\rdwsenv\lib\site-packages\tensorflow\python\client\session.py in run(self, fetches, feed_dict, options, run_metadata)
    927     try:
    928       result = self._run(None, fetches, feed_dict, options_ptr,
--> 929                          run_metadata_ptr)
    930       if run_metadata:
    931         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

C:\HOMEWARE\Miniconda3-Windows-x86_64\envs\rdwsenv\lib\site-packages\tensorflow\python\client\session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
   1135     # Create a fetch handler to take care of the structure of fetches.
   1136     fetch_handler = _FetchHandler(
-> 1137         self._graph, fetches, feed_dict_tensor, feed_handles=feed_handles)
   1138 
   1139     # Run request and get response.

C:\HOMEWARE\Miniconda3-Windows-x86_64\envs\rdwsenv\lib\site-packages\tensorflow\python\client\session.py in __init__(self, graph, fetches, feeds, feed_handles)
    469     """
    470     with graph.as_default():
--> 471       self._fetch_mapper = _FetchMapper.for_fetch(fetches)
    472     self._fetches = []
    473     self._targets = []

C:\HOMEWARE\Miniconda3-Windows-x86_64\envs\rdwsenv\lib\site-packages\tensorflow\python\client\session.py in for_fetch(fetch)
    259     elif isinstance(fetch, (list, tuple)):
    260       # NOTE(touts): This is also the code path for namedtuples.
--> 261       return _ListFetchMapper(fetch)
    262     elif isinstance(fetch, collections.Mapping):
    263       return _DictFetchMapper(fetch)

C:\HOMEWARE\Miniconda3-Windows-x86_64\envs\rdwsenv\lib\site-packages\tensorflow\python\client\session.py in __init__(self, fetches)
    368     """
    369     self._fetch_type = type(fetches)
--> 370     self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
    371     self._unique_fetches, self._value_indices = _uniquify_fetches(self._mappers)
    372 

C:\HOMEWARE\Miniconda3-Windows-x86_64\envs\rdwsenv\lib\site-packages\tensorflow\python\client\session.py in <listcomp>(.0)
    368     """
    369     self._fetch_type = type(fetches)
--> 370     self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
    371     self._unique_fetches, self._value_indices = _uniquify_fetches(self._mappers)
    372 

C:\HOMEWARE\Miniconda3-Windows-x86_64\envs\rdwsenv\lib\site-packages\tensorflow\python\client\session.py in for_fetch(fetch)
    256     if fetch is None:
    257       raise TypeError('Fetch argument %r has invalid type %r' % (fetch,
--> 258                                                                  type(fetch)))
    259     elif isinstance(fetch, (list, tuple)):
    260       # NOTE(touts): This is also the code path for namedtuples.

TypeError: Fetch argument None has invalid type <class 'NoneType'>

Note that I also tried computing only one gradient at a time (i.e. with persistent=False), and got the same result.
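
As a side check (not part of the original trace, but easy to verify): the invalid fetch can be spotted before the Session even runs, since gg.gradient(f1, x) itself evaluates to None in this setup:

# Placed right after the two gradient calls above
print(d_fx1)   # None -> this is the fetch that sess.run rejects
print(d_fx2)   # a regular Tensor of shape (3,)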

What I actually need

Below, I include a minimal working example that reproduces the same error, this time while attempting to tackle the problem I am actually working on.

In this code, I use an RNN to compute an output from some inputs, and I need to compute the jacobian of the output w.r.t. the inputs.

import tensorflow as tf
from tensorflow.keras.layers import RNN, GRUCell

# Define size of variable. TODO: adapt to data
inp_dim = 2
num_units = 50
batch_size = 100
timesteps = 10

# Reset the graph, so as to avoid errors
tf.reset_default_graph()

# Building the model
inputs = tf.ones(shape=(timesteps, batch_size, inp_dim))

# Follow gradient computations
with tf.GradientTape() as g:
    g.watch(inputs)
    cells = [GRUCell(num_units), GRUCell(num_units)]
    rnn = RNN(cells, time_major=True, return_sequences=True)
    f = rnn(inputs)
d_fx = g.batch_jacobian(f, inputs)

# Run graph
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    grads = sess.run(d_fx)
grads.shape

As for the stack trace, I get the same error, just with fewer lines (this trace has one fewer for_fetch, <listcomp> and __init__). For completeness, I still include it below.

TypeError                                 Traceback (most recent call last)
<ipython-input-5-bb2ce4eebe87> in <module>()
     25 with tf.Session() as sess:
     26     sess.run(tf.global_variables_initializer())
---> 27     grads = sess.run(d_fx)
     28 grads.shape

C:\HOMEWARE\Miniconda3-Windows-x86_64\envs\rdwsenv\lib\site-packages\tensorflow\python\client\session.py in run(self, fetches, feed_dict, options, run_metadata)
    927     try:
    928       result = self._run(None, fetches, feed_dict, options_ptr,
--> 929                          run_metadata_ptr)
    930       if run_metadata:
    931         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

C:\HOMEWARE\Miniconda3-Windows-x86_64\envs\rdwsenv\lib\site-packages\tensorflow\python\client\session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
   1135     # Create a fetch handler to take care of the structure of fetches.
   1136     fetch_handler = _FetchHandler(
-> 1137         self._graph, fetches, feed_dict_tensor, feed_handles=feed_handles)
   1138 
   1139     # Run request and get response.

C:\HOMEWARE\Miniconda3-Windows-x86_64\envs\rdwsenv\lib\site-packages\tensorflow\python\client\session.py in __init__(self, graph, fetches, feeds, feed_handles)
    469     """
    470     with graph.as_default():
--> 471       self._fetch_mapper = _FetchMapper.for_fetch(fetches)
    472     self._fetches = []
    473     self._targets = []

C:\HOMEWARE\Miniconda3-Windows-x86_64\envs\rdwsenv\lib\site-packages\tensorflow\python\client\session.py in for_fetch(fetch)
    256     if fetch is None:
    257       raise TypeError('Fetch argument %r has invalid type %r' % (fetch,
--> 258                                                                  type(fetch)))
    259     elif isinstance(fetch, (list, tuple)):
    260       # NOTE(touts): This is also the code path for namedtuples.

TypeError: Fetch argument None has invalid type <class 'NoneType'>

I feel like some Tensorflow functionality has a bug that is tripping me up, but I am not sure. In the end, what I am after is a tensor containing the jacobian of my network's output w.r.t. the inputs. How can I achieve that, with other tools or by correcting my code?

EDIT: OK, so I took danyfang's comment into account and tried to look into the issue he referenced on Github, about tf.gradients returning None instead of 0 due to an implementation design decision in low-level Tensorflow.

Therefore, I tried to build a simple case where I am sure the gradient is different from 0, by computing tf.matmul(x, tf.transpose(x)). I post the MWE below.

import tensorflow as tf

tf.reset_default_graph()
x = tf.constant([[1., 2., 3.]])
with tf.GradientTape(persistent=True) as gg:
    gg.watch(x)
    y = tf.matmul(x, tf.transpose(x))
    f1 = tf.map_fn(lambda a: a, y)

# Computes gradients
d_fx1 = gg.gradient(f1, x)
d_yx = gg.gradient(y, x)
del gg #delete persistent GradientTape

with tf.Session() as sess:
    #d1 = sess.run(d_fx1) # Same error None type
    d2 = sess.run(d_yx) #Works flawlessly. returns array([[2., 4., 6.]], dtype=float32)
d2

This shows (at least in my opinion) that the error is produced not by the behavior reported in this issue, but by something else at a lower level of the implementation.

python tensorflow gradient recurrent-neural-network hessian
1 Answer

0 votes

EDIT: Below, I also report how I went about computing tf.hessians of the output w.r.t. the inputs.

I managed to compute the gradients using the function tf.gradients. However, according to the documentation this function uses symbolic differentiation, whereas GradientTape.gradient uses automatic differentiation. The paper I am following talks about automatic differentiation, so I do not know whether I will run into problems later on, but at least my code runs.
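
For the simpler map_fn case from the question, the same substitution looks like this (a minimal sketch under the same TF 1.13 graph-mode setup; the expected output follows from the math, d(a**2)/da = 2a):

import tensorflow as tf

tf.reset_default_graph()
x = tf.constant([1., 2., 3.])
f1 = tf.map_fn(lambda a: a**2, x)

# tf.gradients builds the symbolic gradient ops directly into the graph
[d_fx1] = tf.gradients(f1, x)

with tf.Session() as sess:
    print(sess.run(d_fx1))   # expected: [2. 4. 6.]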

Below, I post an MWE with the RNN code I used.

import tensorflow as tf
from tensorflow.keras.layers import RNN, GRUCell, Dense

# Define size of variable. TODO: adapt to data
inp_dim = 2
num_units = 50
batch_size = 100
timesteps = 10

# Reset the graph, so as to avoid errors
tf.reset_default_graph()

inputs = tf.ones(shape=(timesteps, batch_size, inp_dim))

### Building the model
cells = [GRUCell(num_units), GRUCell(num_units)]
rnn = RNN(cells, time_major=True, return_sequences=True)
final_layer = Dense(1, input_shape=(num_units,))

# Apply to inputs
last_state = rnn(inputs)
f = final_layer(last_state)

[derivs] = tf.gradients(f, inputs)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    grads = sess.run(derivs)

Just to warn any interested bystander who wants to compute second-order derivatives: using tf.gradients(tf.gradients(func, vars)) is not supported. There is also a function called tf.hessians, but replacing tf.gradients with tf.hessians in the code above did not work, and the resulting error was so long that I will not include it here. I will most likely open an issue on Github, which I will link here for anyone interested. For now, since I only stumbled upon an unsatisfying workaround, I am marking my own answer as solving my problem.
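
For reference, tf.hessians does run on simple feed-forward graphs (a minimal sketch of my own; as said above, it failed on the RNN model):

import tensorflow as tf

tf.reset_default_graph()
x = tf.constant([1., 2., 3.])
y = tf.reduce_sum(x * x)     # scalar function of x

# tf.hessians returns one hessian per tensor in xs
[hess] = tf.hessians(y, x)   # shape (3, 3); here 2 * identity

with tf.Session() as sess:
    print(sess.run(hess))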

Computing second-order derivatives

See this issue on Github.
