Grad-CAM with backpropagation for transfer learning in TensorFlow 2.0

Problem description (votes: 2, answers: 2)

I am getting an error when using gradient visualization with transfer learning in TF 2.0. The gradient visualization works on a model that does not use transfer learning.

I get the following error when running the code:

    assert str(id(x)) in tensor_dict, 'Could not compute output ' + str(x)
AssertionError: Could not compute output Tensor("block5_conv3/Identity:0", shape=(None, 14, 14, 512), dtype=float32)

The error occurs when I run the code below. I think there is a problem either with the naming convention or with how I connect the input and output of the base model (vgg16) to the layers I am adding. Any help is greatly appreciated!

"""
Broken example when grad_model is created. 
"""
!pip uninstall tensorflow
!pip install tensorflow==2.0.0
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
import matplotlib.pyplot as plt

IMAGE_PATH = '/content/cat.3.jpg'
LAYER_NAME = 'block5_conv3'
model_layer = 'vgg16'
CAT_CLASS_INDEX = 281

imsize = (224,224,3)

img = tf.keras.preprocessing.image.load_img(IMAGE_PATH, target_size=(224, 224))
plt.figure()
plt.imshow(img)
img = tf.io.read_file(IMAGE_PATH)
img = tf.image.decode_jpeg(img)
img = tf.cast(img, dtype=tf.float32)
# img = tf.keras.preprocessing.image.img_to_array(img)
img = tf.image.resize(img, (224,224))
img = tf.reshape(img, (1, 224,224,3))

input = layers.Input(shape=(imsize[0], imsize[1], imsize[2]))
base_model = tf.keras.applications.VGG16(include_top=False, weights='imagenet',
                                          input_shape=(imsize[0], imsize[1], imsize[2]))
# base_model.trainable = False
flat = layers.Flatten()
dropped = layers.Dropout(0.5)
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()

fc1 = layers.Dense(16, activation='relu', name='dense_1')
fc2 = layers.Dense(16, activation='relu', name='dense_2')
fc3 = layers.Dense(128, activation='relu', name='dense_3')
prediction = layers.Dense(2, activation='softmax', name='output')
for layr in base_model.layers:
    if 'block5' in layr.name:
        layr.trainable = True
    else:
        layr.trainable = False

x = base_model(input)
x = global_average_layer(x)
x = fc1(x)
x = fc2(x)
x = prediction(x)

model = tf.keras.models.Model(inputs = input, outputs = x)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss='binary_crossentropy',
                  metrics=['accuracy'])

This part of the code is where the error occurs. I am not sure what the correct way is to label the inputs and outputs.

# Create a graph that outputs target convolution and output
grad_model = tf.keras.models.Model(inputs = [model.input, model.get_layer(model_layer).input], 
                                   outputs=[model.get_layer(model_layer).get_layer(LAYER_NAME).output,
                                            model.output])

print(model.get_layer(model_layer).get_layer(LAYER_NAME).output)
# Get the score for target class
with tf.GradientTape() as tape:
    conv_outputs, predictions = grad_model(img)
    loss = predictions[:, 1]

The following section is used to plot the Grad-CAM heatmap.

print('Prediction shape:', predictions.get_shape())
# Extract filters and gradients
output = conv_outputs[0]
grads = tape.gradient(loss, conv_outputs)[0]

# Apply guided backpropagation
gate_f = tf.cast(output > 0, 'float32')
gate_r = tf.cast(grads > 0, 'float32')
guided_grads = gate_f * gate_r * grads

# Average gradients spatially
weights = tf.reduce_mean(guided_grads, axis=(0, 1))

# Build a ponderated map of filters according to gradients importance
cam = np.ones(output.shape[0:2], dtype=np.float32)

for index, w in enumerate(weights):
    cam += w * output[:, :, index]

# Heatmap visualization
cam = cv2.resize(cam.numpy(), (224, 224))
cam = np.maximum(cam, 0)
heatmap = (cam - cam.min()) / (cam.max() - cam.min())

cam = cv2.applyColorMap(np.uint8(255 * heatmap), cv2.COLORMAP_JET)

output_image = cv2.addWeighted(cv2.cvtColor(img[0].numpy().astype('uint8'), cv2.COLOR_RGB2BGR), 0.5, cam, 1, 0)

plt.figure()
plt.imshow(output_image)
plt.show()

I have also asked the TensorFlow team about this issue on GitHub: https://github.com/tensorflow/tensorflow/issues/37680

python gradient tensorflow2.0 backpropagation
2 Answers

1 vote

I figured it out. It works if you build the model by extending the vgg16 base model with your own layers, rather than inserting the base model into a new model like a layer. First build the model, and make sure to declare the input_tensor.

inp = layers.Input(shape=(imsize[0], imsize[1], imsize[2]))
base_model = tf.keras.applications.VGG16(include_top=False, weights='imagenet', input_tensor=inp,
                                          input_shape=(imsize[0], imsize[1], imsize[2]))

This way we do not have to include a line like x = base_model(inp) to indicate the input we want to feed in; that is already taken care of inside tf.keras.applications.VGG16(...).
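
A quick sanity check (my own sketch, not part of the original answer) confirms that the base model is now built directly on the Input tensor instead of wrapping it as a nested sub-model:

# Sketch: with input_tensor=inp, the VGG16 layers are wired directly onto our
# Input tensor, so its conv layers stay reachable by name from the final model.
print(base_model.input is inp)                    # expected: True in TF 2.0
print([l.name for l in base_model.layers[-2:]])   # ['block5_conv3', 'block5_pool']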

Instead of putting the vgg16 base model inside another model, it is easier to do Grad-CAM by adding layers on top of the base model itself. I grab the output of the last layer of VGG16 (with the top removed), which is the pooling layer.

block5_pool = base_model.get_layer('block5_pool')
x = global_average_layer(block5_pool.output)
x = fc1(x)
x = prediction(x)

model = tf.keras.models.Model(inputs = inp, outputs = x)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss='binary_crossentropy',
                  metrics=['accuracy'])

Now I grab the layer for visualization, LAYER_NAME = 'block5_conv3':

# Create a graph that outputs target convolution and output
grad_model = tf.keras.models.Model(inputs = [model.input], 
                                   outputs=[model.output, model.get_layer(LAYER_NAME).output])

print(model.get_layer(LAYER_NAME).output)
# Get the score for target class
with tf.GradientTape() as tape:
    predictions, conv_outputs = grad_model(img)
    loss = predictions[:, 1]
print('Prediction shape:', predictions.get_shape())
# Extract filters and gradients
output = conv_outputs[0]
grads = tape.gradient(loss, conv_outputs)[0]
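
From here the heatmap construction carries over unchanged from the question. For completeness, here is a minimal sketch of those remaining steps, written as a vectorized weighted sum (equivalent to the original per-channel loop, since the constant np.ones offset is removed by the min-max normalization; a small epsilon guards against division by zero):

# Guided backpropagation: keep only positive activations with positive gradients
gate_f = tf.cast(output > 0, 'float32')
gate_r = tf.cast(grads > 0, 'float32')
guided_grads = gate_f * gate_r * grads

# Average the gradients spatially to get one weight per filter, then build the
# class activation map as a weighted sum of the filter maps
weights = tf.reduce_mean(guided_grads, axis=(0, 1))        # shape (512,)
cam = tf.reduce_sum(weights * output, axis=-1).numpy()     # shape (14, 14)

# Upsample to the input size, clip negatives, and min-max normalize for display
cam = cv2.resize(cam, (224, 224))
cam = np.maximum(cam, 0)
heatmap = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)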

0 votes

We (myself and several members of a team working on a development project) found a similar problem in code implementing Grad-CAM from a tutorial.

That code did not work for a model that contains a VGG19 base model with some extra layers added on top of it. The problem is that the VGG19 base model was inserted into our model as a "layer", and apparently the Grad-CAM code did not know how to handle that; we got a "Graph disconnected..." error. Then, after some debugging (done by another team member, not by me), we managed to modify the original code so that it works for this kind of model that contains another model inside it. The idea is to add the inner model as an extra parameter of the GradCAM class. Since this may be helpful to others, I am adding the modified code below (we also renamed the GradCAM class to My_GradCAM).

class My_GradCAM:
    def __init__(self, model, classIdx, inner_model=None, layerName=None):
        self.model = model
        self.classIdx = classIdx
        self.inner_model = inner_model
        if self.inner_model is None:
            self.inner_model = model
        self.layerName = layerName 

[...]

        gradModel = tensorflow.keras.models.Model(inputs=[self.inner_model.inputs],
                  outputs=[self.inner_model.get_layer(self.layerName).output,
                  self.inner_model.output])                                   

The class can then be instantiated by passing the inner model as an extra argument, for example:

cam = My_GradCAM(model, None, inner_model=model.get_layer("vgg19"), layerName="block5_pool")

I hope this helps.

Edit: Credit goes to Mirtha Lucas for doing the debugging and finding the solution.
