Why does including the MSE between the histograms of the true and predicted images in the loss function cause a bincount error?


I am having a problem with the loss of a Keras model, and I really do not know where it could come from. To summarize, if I add the following term to the loss:

k.sum(k.square(hist(y_true)-hist(y_pred)))

then I get the error: "Input arr must be non-negative!"

The Keras version is 2.2.4 and the TensorFlow version is 1.15.0, with Python 3.6.9. The Keras model looks like this:


Layer (type)                  Output Shape              Param #
=================================================================
input_1 (InputLayer)          (None, 256, 256, 8)       0
down_conv_1 (Conv2D)          (None, 256, 256, 16)      4624
down_pool_1 (MaxPooling2D)    (None, 64, 64, 16)        0
down_conv_2 (Conv2D)          (None, 64, 64, 32)        73760
down_pool_2 (MaxPooling2D)    (None, 16, 16, 32)        0
down_conv_3 (Conv2D)          (None, 16, 16, 64)        73792
down_pool_3 (MaxPooling2D)    (None, 4, 4, 64)          0
down_conv_4 (Conv2D)          (None, 4, 4, 128)         8320
code (MaxPooling2D)           (None, 1, 1, 128)         0
up_conv_1 (Conv2DTranspose)   (None, 2, 2, 1024)        18875392
up_batchnorm_1 (BatchNormali  (None, 2, 2, 1024)        4096
up_conv_2 (Conv2DTranspose)   (None, 4, 4, 512)         75497984
up_batchnorm_2 (BatchNormali  (None, 4, 4, 512)         2048
up_conv_3 (Conv2DTranspose)   (None, 8, 8, 256)         4718848
up_batchnorm_3 (BatchNormali  (None, 8, 8, 256)         1024
up_conv_4 (Conv2DTranspose)   (None, 16, 16, 128)       1179776
up_batchnorm_4 (BatchNormali  (None, 16, 16, 128)       512
up_conv_5 (Conv2DTranspose)   (None, 32, 32, 64)        73792
up_batchnorm_5 (BatchNormali  (None, 32, 32, 64)        256
up_conv_6 (Conv2DTranspose)   (None, 64, 64, 32)        18464
up_batchnorm_6 (BatchNormali  (None, 64, 64, 32)        128
up_conv_7 (Conv2DTranspose)   (None, 128, 128, 16)      4624
up_batchnorm_7 (BatchNormali  (None, 128, 128, 16)      64
up_conv_8 (Conv2DTranspose)   (None, 256, 256, 8)       1160
up_batchnorm_8 (BatchNormali  (None, 256, 256, 8)       32
=================================================================
Total params: 100,538,696
Trainable params: 100,534,616
Non-trainable params: 4,080
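
For reference, below is a minimal Keras sketch that reproduces the layer names, output shapes and parameter counts of this summary. It is not taken from the original post: the kernel sizes, pool sizes, strides, paddings and the absence of activations are assumptions inferred from the parameter counts above.

from keras.layers import Input, BatchNormalization
from keras.layers.convolutional import Conv2D, Conv2DTranspose
from keras.layers.pooling import MaxPooling2D
from keras.models import Model

inp = Input(shape=(256, 256, 8))

# Encoder: Conv2D blocks, each followed by 4x4 max pooling, down to a 1x1x128 code.
x = Conv2D(16, 6, padding='same', name='down_conv_1')(inp)
x = MaxPooling2D(4, name='down_pool_1')(x)
x = Conv2D(32, 12, padding='same', name='down_conv_2')(x)
x = MaxPooling2D(4, name='down_pool_2')(x)
x = Conv2D(64, 6, padding='same', name='down_conv_3')(x)
x = MaxPooling2D(4, name='down_pool_3')(x)
x = Conv2D(128, 1, padding='same', name='down_conv_4')(x)
x = MaxPooling2D(4, name='code')(x)

# Decoder: strided transposed convolutions, each followed by batch normalization,
# doubling the spatial size at every step back up to 256x256x8.
decoder_spec = [(1024, 12), (512, 12), (256, 6), (128, 6), (64, 3), (32, 3), (16, 3), (8, 3)]
for i, (filters, kernel) in enumerate(decoder_spec, start=1):
    x = Conv2DTranspose(filters, kernel, strides=2, padding='same', name='up_conv_%d' % i)(x)
    x = BatchNormalization(name='up_batchnorm_%d' % i)(x)

autoencoder = Model(inp, x)
autoencoder.summary()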


I compile it as follows:

custom_adam = optimizers.Adam(0.25, beta_1=0.9, beta_2=0.999, amsgrad=False)
autoencoder.compile(optimizer=custom_adam, loss=cl.my_loss, metrics = ['accuracy'])

where cl.my_loss is defined in another file as:


from keras import backend as k
from keras import regularizers, optimizers
from keras.layers import Input, BatchNormalization, UpSampling2D, Dense, Flatten, Reshape
from keras.layers.convolutional import Conv2D, Conv2DTranspose
from keras.layers.pooling import MaxPooling2D
from keras.models import Model
from keras.losses import mean_squared_error as MSE


from tensorflow.spectral import rfft2d    
import tensorflow as tf
import tensorflow_probability as tfp

def hist(tensor):  
    """  
    Calculates the histogram of the image tensor  
    :param tensor: image tensor
    :return: histogram tensor with as many channels as the image. Each channel   
    is a 1D vector with the same size as the edges vector  
    """  
    edges = tf.range(0., 1000., 50.)  
    return tfp.stats.histogram(tensor, edges, axis=[0, 1], extend_upper_interval=True)  

def MSE_hist(y_true, y_pred):  
    """  
    mean squared error btw the histograms of the true and the predicted images
    """  
    return MSE(hist(y_true), hist(y_pred))  

def fft_bw_image(img):
    """
    Calculates the 2D Fourier Transform of a one channel image tensor
    :param img: 2D tensor
    """
    f = tf.spectral.rfft2d(img)
    f_abs = tf.math.abs(f)
    split0, split1 = tf.split(f_abs, [1, 1], axis=0)
    return split0

def my_loss(y_true, y_pred):
    """
    Define a custom loss function with a MSE term, a Fourier transform term a
    histogram term
    :param y_true: ground truth
    :param y_pred: predicted tensor
    """
    mse = MSE(y_true, y_pred)
    fft = MSE_fft(y_true, y_pred)
    hist = MSE_hist(y_true, y_pred)

    #Calculate orders of magnitude
    o_hist = k.max(k.max(y_true))*256*256*8
    o_MSE  = k.mean(y_true)
    o_fft  = k.max(tf.math.abs(rfft2d(y_true)))

    return mse/o_MSE + 0.3*fft/o_fft + 1e-15*hist/o_hist 
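
As a side note, the hist helper above can be exercised on its own. The following standalone sketch is not part of the original post (the toy tensor and the TF 1.15 session setup are illustrative assumptions); it only shows what the helper returns: the edges tf.range(0., 1000., 50.) define 20 edges and therefore 19 bins per remaining dimension, and extend_lower_interval is left at its default of False.

import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

def hist(tensor):
    # Same helper as above.
    edges = tf.range(0., 1000., 50.)
    return tfp.stats.histogram(tensor, edges, axis=[0, 1], extend_upper_interval=True)

# A toy "image" of shape height x width x channels with values inside [0, 1000).
toy = tf.constant(np.random.uniform(0., 999., size=(8, 8, 3)), dtype=tf.float32)

with tf.Session() as sess:
    counts = sess.run(hist(toy))
    print(counts.shape)        # (19, 3): one 1-D histogram per channel
    print(counts.sum(axis=0))  # each channel's counts sum to 8 * 8 = 64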

Here is the full output of history = model.fit(np.array(data), np.array(data), epochs=5000), where data is a list of 4 images of size 256x256x8:

bincount_op.cc:111 : Invalid argument: Input arr must be non-negative!
2020-01-07 19:57:37.436533: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at bincount_op.cc:111 : Invalid argument: Input arr must be non-negative!
2020-01-07 19:57:37.436605: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at bincount_op.cc:111 : Invalid argument: Input arr must be non-negative!
Traceback (most recent call last):
  File "main.py", line 30, in <module>
    history = model.fit(np.array(data), np.array(data), epochs=5000)
  File "/opt/anaconda3/envs/ML/lib/python3.6/site-packages/keras/engine/training.py", line 1039, in fit
    validation_steps=validation_steps)
  File "/opt/anaconda3/envs/ML/lib/python3.6/site-packages/keras/engine/training_arrays.py", line 199, in fit_loop
    outs = f(ins_batch)
  File "/opt/anaconda3/envs/ML/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2715, in __call__
    return self._call(inputs)
  File "/opt/anaconda3/envs/ML/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2675, in _call
    fetched = self._callable_fn(*array_vals)
  File "/opt/anaconda3/envs/ML/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1472, in __call__
    run_metadata_ptr)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input arr must be non-negative!
	 [[{{node loss/up_batchnorm_8_loss/histogram/count_integers/map/while/bincount/Bincount}}]]

Thanks in advance for your help and comments.


Tags: python, tensorflow, keras, deep-learning, loss
1 Answer
First of all, we should make sure that gradients flow well through the histogram.
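
A minimal sketch of that check, in TF 1.15 graph mode (not from the original answer; the placeholder shapes simply mirror the model above):

import tensorflow as tf
import tensorflow_probability as tfp
from keras import backend as k

def hist(tensor):
    # Same helper as in the question.
    edges = tf.range(0., 1000., 50.)
    return tfp.stats.histogram(tensor, edges, axis=[0, 1], extend_upper_interval=True)

y_true = tf.placeholder(tf.float32, shape=(None, 256, 256, 8))
y_pred = tf.placeholder(tf.float32, shape=(None, 256, 256, 8))

hist_term = k.sum(k.square(hist(y_true) - hist(y_pred)))

# Binning values into integer counts is piecewise constant in y_pred, so the
# gradient of the histogram term is expected to come back as None (or all zeros);
# in that case the term cannot by itself push the weights in any direction.
print(tf.gradients(hist_term, [y_pred]))

If the printed gradient is indeed None, the histogram term contributes nothing to training, independently of the "Input arr must be non-negative!" runtime error.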