Custom Keras layer with trainable scalars

Problem description

I am (trying to) write a custom Keras layer that implements the following component:

x -> a*x + b*ReLU(x)

where a and b are trainable weights. Here is what I have tried so far:

class Custom_ReLU(tf.keras.layers.Layer):

    def __init__(self, units=d):
        super(Custom_ReLU, self).__init__()
        self.units = units

    def build(self, input_shape):
        self.a1 = self.add_weight(shape=[1],
                                initializer = 'random_uniform',
                                trainable=True)
        self.a2 = self.add_weight(shape=[1],
                                initializer = 'random_uniform',
                                trainable=True)

    def call(self,inputs):
        return self.a1*inputs + self.a2*(tf.nn.relu(inputs))

However, I get an error. I think the problem is that I do not know how to define trainable "scalars"... Am I right about that, and if so, how can it be done?

Edit/Addition:

Here is how I try to build my vanilla feed-forward architecture, with the ReLU replaced by "Custom_ReLU":

# Build Vanilla Network
inputs_ffNN = tf.keras.Input(shape=(d,))
x_ffNN = fullyConnected_Dense(d)(inputs_ffNN)
for i in range(Depth):
    x_HTC = Custom_ReLU(x_ffNN)
    x_ffNN = fullyConnected_Dense(d)(x_ffNN)
outputs_ffNN = fullyConnected_Dense(D)(x_ffNN)
ffNN = tf.keras.Model(inputs_ffNN, outputs_ffNN)

Here is a summary of the error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-27-8bf6fc4ae89d> in <module>
      7     #x_HTC = tf.nn.relu(x_HTC)
      8     x_HTC = BounceLU(x_HTC)
----> 9     x_HTC = HTC(d)(x_HTC)
     10 outputs_HTC = HTC(D)(x_HTC)
     11 ffNN_HTC = tf.keras.Model(inputs_HTC, outputs_HTC)

~/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
    816         # Eager execution on data tensors.
    817         with backend.name_scope(self._name_scope()):
--> 818           self._maybe_build(inputs)
    819           cast_inputs = self._maybe_cast_inputs(inputs)
    820           with base_layer_utils.autocast_context_manager(

~/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py in _maybe_build(self, inputs)
   2114         # operations.
   2115         with tf_utils.maybe_init_scope(self):
-> 2116           self.build(input_shapes)
   2117       # We must set self.built since user defined build functions are not
   2118       # constrained to set self.built.

<ipython-input-5-21623825ed35> in build(self, input_shape)
      5 
      6     def build(self, input_shape):
----> 7         self.w = self.add_weight(shape=(input_shape[-1], self.units),
      8                                initializer='random_normal',
      9                                trainable=False)

TypeError: 'NoneType' object is not subscriptable
1 Answer

I have no problem using your layer:

import numpy as np
import tensorflow as tf

class Custom_ReLU(tf.keras.layers.Layer):

    def __init__(self):
        super(Custom_ReLU, self).__init__()

        # shape=[1] creates a single trainable scalar for each weight
        self.a1 = self.add_weight(shape=[1],
                                initializer = 'random_uniform',
                                trainable=True)
        self.a2 = self.add_weight(shape=[1],
                                initializer = 'random_uniform',
                                trainable=True)

    def call(self, inputs):
        return self.a1*inputs + self.a2*(tf.nn.relu(inputs))
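
The two scalars can equally well be created in build, as in your original attempt; build only matters when the weight shapes depend on the input shape, which is not the case here. A minimal sketch of that variant (the name Custom_ReLU_build is mine), assuming TF 2.x:

class Custom_ReLU_build(tf.keras.layers.Layer):

    def build(self, input_shape):
        # build() runs the first time the layer is called on an input;
        # shape=[1] again gives one trainable scalar per weight
        self.a1 = self.add_weight(shape=[1],
                                  initializer='random_uniform',
                                  trainable=True)
        self.a2 = self.add_weight(shape=[1],
                                  initializer='random_uniform',
                                  trainable=True)

    def call(self, inputs):
        return self.a1*inputs + self.a2*tf.nn.relu(inputs)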

Usage:

d = 5
inputs_ffNN = tf.keras.Input(shape=(d,))
x_ffNN = tf.keras.layers.Dense(10)(inputs_ffNN)
x_HTC = Custom_ReLU()(x_ffNN)
outputs_ffNN = tf.keras.layers.Dense(1)(x_HTC)

ffNN = tf.keras.Model(inputs_ffNN, outputs_ffNN)
ffNN.compile('adam', 'mse')

ffNN.fit(np.random.uniform(0,1, (10,5)), np.random.uniform(0,1, 10), epochs=10)

Here is the full example: https://colab.research.google.com/drive/1n4jIsY3qEDvtobofQaUPO3ysUW9bQWjs?usp=sharing
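
To use the layer inside a deeper stack like the one in your edit, instantiate it and then call it on the tensor, i.e. Custom_ReLU()(x_ffNN) rather than Custom_ReLU(x_ffNN). A rough sketch with plain tf.keras.layers.Dense standing in for your fullyConnected_Dense (the values of d, D and Depth below are placeholders):

d, D, Depth = 5, 1, 3  # example sizes, replace with your own

inputs_ffNN = tf.keras.Input(shape=(d,))
x_ffNN = tf.keras.layers.Dense(d)(inputs_ffNN)
for i in range(Depth):
    x_ffNN = Custom_ReLU()(x_ffNN)            # instantiate, then call on the tensor
    x_ffNN = tf.keras.layers.Dense(d)(x_ffNN)
outputs_ffNN = tf.keras.layers.Dense(D)(x_ffNN)
ffNN = tf.keras.Model(inputs_ffNN, outputs_ffNN)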
