Keras LSTM: do larger features overwhelm smaller ones?

Votes: 1 | Answers: 1

I've had this suspicion for the longest time but haven't been able to figure out whether it's actually the case, so here is the scenario:

I'm trying to build a model on 3 features, fed through 3 different inputs:

  1. A text sequence
  2. A float
  3. A float

All three together make up one time step. But because I use GloVe with 100 dimensions to vectorize the text sequence, a 20-word text sequence ends up with length 2000. The total input length per step is therefore 2002 (at each step a matrix of shape (1, 2002) is fed in, 2000 of which come from a single feature).
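
To make the shape arithmetic concrete, here is a minimal numpy sketch of one step's input (my illustration, not from the post; the values are made up and only the shapes matter):

import numpy as np

SEQ_LEN, EMB_DIM = 20, 100  # 20-word sequence, 100-dim GloVe vectors

text_step = np.random.rand(SEQ_LEN, EMB_DIM).reshape(-1)  # flattened: 2000 values
price_step = np.array([42.0])                             # 1 value (hypothetical)
volume_step = np.array([1000.0])                          # 1 value (hypothetical)

step = np.concatenate([text_step, price_step, volume_step])
print(step.shape)  # (2002,) -- 2000 of these come from the text feature alone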

Does the text sequence overwhelm the two floats, so that whatever their values are, they have no bearing on the prediction? If so, what can I do to fix this? Maybe manually weight how much each feature should contribute? The code is attached below.

from keras.backend import int_shape
from keras.layers import (Convolution1D, Dense, Dropout, Embedding, Input,
                          LSTM, MaxPool1D, Reshape, concatenate)
from keras.models import Model


def build_model(embedding_matrix) -> Model:
    text = Input(shape=(9, news_text.shape[1]), name='text')
    price = Input(shape=(9, 1), name='price')
    volume = Input(shape=(9, 1), name='volume')

    text_layer = Embedding(
        embedding_matrix.shape[0],
        embedding_matrix.shape[1],
        weights=[embedding_matrix]
    )(text)
    text_layer = Dropout(0.2)(text_layer)
    # Flatten the vectorized text matrix
    text_layer = Reshape((9, int_shape(text_layer)[2] * int_shape(text_layer)[3]))(text_layer)

    inputs = concatenate([
        text_layer,
        price,
        volume
    ])

    # Convolve over the 9 time steps, then pool and feed the LSTM stack
    output = Convolution1D(128, 5, activation='relu')(inputs)
    output = MaxPool1D(pool_size=4)(output)
    output = LSTM(units=128, dropout=0.2, return_sequences=True)(output)
    output = LSTM(units=128, dropout=0.2, return_sequences=True)(output)
    output = LSTM(units=128, dropout=0.2)(output)
    output = Dense(units=2, activation='linear', name='output')(output)

    model = Model(
        inputs=[text, price, volume],
        outputs=[output]
    )

    model.compile(optimizer='adam', loss='mean_squared_error')

    return model

Edit: note that the shape of the input to the LSTM is (?, 9, 2002), which means the 2000 values coming from the text are now treated as 2000 independent features.
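
One quick way to verify this (my own suggested check, not part of the original post) is to build the model and print every layer's output shape:

model = build_model(embedding_matrix)
model.summary()
for layer in model.layers:
    print(layer.name, layer.output_shape)  # the concatenate layer shows (None, 9, 2002)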

python keras deep-learning lstm recurrent-neural-network
1 Answer

1 vote

As I mentioned in the comments, one approach is to use a two-branch model, where one branch processes the text data and the other processes the two float features. At the end, the outputs of the two branches are merged:

from keras.layers import (Convolution1D, Dense, Embedding, Input, LSTM,
                          concatenate)
from keras.models import Model

# Branch one: process text data
text_input = Input(shape=(news_text.shape[1],), name='text')

text_emb = Embedding(embedding_matrix.shape[0], embedding_matrix.shape[1],
                     weights=[embedding_matrix])(text_input)

# you may alternatively use only Conv1D + MaxPool1D or
# stack multiple LSTM layers on top of each other or
# use a combination of Conv1D, MaxPool1D and LSTM
text_conv = Convolution1D(128, 5, activation='relu')(text_emb)
text_lstm = LSTM(units=128, dropout=0.2)(text_conv)

# Branch two: process float features
price_input = Input(shape=(9, 1), name='price')
volume_input = Input(shape=(9, 1), name='volume')

pv = concatenate([price_input, volume_input])

# you can also stack multiple LSTM layers on top of each other
pv_lstm = LSTM(units=128, dropout=0.2)(pv)

# merge output of branches
text_pv = concatenate([text_lstm, pv_lstm])

output = Dense(units=2, activation='linear', name='output')(text_pv)

model = Model(
    inputs=[text_input, price_input, volume_input],
    outputs=[output]
)
model.compile(optimizer='adam', loss='mean_squared_error')
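
As a quick smoke test of the wiring (my own sketch; it assumes news_text and embedding_matrix from the question, and uses random dummy data):

import numpy as np

n_samples = 4
x_text = np.random.randint(0, embedding_matrix.shape[0],
                           size=(n_samples, news_text.shape[1]))
x_price = np.random.rand(n_samples, 9, 1)
x_volume = np.random.rand(n_samples, 9, 1)
y = np.random.rand(n_samples, 2)

model.fit({'text': x_text, 'price': x_price, 'volume': x_volume}, y,
          epochs=1, batch_size=2)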

As noted in the comments in the code, this is just a simple example. You may need to add or remove layers or regularization, and to tune the hyperparameters.
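
For instance (one option among many, sketched rather than prescribed), Keras LSTM layers accept recurrent dropout and weight regularizers in addition to plain dropout:

from keras.regularizers import l2

pv_lstm = LSTM(units=128,
               dropout=0.2,                       # dropout on the layer inputs
               recurrent_dropout=0.2,             # dropout on the recurrent state
               kernel_regularizer=l2(1e-4))(pv)   # L2 penalty on the input weights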
