Training a neural network for word embeddings


Attached is a link to the file containing the entities. I want to train a neural network to represent each entity as a vector. Below is my training code:

import pandas as pd
import numpy as np

from numpy import array
from keras.preprocessing.text import one_hot

from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.models import Model
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import Input


from keras.layers.embeddings import Embedding
from sklearn.model_selection import train_test_split 

file_path = '/content/drive/My Drive/Colab Notebooks/Deep Learning/NLP/Data/entities.txt'
df = pd.read_csv(file_path, delimiter = '\t', engine='python', quoting = 3, header = None)
df.columns = ['Entity']
Entity = df['Entity']

X_train, X_test = train_test_split(Entity, test_size = 0.10)
print('Total Entities: {}'.format(len(Entity)))
print('Training Entities: {}'.format(len(X_train)))
print('Test Entities: {}'.format(len(X_test)))
vocab_size = len(Entity)
X_train_encode = [one_hot(d, vocab_size,lower=True, split=' ') for d in X_train]
X_test_encode = [one_hot(d, vocab_size,lower=True, split=' ') for d in X_test]
model = Sequential()
model.add(Embedding(input_length=1,input_dim=vocab_size, output_dim=100))
model.add(Flatten())
model.add(Dense(vocab_size, activation='softmax'))

model.compile(optimizer='adam', loss='mse', metrics=['acc'])
print(model.summary())

model.fit(X_train_encode, X_train_encode, epochs=20, batch_size=1000, verbose=1)

I get the following error when I try to run the code:

Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), but instead got the following list of 34826 arrays:
1 Answer

You are passing a list of NumPy arrays to model.fit. The following lines produce the lists of arrays for X_train_encode and X_test_encode:

X_train_encode = [one_hot(d, vocab_size,lower=True, split=' ') for d in X_train]
X_test_encode = [one_hot(d, vocab_size,lower=True, split=' ') for d in X_test]

Convert these lists to NumPy arrays before passing them to the model.fit method:

X_train_encode = np.array(X_train_encode)
X_test_encode = np.array(X_test_encode)
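
After this conversion, model.fit receives a single NumPy array rather than a list of 34826 separate per-sample arrays, which is exactly what the "Expected to see 1 array(s)" error is complaining about. Note that if some entities consist of more than one word, one_hot will return sequences of different lengths; in that case the sequences would also need to be padded (for example with the already-imported pad_sequences) before they can form a rectangular array.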

Also, I don't see the need to one_hot-encode X_train and X_test. The Embedding layer expects integers (word indices, in your case), not one-hot-encoded values of the word indices. So if X_train and X_test are arrays of word indices, you can feed them to the model.fit method directly.
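
As a minimal sketch (not part of the original answer), this is one way to map each entity string to its own integer index and feed those indices to the Embedding layer; entity_to_index and the *_ids variable names are illustrative, and it assumes each entity should receive exactly one index:

import numpy as np

# Build a vocabulary: one unique integer index per distinct entity string.
entity_to_index = {entity: idx for idx, entity in enumerate(Entity.unique())}

# Encode the train/test splits as plain integer word indices.
X_train_ids = np.array([entity_to_index[e] for e in X_train])
X_test_ids = np.array([entity_to_index[e] for e in X_test])

# These index arrays can be passed directly to model.fit; the Embedding layer
# looks up one output_dim-sized vector per index. vocab_size would then be
# len(entity_to_index).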

Edit:

You are currently using the 'mse' loss. Since the last layer is a softmax layer, a cross-entropy loss is more appropriate here. And because the targets are single integer class values (word indices) rather than one-hot vectors, sparse categorical cross-entropy should be used as the loss.

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc'])
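
Putting the suggested changes together, an illustrative end-to-end sketch (assuming each entry of X_train_encode is a single integer index, as discussed above, and keeping the original idea of training the model to predict its own input index) might look like this:

# Sketch only, combining the fixes above; assumes one integer index per entity
# so the encoded data forms an (n_samples, 1) array.
X_train_encode = np.array(X_train_encode).reshape(-1, 1)

model = Sequential()
model.add(Embedding(input_dim=vocab_size, output_dim=100, input_length=1))
model.add(Flatten())
model.add(Dense(vocab_size, activation='softmax'))

# Integer targets with a softmax output layer -> sparse categorical cross-entropy.
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc'])

# The entity indices serve as both inputs and targets, mirroring the original
# code; after training, model.layers[0].get_weights()[0] holds the learned
# vocab_size x 100 embedding matrix.
model.fit(X_train_encode, X_train_encode, epochs=20, batch_size=1000, verbose=1)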