InvalidArgumentError during word2vec training in Keras even though the vocab size is max index + 1
See the network architecture summary below:
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_11 (InputLayer)           (None, 1)            0
__________________________________________________________________________________________________
input_12 (InputLayer)           (None, 1)            0
__________________________________________________________________________________________________
embedding_11 (Embedding)        (None, 1, 300)       1138500     input_11[0][0]
__________________________________________________________________________________________________
embedding_12 (Embedding)        (None, 1, 300)       1138500     input_12[0][0]
__________________________________________________________________________________________________
dot_6 (Dot)                     (None, 1, 1)         0           embedding_11[0][0]
                                                                 embedding_12[0][0]
__________________________________________________________________________________________________
reshape_6 (Reshape)             (None, 1)            0           dot_6[0][0]
__________________________________________________________________________________________________
activation_5 (Activation)       (None, 1)            0           reshape_6[0][0]
==================================================================================================
Total params: 2,277,000
Trainable params: 2,277,000
Non-trainable params: 0
Here is the relevant part of the code:
n_epochs = 5
for epoch in range(n_epochs):
    loss = 0.
    for i, doc in enumerate(X_train_tokens):
        data, labels = skipgrams(sequence=doc, vocabulary_size=vocab_size, window_size=4)
        x = [np.array(x) for x in zip(*data)]
        y = np.array(labels, dtype=np.int32)
        if x:
            loss += model.train_on_batch(x, y)
    print('Epoch:', epoch, '\t loss:', loss)
I get the following error:
Deleting the underlying status object from memory otherwise it would stay alive as there is a reference to status from the traceback due to InvalidArgumentError: indices[7,0] = 3795 is not in [0, 3795)
I solved the problem through this link: InvalidArgumentError: indices[26,0] = 5001 is not in [0, 5001]
Let me know whether it works for you.
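For reference, here is a minimal sketch of the fix the linked answer describes, using the Keras `Tokenizer` and `skipgrams` utilities (the corpus and variable names here are illustrative, not from the original code). Because `Tokenizer` assigns word indices starting at 1, the largest index equals `len(word_index)`, so the `Embedding` layer's `input_dim` and the `vocabulary_size` passed to `skipgrams` must both be `len(word_index) + 1`; otherwise a lookup of the largest index fails with exactly the "indices[i,0] = N is not in [0, N)" error above:

```python
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import skipgrams

# Toy corpus; in the question this would be the documents behind X_train_tokens.
texts = ["the quick brown fox jumps over the lazy dog"]

tokenizer = Tokenizer()
tokenizer.fit_on_texts(texts)

# Tokenizer indices start at 1, so the largest assigned index is
# len(word_index). The Embedding layer therefore needs
# input_dim = len(word_index) + 1 so that index fits in [0, input_dim).
vocab_size = len(tokenizer.word_index) + 1
max_index = max(tokenizer.word_index.values())
assert max_index == vocab_size - 1  # largest index still inside the table

# Pass the same vocab_size to skipgrams so its negative samples also
# stay inside the embedding table.
for doc in tokenizer.texts_to_sequences(texts):
    pairs, labels = skipgrams(sequence=doc, vocabulary_size=vocab_size, window_size=4)
    if pairs:
        assert np.max(pairs) < vocab_size  # every index is a valid lookup
```

In the question's summary, the embedding has 1,138,500 parameters with 300-dimensional vectors, i.e. `input_dim = 3795`, while the data contains the index 3795 itself, which is why increasing `input_dim` by one resolves the error.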