I want to load the pre-trained word embeddings once before training, instead of loading them at every train step. I followed the steps in this post, but it raises this error:
You must feed a value for placeholder tensor 'word_embedding_placeholder' with dtype float and shape [2000002,300]
Here is the rough code:
embeddings_var = tf.Variable(tf.random_uniform([vocabulary_size, embedding_dim], -1.0, 1.0), trainable=False)
embedding_placeholder = tf.placeholder(tf.float32, [vocabulary_size, embedding_dim], name='word_embedding_placeholder')
embedding_init = embeddings_var.assign(embedding_placeholder)  # assign existing pre-trained word embeddings
batch_embedded = tf.nn.embedding_lookup(embedding_init, batch_ph)
sess = tf.Session()
train_steps = round(len(X_train) / BATCH_SIZE)
train_iterator, train_next_element = get_dataset_iterator(X_train, y_train, BATCH_SIZE, training_epochs)
sess.run(init_g)
sess.run(train_iterator.initializer)
_ = sess.run(embedding_init, feed_dict={embedding_placeholder: w2v})
for epoch in range(0, training_epochs):
    # Training steps
    for i in range(train_steps):
        X_train_input, y_train_input = sess.run(train_next_element)
        # actual (unpadded) lengths of the sequences in this batch
        seq_len = np.array([list(word_idx).index(PADDING_INDEX) if PADDING_INDEX in word_idx else len(word_idx)
                            for word_idx in X_train_input])
        train_loss, train_acc, _ = sess.run([loss, accuracy, optimizer],
                                            feed_dict={batch_ph: X_train_input,
                                                       target_ph: y_train_input,
                                                       seq_len_ph: seq_len,
                                                       keep_prob_ph: KEEP_PROB})
When I change the feed_dict in the training loop to:
train_loss, train_acc, _ = sess.run([loss, accuracy, optimizer],
                                    feed_dict={batch_ph: X_train_input,
                                               target_ph: y_train_input,
                                               seq_len_ph: seq_len,
                                               keep_prob_ph: KEEP_PROB,
                                               embedding_placeholder: w2v})
it works, but it is not elegant. Has anyone run into this problem?
Goal: I want to load the pre-trained embeddings once before training, instead of recomputing embedding_init on every step.
Presumably you use batch_embedded somewhere in your network, which means it feeds into your loss. So every time you call sess.run inside the loop, you recompute batch_embedded, and therefore embedding_init, which in turn requires embedding_placeholder to be fed. Instead, you can initialize the variable directly:
embeddings_var = tf.get_variable("embeddings_var",
                                 shape=[vocabulary_size, embedding_dim],
                                 initializer=tf.constant_initializer(w2v),
                                 trainable=False)
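If you would rather not bake the full w2v matrix into the graph (a [2000002, 300] float32 constant is roughly 2.4 GB, which can bump into TensorFlow's 2 GB GraphDef limit), a minimal sketch of an alternative, reusing the question's own names, is to keep the placeholder/assign pair but do the lookup against embeddings_var instead of embedding_init, so the assign op only needs to run once before the loop:
embeddings_var = tf.Variable(tf.random_uniform([vocabulary_size, embedding_dim], -1.0, 1.0), trainable=False)
embedding_placeholder = tf.placeholder(tf.float32, [vocabulary_size, embedding_dim], name='word_embedding_placeholder')
embedding_init = embeddings_var.assign(embedding_placeholder)
# look up from the variable, not from the assign op, so later sess.run calls
# no longer depend on embedding_placeholder being fed
batch_embedded = tf.nn.embedding_lookup(embeddings_var, batch_ph)

sess = tf.Session()
sess.run(init_g)
# run the assign exactly once, before the training loop
sess.run(embedding_init, feed_dict={embedding_placeholder: w2v})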