How to fix an error after the first epoch in TensorFlow?

Votes: 1 · Answers: 1

Python 3.7.3, Tensorflow 2.0.0-alpha0. I am trying to use the imdb classifier in TensorFlow, working from the code at https://www.coursera.org/learn/natural-language-processing-tensorflow/lecture/Q1Ln5/notebook-for-lesson-1.

But I get the following error after the first epoch:

Train on 25000 samples, validate on 25000 samples
Epoch 1/10
24256/25000 [============================>.] - ETA: 0s - loss: 0.4815 - accuracy: 0.7535Traceback (most recent call last):
  File "tf2.py", line 78, in <module>
    model.fit(padded, training_labels_final, epochs=num_epochs, validation_data=(testing_padded, testing_labels_final))
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 873, in fit
    steps_name='steps_per_epoch')
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/engine/training_arrays.py", line 398, in model_iteration
    steps_name='validation_steps')
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/engine/training_arrays.py", line 352, in model_iteration
    batch_outs = f(ins_batch)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/backend.py", line 3211, in __call__
    value = ops.convert_to_tensor(value, dtype=tensor.dtype)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1050, in convert_to_tensor
    return convert_to_tensor_v2(value, dtype, preferred_dtype, name)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1108, in convert_to_tensor_v2
    as_ref=False)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1186, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 304, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 245, in constant
    allow_broadcast=True)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 253, in _constant_impl
    t = convert_to_eager_tensor(value, ctx, dtype)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 114, in convert_to_eager_tensor
    return ops.EagerTensor(value, handle, device, dtype)
TypeError: float() argument must be a string or a number, not 'method'

Here is my code:

import tensorflow as tf
print(tf.__version__)

tf.compat.v1.enable_eager_execution()

import tensorflow_datasets as tfds
imdb, info = tfds.load("imdb_reviews", with_info = True, as_supervised = True)

import numpy as np
train_data, test_data = imdb['train'], imdb['test']
training_sentences = []
training_labels = []
testing_sentences = []
testing_labels = [] 

for s, l in train_data:
    training_sentences.append(str(s.numpy()))
    training_labels.append(l.numpy())

for s, l in test_data:
    testing_sentences.append(str(s.numpy()))
    testing_labels.append(l.numpy)

training_labels_final = np.array(training_labels)
testing_labels_final = np.array(testing_labels)

vocab_size = 10000
embedding_dim = 16
max_length = 120
trunc_type = 'post'
oov_tok = "<OOV>"

from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

tokenizer = Tokenizer(num_words = vocab_size, oov_token = oov_tok)
tokenizer.fit_on_texts(training_sentences)
word_index = tokenizer.word_index  # word -> integer index dictionary built from training_sentences

sequences = tokenizer.texts_to_sequences(training_sentences)

padded = pad_sequences(sequences, maxlen = max_length, truncating = trunc_type)

testing_sequences = tokenizer.texts_to_sequences(testing_sentences)

testing_padded = pad_sequences(testing_sequences, maxlen = max_length)

reverse_word_index = dict([(value, key) for (key,value) in word_index.items()])
def decode_review(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])

print(decode_review(padded[0]))
print(training_sentences[0])

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(6, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

model.compile(loss="binary_crossentropy", optimizer='adam', metrics=['accuracy'])
model.summary()

num_epochs = 10
model.fit(padded, training_labels_final, epochs=num_epochs, validation_data=(testing_padded, testing_labels_final))

How can I fix this error?

python tensorflow machine-learning keras python-3.7
1 Answer

0 votes

Providing the solution here in the answer section for the benefit of the community, even though it was already given in the comments (thanks to giser_yugang).

The problem was resolved by changing the code from:

for s, l in test_data:
    testing_sentences.append(str(s.numpy()))
    testing_labels.append(l.numpy)

to:

for s, l in test_data:
    testing_sentences.append(str(s.numpy()))
    testing_labels.append(l.numpy())
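
Why this works: without the parentheses, l.numpy appends the bound method object itself rather than the label value, so np.array(testing_labels) produces an array of dtype object that Keras cannot convert to a float tensor. That is also why training gets through the first epoch before crashing: the broken array holds the validation labels, and Keras only converts them when validation runs at the end of epoch 1. A minimal standalone sketch of the difference:

import numpy as np
import tensorflow as tf

label = tf.constant(1)                           # a scalar label tensor
bad = np.array([label.numpy, label.numpy])       # bound methods -> dtype=object
good = np.array([label.numpy(), label.numpy()])  # label values  -> dtype=int32

print(bad.dtype)   # object; feeding this to model.fit raises the TypeError above
print(good.dtype)  # int32; converts cleanly to a tensor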

The complete working code follows:

import tensorflow as tf
print(tf.__version__)

tf.compat.v1.enable_eager_execution()

import tensorflow_datasets as tfds
imdb, info = tfds.load("imdb_reviews", with_info = True, as_supervised = True)

import numpy as np
train_data, test_data = imdb['train'], imdb['test']
training_sentences = []
training_labels = []
testing_sentences = []
testing_labels = [] 

for s, l in train_data:
    training_sentences.append(str(s.numpy()))
    training_labels.append(l.numpy())

for s, l in test_data:
    testing_sentences.append(str(s.numpy()))
    testing_labels.append(l.numpy())  # note the (): append the label value, not the bound method

training_labels_final = np.array(training_labels)
testing_labels_final = np.array(testing_labels)

vocab_size = 10000
embedding_dim = 16
max_length = 120
trunc_type = 'post'
oov_tok = "<OOV>"

from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

tokenizer = Tokenizer(num_words = vocab_size, oov_token = oov_tok)
tokenizer.fit_on_texts(training_sentences)
word_index = tokenizer.word_index  # word -> integer index dictionary built from training_sentences

sequences = tokenizer.texts_to_sequences(training_sentences)

padded = pad_sequences(sequences, maxlen = max_length, truncating = trunc_type)

testing_sequences = tokenizer.texts_to_sequences(testing_sentences)

testing_padded = pad_sequences(testing_sequences, maxlen = max_length)

reverse_word_index = dict([(value, key) for (key,value) in word_index.items()])
def decode_review(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])

print(decode_review(padded[0]))
print(training_sentences[0])

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(6, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

model.compile(loss="binary_crossentropy", optimizer='adam', metrics=['accuracy'])
model.summary()

num_epochs = 10
model.fit(padded, training_labels_final, epochs=num_epochs, validation_data=(testing_padded, testing_labels_final))

Output:

2.2.0

? ? b this was an absolutely terrible movie don't be <OOV> in by christopher walken or michael <OOV> both are great actors but this must simply be their worst role in history even their great acting could not redeem this movie's ridiculous storyline this movie is an early nineties us propaganda piece the most pathetic scenes were those when the <OOV> rebels were making their cases for <OOV> maria <OOV> <OOV> appeared phony and her pseudo love affair with walken was nothing but a pathetic emotional plug in a movie that was devoid of any real meaning i am disappointed that there are movies like this ruining <OOV> like christopher <OOV> good name i could barely sit through it
b"This was an absolutely terrible movie. Don't be lured in by Christopher Walken or Michael Ironside. Both are great actors, but this must simply be their worst role in history. Even their great acting could not redeem this movie's ridiculous storyline. This movie is an early nineties US propaganda piece. The most pathetic scenes were those when the Columbian rebels were making their cases for revolutions. Maria Conchita Alonso appeared phony, and her pseudo-love affair with Walken was nothing but a pathetic emotional plug in a movie that was devoid of any real meaning. I am disappointed that there are movies like this, ruining actor's like Christopher Walken's good name. I could barely sit through it."

Epoch 1/10
782/782 [==============================] - 5s 6ms/step - loss: 0.4912 - accuracy: 0.7446 - val_loss: 0.3491 - val_accuracy: 0.8471
Epoch 2/10
782/782 [==============================] - 5s 6ms/step - loss: 0.2353 - accuracy: 0.9127 - val_loss: 0.3714 - val_accuracy: 0.8382
Epoch 3/10
782/782 [==============================] - 5s 6ms/step - loss: 0.0896 - accuracy: 0.9772 - val_loss: 0.4480 - val_accuracy: 0.8261
Epoch 4/10
782/782 [==============================] - 5s 6ms/step - loss: 0.0226 - accuracy: 0.9970 - val_loss: 0.5488 - val_accuracy: 0.8219
Epoch 5/10
782/782 [==============================] - 5s 6ms/step - loss: 0.0057 - accuracy: 0.9996 - val_loss: 0.5993 - val_accuracy: 0.8240
Epoch 6/10
782/782 [==============================] - 5s 6ms/step - loss: 0.0018 - accuracy: 1.0000 - val_loss: 0.6491 - val_accuracy: 0.8255
Epoch 7/10
782/782 [==============================] - 5s 7ms/step - loss: 8.2380e-04 - accuracy: 1.0000 - val_loss: 0.6869 - val_accuracy: 0.8262
Epoch 8/10
782/782 [==============================] - 5s 6ms/step - loss: 4.7165e-04 - accuracy: 1.0000 - val_loss: 0.7288 - val_accuracy: 0.8264
Epoch 9/10
782/782 [==============================] - 5s 6ms/step - loss: 2.6724e-04 - accuracy: 1.0000 - val_loss: 0.7653 - val_accuracy: 0.8261
Epoch 10/10
782/782 [==============================] - 5s 6ms/step - loss: 1.5851e-04 - accuracy: 1.0000 - val_loss: 0.8009 - val_accuracy: 0.8263
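
A side note on the log: the model overfits heavily; training accuracy reaches 1.0000 while val_loss climbs from 0.3491 to 0.8009. If you want training to stop once validation stops improving, here is a minimal sketch using the standard tf.keras.callbacks.EarlyStopping callback (the patience value is an illustrative choice, not part of the original code):

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',            # watch validation loss
    patience=2,                    # stop after 2 epochs without improvement
    restore_best_weights=True)     # roll back to the weights of the best epoch

model.fit(padded, training_labels_final,
          epochs=num_epochs,
          validation_data=(testing_padded, testing_labels_final),
          callbacks=[early_stop])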