Converting a tf.data.Dataset to a 4D tensor

Problem description · 0 votes · 2 answers

I am just starting to study TensorFlow and ran into this problem: when I feed a tf.data.Dataset to model.fit I get an error

ValueError: Error when checking input: expected conv2d_input to have 4 dimensions, but got array with shape (28, 28, 1)

I understand that when the data_format parameter is "channels_last", the input should be a 4D tensor: (samples, rows, cols, channels). But all I have is the tf.data.Dataset I received after loading the dataset. So my question is: how do I convert the tf.data.Dataset into a 4D tensor so that I can feed it to the model? Could someone show me the code or point me to a suitable article? Here is my code

import tensorflow as tf
import tensorflow_datasets as tfds

builder = tfds.builder('mnist')
builder.download_and_prepare()

(raw_train, raw_test) = builder.as_dataset(split=[tfds.Split.TRAIN, tfds.Split.TEST])

model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(filters=6, kernel_size=5, activation=tf.keras.activations.sigmoid, input_shape=(28, 28 ,1)))
model.add(tf.keras.layers.AveragePooling2D(pool_size=(2, 2), strides=2))
model.add(tf.keras.layers.Conv2D(filters=16, kernel_size=5, activation=tf.keras.activations.sigmoid))
model.add(tf.keras.layers.AveragePooling2D(pool_size=(2, 2), strides=2))
# model.add(Flatten())
model.add(tf.keras.layers.Dense(120, activation=tf.keras.activations.sigmoid))
model.add(tf.keras.layers.Dense(84, activation=tf.keras.activations.sigmoid))
model.add(tf.keras.layers.Dense(10))

result = model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])

# model.summary()
train_images = []
train_labels = []
for i in raw_train:
    image = i["image"]
    # image = image.reshape(-1,28, 28, 1) 
    train_images.append(image)

    label = i["label"]
    train_labels.append(label)

model.fit(train_images, train_labels, epochs=10)
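(For reference, a minimal sketch of the usual fix, using the tf.data map/batch API with a synthetic stand-in for the tfds dataset; the helper name `to_pair` is my own. Mapping each feature dict to an (image, label) tuple and then batching prepends the batch dimension, which is where the 4th dimension comes from:)

```python
import tensorflow as tf

# Synthetic stand-in for the tfds dataset: dicts of 28x28x1 "images"
# and integer labels. A real run would use builder.as_dataset instead.
records = tf.data.Dataset.from_tensor_slices({
    "image": tf.zeros([10, 28, 28, 1], dtype=tf.uint8),
    "label": tf.range(10, dtype=tf.int64),
})

def to_pair(record):
    # Turn the feature dict into an (image, label) tuple for model.fit.
    return tf.cast(record["image"], tf.float32) / 255.0, record["label"]

# batch() prepends the batch dimension, giving the 4D shape Conv2D
# expects: (batch, 28, 28, 1).
train_ds = records.map(to_pair).batch(4)

images, labels = next(iter(train_ds))
print(images.shape)  # (4, 28, 28, 1)
```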

Update:

I have already tried the suggestion to convert the lists to a numpy.array. I tried it, but could not wait for the result, because the conversion used only one CPU core, at 100%. After 30 minutes I still had no result. I think there should be another, more correct way. I am looking into batching and prefetching, but either way I don't know yet how to get a result.
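(The suggested list-to-array conversion presumably looked something like the sketch below; `np.stack` runs single-threaded in NumPy, which would explain the one busy core. The list sizes here are synthetic stand-ins:)

```python
import numpy as np
import tensorflow as tf

# A list of 28x28x1 tensors, as collected by the loop over the dataset.
train_images = [tf.zeros([28, 28, 1]) for _ in range(1000)]

# Stacking converts the Python list into one 4D array of shape
# (N, 28, 28, 1); for 60,000 images this is slow on a single core.
images_4d = np.stack([t.numpy() for t in train_images])
print(images_4d.shape)  # (1000, 28, 28, 1)
```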

The new code looks like this

import tensorflow as tf
tf.debugging.set_log_device_placement(True)
import tensorflow_datasets as tfds
import numpy as np

builder = tfds.builder('mnist')
builder.download_and_prepare()

(raw_train, raw_test) = builder.as_dataset(split=[tfds.Split.TRAIN, tfds.Split.TEST])

raw_train = raw_train.batch(128).prefetch(128)
raw_test = raw_test.batch(128).prefetch(128)

model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(filters=6, kernel_size=5, activation=tf.keras.activations.sigmoid, input_shape=(28, 28 ,1)))
model.add(tf.keras.layers.AveragePooling2D(pool_size=(2, 2), strides=2))
model.add(tf.keras.layers.Conv2D(filters=16, kernel_size=5, activation=tf.keras.activations.sigmoid))
model.add(tf.keras.layers.AveragePooling2D(pool_size=(2, 2), strides=2))
# model.add(Flatten())
model.add(tf.keras.layers.Dense(120,activation=tf.keras.activations.sigmoid))
model.add(tf.keras.layers.Dense(84,activation=tf.keras.activations.sigmoid))
model.add(tf.keras.layers.Dense(10))

result = model.compile(optimizer=tf.keras.optimizers.Adam(),
                       loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
                       metrics=['accuracy'])

train_images = []
train_labels = []
for i in raw_train:
    image = i["image"]
    train_images.append(image)

    label = i["label"]
    train_labels.append(label)

with tf.device('/GPU:0'):
    model.fit(train_images, train_labels, epochs=10)

I still get an error: ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), for inputs ['dense_5'], but instead got the following list of 469 arrays: [
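(The 469 arrays are the 469 batches that .batch(128) produces over the 60,000 training examples, since 60000 / 128 ≈ 469; the loop collects one tensor per batch rather than one per example. A sketch of the usual remedy, with a small synthetic stand-in dataset and model, is to map the dicts to (image, label) tuples and pass the dataset itself to fit. Note that for 10 classes with integer labels, SparseCategoricalCrossentropy is the usual loss rather than BinaryCrossentropy:)

```python
import tensorflow as tf

# Synthetic stand-in for the batched MNIST dataset: dicts -> tuples.
ds = tf.data.Dataset.from_tensor_slices({
    "image": tf.zeros([256, 28, 28, 1]),
    "label": tf.zeros([256], dtype=tf.int64),
}).map(lambda r: (r["image"], r["label"])).batch(128)

# Tiny stand-in model, just to show the fit call.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

# Keras iterates over the batches itself; no Python lists are needed.
history = model.fit(ds, epochs=1, verbose=0)
```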

python tensorflow tensorflow2.0 tensorflow-datasets
2 Answers

Answer 1 (0 votes):

This is not exactly a correct answer... because it solves my problem but creates another one.) First, I think converting a list of tensors into one 4D tensor is not the best idea, because it is done with NumPy on a single core of my CPU, and I could not wait for the result of that operation. I found another approach that I like more and that, from my point of view, looks better: it uses tf.data.Dataset.map, then batch, and then the prefetch function. I don't know yet how to use them properly, but I am digging in that direction. For now I think this is the more correct and promising way.

import tensorflow as tf
tf.debugging.set_log_device_placement(True)
import tensorflow_datasets as tfds
import numpy as np


builder = tfds.builder('mnist')
builder.download_and_prepare()

(raw_train, raw_test) = builder.as_dataset(split=[tfds.Split.TRAIN, tfds.Split.TEST],shuffle_files=False)

def divide(record):
    image = record["image"]
    label = record["label"]
    return image,label

train_ds = raw_train.map(divide , num_parallel_calls=tf.data.experimental.AUTOTUNE).batch(128).prefetch(128)
test_ds = raw_test.map(divide , num_parallel_calls=tf.data.experimental.AUTOTUNE).batch(128).prefetch(128)
print(type(train_ds))

model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(filters=6, kernel_size=(5,5), activation=tf.keras.activations.sigmoid, input_shape=(28, 28 ,1)))
model.add(tf.keras.layers.AveragePooling2D(pool_size=(2, 2), strides=2))
model.add(tf.keras.layers.Conv2D(filters=16, kernel_size=(5,5), activation=tf.keras.activations.sigmoid))
model.add(tf.keras.layers.AveragePooling2D(pool_size=(2, 2), strides=2))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(120, activation=tf.keras.activations.sigmoid))
model.add(tf.keras.layers.Dense(84, activation=tf.keras.activations.sigmoid))
model.add(tf.keras.layers.Dense(10))

model.summary()

# model = tf.keras.Sequential([
#     tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
#     tf.keras.layers.Dense(128, activation='relu'),
#     tf.keras.layers.Dense(10)
# ])

result = model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])

# for image_batch, label_batch in train_ds.take(1):
#     print("Image shape: ", image_batch.numpy().shape)
#     print("Label: ", label_batch.numpy())

image_batch, label_batch = next(iter(train_ds))

print(image_batch.shape)
print(label_batch.shape)

with tf.device('/GPU:0'):
    model.fit(image_batch, label_batch, epochs=10)

This code also doesn't work, but I think that is because the model wants 32x32 instead of the 28x28 I am feeding it. I am still working on it; if you know how to fix the problem and how to process the data in batches, please let me know. Thanks for the suggestions.


Answer 2 (0 votes):

Not finished yet, but here is the next step. To make the previous code work, I added one_hot_y = tf.one_hot(label_batch, 10) (and resized the images to 32x32 with tf.image.resize_with_pad).

import tensorflow as tf
tf.debugging.set_log_device_placement(True)
import tensorflow_datasets as tfds
import numpy as np
from time import time

builder = tfds.builder('mnist')
builder.download_and_prepare()

(raw_train, raw_test) = builder.as_dataset(split=[tfds.Split.TRAIN, tfds.Split.TEST],shuffle_files=False)

def divide(record):
    image = record["image"]
    image = tf.image.resize_with_pad(image, 32,32)
    label = record["label"]
    return image,label

train_ds = raw_train.map(divide, num_parallel_calls=tf.data.experimental.AUTOTUNE).batch(128).prefetch(128)
test_ds = raw_test.map(divide , num_parallel_calls=tf.data.experimental.AUTOTUNE).batch(128).prefetch(128)
print(type(train_ds))

model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(filters=6, kernel_size=(5,5), activation=tf.keras.activations.sigmoid, input_shape=(32, 32 ,1)))
model.add(tf.keras.layers.AveragePooling2D(pool_size=(2, 2), strides=2))
model.add(tf.keras.layers.Conv2D(filters=16, kernel_size=(5,5), activation=tf.keras.activations.sigmoid))
model.add(tf.keras.layers.AveragePooling2D(pool_size=(2, 2), strides=2))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(120, activation=tf.keras.activations.sigmoid))
model.add(tf.keras.layers.Dense(84, activation=tf.keras.activations.sigmoid))
model.add(tf.keras.layers.Dense(10))

model.summary()

result = model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])

# for image_batch, label_batch in train_ds.take(1):
#     print("Image shape: ", image_batch.numpy().shape)
#     print("Label: ", label_batch.numpy())

image_batch, label_batch = next(iter(train_ds))

print(image_batch.shape)
print(label_batch.shape)

one_hot_y = tf.one_hot(label_batch, 10)
print(one_hot_y.shape)

with tf.device('/GPU:0'):
    model.fit(image_batch, one_hot_y, epochs=10)

The next step is to figure out the right way to iterate over all the items in the dataset in batches.
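(A sketch of that next step, using a small synthetic stand-in for raw_train and a minimal stand-in model: passing the batched dataset to fit makes Keras consume every batch each epoch, so the manual next(iter(...)) single-batch step is no longer needed. The one_hot call can live inside the map function; with integer labels and SparseCategoricalCrossentropy it could be dropped entirely:)

```python
import tensorflow as tf

# Synthetic stand-in for raw_train: dicts of 28x28x1 images and labels.
raw_train = tf.data.Dataset.from_tensor_slices({
    "image": tf.zeros([256, 28, 28, 1], dtype=tf.uint8),
    "label": tf.zeros([256], dtype=tf.int64),
})

def divide(record):
    # Pad 28x28 up to 32x32, as in the answer above.
    image = tf.image.resize_with_pad(tf.cast(record["image"], tf.float32), 32, 32)
    # One-hot encode inside the pipeline so every batch arrives encoded;
    # CategoricalCrossentropy is the usual loss for one-hot 10-class targets.
    label = tf.one_hot(record["label"], 10)
    return image, label

train_ds = raw_train.map(divide).batch(128).prefetch(tf.data.experimental.AUTOTUNE)

# Tiny stand-in model, just to show the full-dataset fit call.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(32, 32, 1)),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

# fit() iterates over every batch of the dataset in each epoch.
history = model.fit(train_ds, epochs=2, verbose=0)
```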
