Predicting time series values with dilated causal convolutions in TensorFlow 2.0

Problem description

My task is to predict the next time-series value from the previous 200 time steps (similar to WaveNet), using TensorFlow 2.0.0 and Python 3.6. I get the following error message:

ValueError: A target array with shape (495, 1, 1) was passed for an output of shape (None, 200, 1) while using as loss `mean_squared_error`. This loss expects targets to have the same shape as the output.

My code:

import tensorflow as tf
import tensorflow.keras as k
import numpy as np

batch_size = 495
epochs = 5
learning_rate = 0.001
dilations = 7
seq_length=200

class TCNBlock(k.Model):
    def __init__(self, dilation, seq_length):
        super(TCNBlock, self).__init__()
        self.seq_length = seq_length

        self.convolution0 = k.layers.Conv1D(8, kernel_size=4, strides=1, padding='causal', dilation_rate=dilation)
        self.BatchNorm0 = k.layers.BatchNormalization(momentum=0.6)
        self.relu0 = k.layers.ReLU()
        self.dropout0 = k.layers.Dropout(rate=0.2)

        self.convolution1 = k.layers.Conv1D(8, kernel_size=4, strides=1, padding='causal', dilation_rate=dilation)
        self.BatchNorm1 = k.layers.BatchNormalization(momentum=0.6)
        self.relu1 = k.layers.ReLU()
        self.dropout1 = k.layers.Dropout(rate=0.2)
        self.residual = k.layers.Conv1D(1, kernel_size=1, padding='same')


    def build_block(self, dilation, training=False):
        inputs = k.Input(shape=(self.seq_length, 1))
        output_layer1 = self.convolution0(inputs)
        output_layer2 = self.BatchNorm0(output_layer1)
        output_layer3 = self.relu0(output_layer2)
        output_layer4 = self.dropout0(output_layer3, training=training)
        output_layer5 = self.convolution1(output_layer4)
        output_layer6 = self.BatchNorm1(output_layer5)
        output_layer7 = self.relu1(output_layer6)
        output = self.dropout1(output_layer7, training=training)
        residual = self.residual(output)
        outputs = k.layers.add([inputs, residual])

        return k.models.Model(inputs=inputs, outputs=outputs)


def build_model():
    mdl = k.models.Sequential()
    for dilation in range(dilations):
        dilation_actual = int(np.power(2, dilation))
        block = TCNBlock(dilation_actual, seq_length).build_block(dilation_actual)
        mdl.add(block)
    return mdl


Model_complete = build_model()
opt = k.optimizers.Adam(learning_rate=learning_rate)
Model_complete.compile(loss='mean_squared_error', optimizer=opt, metrics=["accuracy"])

# Train Model
training_process = Model_complete.fit(x_train, y_train, epochs=epochs, verbose=1, batch_size=batch_size, validation_split=0.1)

My data has the following shapes:

x_train.shape = (495, 200, 1) 
y_train.shape = (495, 1, 1)
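
A quick way to see where the (None, 200, 1) in the error comes from is to push a dummy batch through the model (a minimal sanity check, assuming the model is built exactly as above): every convolution uses padding='causal' with stride 1, so the stacked blocks preserve the full 200-step time dimension.

# Minimal shape check, assuming Model_complete from the code above.
dummy = tf.zeros((1, seq_length, 1))
print(Model_complete(dummy).shape)  # (1, 200, 1) -- one value per time step
print(y_train.shape)                # (495, 1, 1) -- one target per sample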

I would appreciate any help and suggestions. Thank you!

python tensorflow shapes
1 Answer

An important step in time-series analysis/prediction is to prepare the data in the shape the model expects in order to make the prediction.

The function that performs this important step is shown below:

import numpy as np

# Builds sliding windows of length history_size from `dataset` and the
# corresponding targets drawn from `target`.
def multivariate_data(dataset, target, start_index, end_index, history_size,
                      target_size, step, single_step=False):
  data = []
  labels = []

  start_index = start_index + history_size
  if end_index is None:
    end_index = len(dataset) - target_size

  for i in range(start_index, end_index):
    indices = range(i-history_size, i, step)
    data.append(dataset[indices])

    if single_step:
      labels.append(target[i+target_size])
    else:
      labels.append(target[i:i+target_size])

  return np.array(data), np.array(labels)

The important arguments of the above function that need explaining are history_size and target_size.

In your case, history_size will be 200 and target_size will be 1.
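
A minimal usage sketch (with a hypothetical 1-D series raw_series standing in for your real data, which is not shown in the question):

# Hypothetical stand-in for the real series; replace with your own data.
raw_series = np.sin(np.linspace(0, 100, 1000)).reshape(-1, 1)

x_train, y_train = multivariate_data(
    dataset=raw_series,
    target=raw_series[:, 0],
    start_index=0,
    end_index=None,
    history_size=200,   # 200 past time steps per window
    target_size=1,      # predict the value one step ahead
    step=1,
    single_step=True)

print(x_train.shape)  # (799, 200, 1)
print(y_train.shape)  # (799,) -- reshape to (799, 1) if your model expects a trailing feature axis

With single_step=True and target_size=1, every 200-step window is paired with exactly one target value.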

I have a sincere suggestion for you (please don't take it the wrong way; it is genuinely meant to help). Since you are relatively new to TensorFlow, I would ask you to work through the Time Series tutorial on the TensorFlow website, which uses LSTMs, understand it completely, and then try to implement it with dilated convolutions.
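
One more point worth noting: with single-step targets the network itself also has to emit a single value per window, while the model in the question emits one value per time step. A rough sketch of one possible way to do this (an assumption on my side, not the only option: simply keep the last time step of the stack's output):

def build_single_step_model():
    # Rough sketch: reuse the dilated causal stack from the question and
    # keep only the last time step, so the output becomes (None, 1).
    mdl = build_model()
    mdl.add(k.layers.Lambda(lambda t: t[:, -1, :]))
    return mdl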

Hope this helps. Happy learning!
