Input_size error in PyTorch's LSTM: RuntimeError: shape '[10, 30, 1]' is invalid for input of size 150

Question · votes: 0 · answers: 3

Hi everyone. I am using an LSTM to predict the stock index for a given day, using the stock index of the previous 30 days as input. I think the LSTM input in this example should have the shape [10, 30, 1], so I use t_x = x.view(10, 30, 1) to reshape the input. But when I run the code below I get a RuntimeError (shape '[10, 30, 1]' is invalid for input of size 150). Could you help me find the problem? Thanks :)

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
from torch.utils.data import TensorDataset


dataread_df=pd.read_csv('D:/Desktop/399300.csv')
dataread_series=pd.Series(dataread_df['value'].values)
plt.plot(dataread_series)
plt.show()


def generate_data_df(series, n):
    if len(series) <= n:
        raise Exception("The Length of series is %d, while affect by (n=%d)." % (len(series), n))
    df = pd.DataFrame()
    for i in range(n):
        df['x%d' % i] = series.tolist()[i:-(n - i)]
    df['y'] = series.tolist()[n:]
    return df
data_df = generate_data_df(dataread_series, 30)

data_numpy=np.array(data_df)
mean=np.mean(data_numpy)
std=np.std(data_numpy)
data_numpy = (data_numpy-mean)/std
train_size=int(len(data_numpy)*0.7)
test_size=len(data_numpy)-train_size
trainset_np=data_numpy[:train_size]
testset_np=data_numpy[train_size:]
train_x_np=trainset_np[:,:30]
train_y_np=trainset_np[:,30:]
test_x_np=testset_np[:,:30]
test_y_np=testset_np[:,30:]

train_x=torch.Tensor(train_x_np)
train_y=torch.Tensor(train_y_np)
test_x=torch.Tensor(test_x_np)
test_y=torch.Tensor(test_y_np)
trainset=TensorDataset(train_x,train_y)
testset=TensorDataset(test_x,test_y)
trainloader = DataLoader(trainset, batch_size=10, shuffle=True)
testloader=DataLoader(testset,batch_size=10,shuffle=True)

class Net(nn.Module):
    def __init__(self):
        super(Net,self).__init__()
        self.rnn=nn.LSTM(input_size=1,hidden_size=64,num_layers=1,batch_first=True)
        self.out=nn.Sequential(nn.Linear(64,1))
    def forward(self,x):
        r_out,(h_n,h_c)=self.rnn(x,None)
        out=self.out(r_out[:,-1,:])
        return out
rnn = Net()
print(rnn)

optimizer = torch.optim.Adam(rnn.parameters(), lr=0.0001)  
criterion = nn.MSELoss()
train_correct=0
test_correct=0
train_total=0
test_total=0
prediction_list=[]

for epoch in range(10):
    running_loss_train=0
    running_loss_test=0
    for i,(x1,y1) in enumerate(trainloader):
        t_x1=x1.view(10,30,1)
        output=rnn(t_x1)
        loss_train=criterion(output,y1)
        optimizer.zero_grad() 
        loss_train.backward() 
        optimizer.step()
        running_loss_train+=loss_train.item()
    for i,(x2,y2) in enumerate(testloader):
        t_x2=x2.view(10,30,1)
        prediction=rnn(t_x2)
        loss_test=criterion(prediction,y2)
        running_loss_test+=loss_test.item()
        prediction_list.append(prediction)
    print('Epoch {} Train Loss:{}, Test Loss:{}'.format(epoch+1,running_loss_train,running_loss_test))
    prediction_list_plot=np.array(prediction_list)
    plt.plot(test_y_np.flatten(),'r-',linewidth=0.1,label='real data')
    plt.plot(prediction_list_plot.flatten(),'b-',linewidth=0.1,label='predicted data')
    plt.show()
print('Finish training')

RuntimeError:

RuntimeError                              Traceback (most recent call last)
<ipython-input-3-fb8cb4c93775> in <module>
     71     running_loss_test=0
     72     for i,(x1,y1) in enumerate(trainloader):
---> 73         t_x1=x1.view(10,30,1)
     74         output=rnn(t_x1)
     75         loss_train=criterion(output,y1)

RuntimeError: shape '[10, 30, 1]' is invalid for input of size 150
python lstm pytorch recurrent-neural-network
3 Answers
0 votes

Change this

t_x1=x1.view(10,30,1)

to

t_x1=x1.view(-1,30,1)

and try it.


0 votes

From the documentation of the view() method: "The returned tensor shares the same data and must have the same number of elements, but may have a different size."

x1 = torch.randn((150,))
t_x1 = x1.view(10,30,1)

RuntimeError: shape '[10, 30, 1]' is invalid for input of size 150

This happens because 150 != 10 * 30. If you want to use 30 time steps, then your number of samples should be 150 / 30 = 5. So the correct call is

t_x1 = x1.view(5,30,1)
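The element-count rule above can be checked directly: view() succeeds only when the product of the target dimensions equals the number of elements in the tensor. A minimal sketch reproducing both cases:

```python
import torch

x1 = torch.randn(150)           # 150 elements, e.g. 5 windows of 30 time steps

ok = x1.view(5, 30, 1)          # valid: 5 * 30 * 1 == 150 == x1.numel()
print(ok.shape)                 # torch.Size([5, 30, 1])

try:
    x1.view(10, 30, 1)          # invalid: 10 * 30 * 1 == 300 != 150
except RuntimeError as e:
    print(e)                    # shape '[10, 30, 1]' is invalid for input of size 150
```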

0 votes

Given that you use batch_first=True and assuming a batch size of 10, (10, 30, 1) is the correct shape for the input, since it is (batch_size, seq_len, input_size).

The question is where the 150 comes from. What is the shape of x1 before you try to apply .view(...) to it? Can you check the following:

for i,(x1,y1) in enumerate(trainloader):
    print(x1.shape)
    ...

Intuitively, it should look like (10, ???), since you set 10 as the batch size. Right now I assume something is off with your training and test data.
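One plausible source of the size-150 batch, assuming the training set length is not a multiple of 10: the DataLoader's final batch is smaller than batch_size (here, 5 samples of 30 steps = 150 elements), so the hard-coded view(10, 30, 1) fails on that last batch. Passing -1 for the batch dimension, or drop_last=True, avoids this. A sketch under that assumption (the 35-sample dataset is illustrative, not from the question):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# 35 samples of 30 time steps: with batch_size=10, the last batch holds only 5
x = torch.randn(35, 30)
y = torch.randn(35, 1)
loader = DataLoader(TensorDataset(x, y), batch_size=10)

for xb, yb in loader:
    t_x = xb.view(-1, 30, 1)      # -1 infers the actual batch size (10 or 5)
    # t_x = xb.view(10, 30, 1)    # would raise on the final 5-sample batch

# Alternatively, discard the incomplete final batch entirely:
loader = DataLoader(TensorDataset(x, y), batch_size=10, drop_last=True)
```

With view(-1, 30, 1) the loss is still computed against the matching y batch, so no samples are silently mismatched; drop_last=True instead trades a few training samples for uniform batch shapes.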
