RuntimeError: stack expects each tensor to be equal size, but got [32, 1] at entry 0 and [32, 0] at entry 1


I have a very large tensor of shape (512, 3, 224, 224). I feed it through the model in batches of 32, and I save the scores corresponding to the target label, which is 2 (label = torch.ones(1) * 2). On every iteration, after the slicing step, the shape of scores changes, which leads to the error below. What exactly am I doing wrong, and how can I fix it?

def sub_forward(self, x):
    x = self.vgg16(x)
    x = self.bn1(x)
    x = self.linear1(x)
    x = self.linear2(x)
    return x

def get_scores(self, imgs, targets):
    b, _, _, _ = imgs.shape
    batch_size = 32
    total_scores = []
    for i in range(0, b, batch_size):
        scores = self.sub_forward(imgs[i:i+batch_size, :, :, :])
        scores = F.softmax(scores, dim=1)
        labels = targets[i:i+batch_size]   # empty once i exceeds len(targets)
        labels = labels.long()
        scores = scores[:, labels]         # shape tracks len(labels): (32, 1), then (32, 0)
        print(i, " scores: ", scores)
        total_scores.append(scores)
        print(i, " total_scores: ", total_scores)
    total_scores = torch.stack(total_scores)  # fails: the collected tensors differ in size
    return scores                              # note: total_scores was presumably intended

The printed output:
0  scores:  tensor([[0.0811],
        [0.0918],
        [0.0716],
        [0.1680],
        [0.1689],
        [0.1319],
        [0.1556],
        [0.2966],
        [0.0913],
        [0.1238],
        [0.1480],
        [0.1215],
        [0.2524],
        [0.1283],
        [0.1603],
        [0.1282],
        [0.2668],
        [0.1146],
        [0.2043],
        [0.2475],
        [0.0865],
        [0.1869],
        [0.0860],
        [0.1979],
        [0.1677],
        [0.1983],
        [0.2623],
        [0.1975],
        [0.1894],
        [0.3299],
        [0.1970],
        [0.1094]], device='cuda:0')
0  total_scores:  [tensor([[0.0811],
        [0.0918],
        ...
        [0.1970],
        [0.1094]], device='cuda:0')]
32  scores:  tensor([], device='cuda:0', size=(32, 0))
32  total_scores:  [tensor([[0.0811],
        [0.0918],
        ...
        [0.1094]], device='cuda:0'), tensor([], device='cuda:0', size=(32, 0))]
> RuntimeError: stack expects each tensor to be equal size, but got [32, 1] at entry 0 and [32, 0] at entry 1
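
For what it's worth, the mismatch can be reproduced without any model. Since label = torch.ones(1) * 2, targets holds a single element: targets[0:32] has length 1, so the first batch yields a (32, 1) slice, while targets[32:64] is empty and yields (32, 0). A minimal sketch (the class count of 10 is a made-up stand-in):

import torch

scores = torch.rand(32, 10)        # stand-in for one batch of class scores
targets = torch.ones(1) * 2        # single-element target, as in the question

labels = targets[0:32].long()      # length 1
print(scores[:, labels].shape)     # torch.Size([32, 1])

labels = targets[32:64].long()     # empty slice
print(scores[:, labels].shape)     # torch.Size([32, 0])

If every image is scored against the same class, index with a plain integer range (scores[:, 2:3]), and join the per-batch results with torch.cat(total_scores, dim=0) rather than torch.stack, which would also fail on a smaller trailing batch.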
1 Answer

Honestly, I don't know what is going on in your code, but you shouldn't be doing batching by hand like this. Use a Dataset instead.

import torch

class MyDataloader(torch.utils.data.Dataset):
    def __init__(self):
        # placeholder images; replace with your real (512, 3, 224, 224) tensor
        self.images = torch.Tensor(512, 3, 224, 224)

    def __len__(self):
        return 512

    def __getitem__(self, idx):
        # every sample is paired with the fixed target label 2
        return self.images[idx, :, :, :], torch.ones(1) * 2

train_data = MyDataloader()
train_loader = torch.utils.data.DataLoader(train_data,
                                           shuffle=True,
                                           num_workers=2,
                                           batch_size=32)

for batch_images, targets in train_loader:
    print(batch_images.shape)  # should be 32 x 3 x 224 x 224

    ...  # train your model here
    logits = model(batch_images, targets)
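
Building on that loader, the original goal of collecting the softmax score of class 2 for every image could look like the sketch below; it assumes model takes only the image batch and returns logits of shape (batch, num_classes):

import torch.nn.functional as F

all_scores = []
with torch.no_grad():
    for batch_images, targets in train_loader:
        logits = model(batch_images)         # assumed: images in, (batch, num_classes) logits out
        probs = F.softmax(logits, dim=1)     # normalize over the class dimension
        all_scores.append(probs[:, 2])       # keep the score of target class 2

total_scores = torch.cat(all_scores, dim=0)  # cat tolerates a smaller last batch, stack does not
print(total_scores.shape)                    # torch.Size([512])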
