(pytorch / mse) How do I change the shape of a tensor?


Problem definition:

I have to use the MSELoss function to define the loss for a classification problem. Because of that, it keeps raising an error message about the tensor shapes.

The full error message:

torch.Size([32, 10]) torch.Size([32])
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
     53 output = model.forward(images)
     54 print(output.shape, labels.shape)
---> 55 loss = criterion(output, labels)
     56 loss.backward()
     57 optimizer.step()

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    530             result = self._slow_forward(*input, **kwargs)
    531         else:
--> 532             result = self.forward(*input, **kwargs)
    533         for hook in self._forward_hooks.values():
    534             hook_result = hook(self, input, result)

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/loss.py in forward(self, input, target)
    429 
    430     def forward(self, input, target):
--> 431         return F.mse_loss(input, target, reduction=self.reduction)
    432 
    433 

/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py in mse_loss(input, target, size_average, reduce, reduction)
   2213         ret = torch.mean(ret) if reduction == 'mean' else torch.sum(ret)
   2214     else:
-> 2215         expanded_input, expanded_target = torch.broadcast_tensors(input, target)
   2216         ret = torch._C._nn.mse_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction))
   2217     return ret

/opt/conda/lib/python3.7/site-packages/torch/functional.py in broadcast_tensors(*tensors)
     50             [0, 1, 2]])
     51     """
---> 52     return torch._C._VariableFunctions.broadcast_tensors(tensors)
     53 
     54 

RuntimeError: The size of tensor a (10) must match the size of tensor b (32) at non-singleton dimension 1

How should I reshape the tensors, and which one (the output or the labels) should I change so that the loss can be computed?
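For reference, the mismatch can be reproduced outside the training loop. Below is a minimal sketch using random stand-in tensors (only the shapes are taken from the real code):

import torch
from torch import nn

criterion = nn.MSELoss()
output = torch.randn(32, 10)           # same shape as the model output
labels = torch.randint(0, 10, (32,))   # class indices, same shape as the real labels
# criterion(output, labels.float())    # RuntimeError: The size of tensor a (10)
#                                      # must match the size of tensor b (32)
#                                      # at non-singleton dimension 1
target = torch.randn(32, 10)           # a target with the same shape as the input works
print(criterion(output, target))       # MSELoss needs input and target of matching shapes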

The full code is attached below.

import numpy as np
import torch

# Loading the Fashion-MNIST dataset
from torchvision import datasets, transforms

# Get GPU Device
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

transform = transforms.Compose([transforms.ToTensor(),
                                    transforms.Normalize((0.5,), (0.5,))])
# Download and load the training data
trainset = datasets.FashionMNIST('MNIST_data/', download = True, train = True, transform = transform)
testset = datasets.FashionMNIST('MNIST_data/', download = True, train = False, transform = transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size = 32, shuffle = True, num_workers=4)
testloader = torch.utils.data.DataLoader(testset, batch_size = 32, shuffle = True, num_workers=4)

# Examine a sample
dataiter = iter(trainloader)
images, labels = next(dataiter)

# Define the network architecture
from torch import nn, optim
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(784, 128),
                      nn.ReLU(),
                      nn.Linear(128, 10),
                      nn.LogSoftmax(dim = 1))
model.to(device)

# Define the loss
criterion = nn.MSELoss()

# Define the optimizer
optimizer = optim.Adam(model.parameters(), lr = 0.001)

# Define the epochs
epochs = 5

train_losses, test_losses = [], []

for e in range(epochs):
  running_loss = 0
  for images, labels in trainloader:
    # Flatten Fashion-MNIST images into a 784 long vector
    images = images.to(device)
    labels = labels.to(device)
    images = images.view(images.shape[0], -1)

    # Training pass
    optimizer.zero_grad()
    output = model.forward(images)
    print(output.shape, labels.shape)
    loss = criterion(output, labels)
    loss.backward()
    optimizer.step()

    running_loss += loss.item()
  else:
    test_loss = 0
    accuracy = 0

    # Turn off gradients for validation, saves memory and computation
    with torch.no_grad():
      # Set the model to evaluation mode
      model.eval()

      # Validation pass
      for images, labels in testloader:
        images = images.to(device)
        labels = labels.to(device)
        images = images.view(images.shape[0], -1)
        ps = model(images)
        test_loss += criterion(ps, labels)
        top_p, top_class = ps.topk(1, dim = 1)
        equals = top_class == labels.view(*top_class.shape)
        accuracy += torch.mean(equals.type(torch.FloatTensor))

    model.train()

    print("Epoch: {}/{}..".format(e+1, epochs),
          "Training loss: {:.3f}..".format(running_loss/len(trainloader)),
          "Test loss: {:.3f}..".format(test_loss/len(testloader)),
          "Test Accuracy: {:.3f}".format(accuracy/len(testloader)))
Tags: pytorch, mnist, mse
1 Answer

From the shapes printed right before the error: torch.Size([32, 10]) torch.Size([32])

The one on the left is what the model gives you; the one on the right comes from the trainloader, and it is the kind of label you would normally feed to nn.CrossEntropyLoss (see the alternative sketch at the end of this answer).

And from the full error log, the error comes from this line:

loss = criterion(output, labels)

The way to make this work is called one-hot encoding. If it were me, being lazy, I would write it like this:

ones = torch.sparse.torch.eye(10).to(device)  # 10 = the number of classes
labels = ones.index_select(0, labels)
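Plugged into the training loop, the conversion goes right before the loss call. A short sketch reusing the variable names from the question (torch.sparse.torch.eye(10) is just a dense 10x10 identity, so this is plain row selection); keep the original integer labels around for the accuracy check in the validation loop:

output = model.forward(images)                # shape [32, 10]
labels_onehot = ones.index_select(0, labels)  # shape [32, 10]
loss = criterion(output, labels_onehot)       # shapes now match for MSELoss

# On recent PyTorch versions the same targets can be built directly:
# labels_onehot = torch.nn.functional.one_hot(labels, num_classes=10).float()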
    
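If changing the loss is an option instead of one-hot encoding the labels, the integer labels of shape [32] can be used as-is. A minimal sketch of that route (an assumption, not what the question asks for): drop the final LogSoftmax layer, because nn.CrossEntropyLoss applies log-softmax itself; alternatively, with the LogSoftmax layer kept, nn.NLLLoss() accepts the same integer labels without any model change.

model = nn.Sequential(nn.Linear(784, 128),
                      nn.ReLU(),
                      nn.Linear(128, 10)).to(device)  # no LogSoftmax here
criterion = nn.CrossEntropyLoss()
# inside the loop, the [32]-shaped class indices are accepted directly:
# loss = criterion(model(images), labels)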