PyTorch: CNN model does not recognize the batch size

Question (votes: 0, answers: 1)

I need to build a dataset that is not pre-built in PyTorch. The images I am using come from the URFD dataset, but I first need to compute optical flow with PCWNet. Since that is a heavy task, I am using MNIST handwritten digits just to test my model. The problem is that after I build the dataset, the model does not recognize the batch dimension as a batch; it treats it as a data dimension instead.

```python
train_dat = torch.utils.data.TensorDataset(torch.tensor(X_train_sample_).to(device),
                                           torch.tensor(y_train).to(device))
test_dat = torch.utils.data.TensorDataset(torch.tensor(X_test_sample_).to(device),
                                          torch.tensor(y_test).to(device))

batch_size = 32
train_dataloader = torch.utils.data.DataLoader(train_dat,
                                               shuffle=True,
                                               num_workers=8,
                                               batch_size=batch_size)
test_dataloader = torch.utils.data.DataLoader(test_dat,
                                              shuffle=True,
                                              num_workers=8,
                                              batch_size=batch_size)
```
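One thing to note about this setup (an aside, not the error reported below): the dataset tensors are moved to `device` before being wrapped, while the `DataLoader` spawns `num_workers=8` worker processes. CUDA tensors inside DataLoader workers are a known source of initialization errors, so the usual pattern is to keep the dataset on the CPU and move each batch inside the loop. A minimal sketch of that pattern, reusing the question's variable names:

```python
# Sketch of the usual pattern: the dataset stays on the CPU and each
# batch is moved to the GPU in the loop (CUDA tensors combined with
# num_workers > 0 is a known trouble spot).
train_dat = torch.utils.data.TensorDataset(torch.tensor(X_train_sample_),
                                           torch.tensor(y_train))
train_dataloader = torch.utils.data.DataLoader(train_dat, shuffle=True,
                                               num_workers=8, batch_size=32)

for img, label in train_dataloader:
    img, label = img.to(device), label.to(device)  # per-batch transfer
    ...
```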

I check that everything is in order:

```python
for i, data in enumerate(train_dataloader, 0):
    inputs, labels = data
    print(i)
    print(inputs.shape)
    print(labels.shape)
    break
```

which prints:

```
torch.Size([32, 1, 28, 28])
torch.Size([32, 1, 10])
```

A batch of 32 of the expected 1x28x28 images together with their corresponding labels.
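The printed label shape `[32, 1, 10]` stands out: it looks like one-hot labels with an extra singleton dimension. `nn.CrossEntropyLoss` expects targets that are either class indices of shape `[N]` (dtype long) or class probabilities of shape `[N, C]` matching the model output, so, assuming the labels really are one-hot, something along these lines would be needed before the loss call (a sketch, not code from the question):

```python
# Sketch, assuming the labels are one-hot with a stray singleton dim:
# nn.CrossEntropyLoss wants class indices [N] (long) or probabilities
# [N, C]; the printed target shape [32, 1, 10] is neither.
label = label.squeeze(1)          # [32, 1, 10] -> [32, 10]
label = label.argmax(dim=1)       # [32, 10]    -> [32] class indices
loss = criterion(output1, label)
```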

Here is the model:

```python
class CNN(nn.Module):

    def __init__(self):
        super(CNN, self).__init__()

        # Setting up the Sequential of CNN Layers
        self.cnn1 = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=3, stride=1, padding_mode='zeros'),
            nn.BatchNorm1d(26, track_running_stats=True),
            nn.ReLU(inplace=True),

            nn.Conv2d(64, 128, kernel_size=3, stride=1, padding_mode='zeros'),
            nn.BatchNorm1d(24, track_running_stats=True),
            nn.ReLU(inplace=True),

            nn.Conv2d(128, 256, kernel_size=3, stride=1, padding_mode='zeros'),
            nn.BatchNorm1d(22, track_running_stats=True),
            nn.ReLU(inplace=True),

            nn.Conv2d(256, 512, kernel_size=3, stride=1, padding_mode='zeros'),
            nn.BatchNorm1d(20, track_running_stats=True),
            nn.ReLU(inplace=True),

            nn.Conv2d(512, 1024, kernel_size=3, stride=1, padding_mode='zeros'),
            nn.BatchNorm1d(18, track_running_stats=True),
            nn.ReLU(inplace=True),

            nn.AdaptiveAvgPool2d((1, 1)),

            nn.Flatten()
        )

        # Setting up the Fully Connected Layers
        self.fc1 = nn.Sequential(
            nn.Linear(1024, 64),
            nn.ReLU(inplace=True),

            nn.Linear(64, 10),
            nn.ReLU(inplace=True)
        )

    def forward_once(self, x):
        # This function will be called for both images
        # Its output is used to determine the similarity
        # x = x.reshape(1, 28, 28)
        output = self.cnn1(x)
        output = torch.transpose(output, 0, 1)
        # output = output.view(output.size()[0], -1)
        output = self.fc1(output)
        return output

    def forward(self, input1):
        # In this function we pass in both images and obtain both vectors
        # which are returned
        output1 = self.forward_once(input1)
        # output2 = self.forward_once(input2)

        return output1  # , output2

net = CNN().to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=0.0005)
```
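Tracing one batch through `cnn1` shows where the `ValueError` reported below comes from: `nn.Conv2d` emits a 4D tensor `[N, C, H, W]`, but `nn.BatchNorm1d` only accepts 2D or 3D input, so the very first `BatchNorm1d(26)` raises. A second issue hides in `forward_once`: after `AdaptiveAvgPool2d((1, 1))` and `Flatten` the output is `[N, 1024]`, and `torch.transpose(output, 0, 1)` turns it into `[1024, N]`, handing `nn.Linear(1024, 64)` the batch dimension as if it were the feature dimension. A quick check (a sketch, not from the question):

```python
import torch
import torch.nn as nn

x = torch.randn(32, 1, 28, 28)                    # one batch, as printed above
x = nn.Conv2d(1, 64, kernel_size=3, stride=1)(x)
print(x.shape)                                    # torch.Size([32, 64, 26, 26]), 4D
# nn.BatchNorm1d accepts only 2D [N, C] or 3D [N, C, L] input, so
# nn.BatchNorm1d(26)(x) raises:
#   ValueError: expected 2D or 3D input (got 4D input)
# nn.BatchNorm2d(64) is the 4D counterpart (its argument is the channel count).
```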

The training loop:

```python
counter = []
loss_history = []
iteration_number = 0
from datetime import datetime
from torch.utils.tensorboard import SummaryWriter

n_total_steps = len(train_dataloader)
for epoch in range(8):

    # Iterate over batches
    i = 0
    net.train(True)
    for i, (img, label) in enumerate(train_dataloader, 0):

        # for i in range(len(img)):
        # Zero the gradients
        # Pass in the two images into the network and obtain two outputs
        label = label.to(device)
        # img = img.reshape(32, 1, 28, 28)
        img = img.float()
        img = img.to(device)
        # label = label.float()
        output1 = net(img)
        # Pass the outputs of the networks and label into the loss function
        loss_contrastive = criterion(output1, label)

        optimizer.zero_grad()
        # Calculate the backpropagation
        loss_contrastive.backward()

        # Optimize
        optimizer.step()

        # Every 10 batches print out the loss
        if i % 10 == 0:
            print(f"Epoch number {epoch}\n Current loss {loss_contrastive.item()}\n")
            iteration_number += 10

            counter.append(iteration_number)
            loss_history.append(loss_contrastive.item())
        i += 1
```

It raises the error `ValueError: expected 2D or 3D input (got 4D input)`.

Here is the full Colab link if that is easier to follow (view-only).

I built the training loop the way I have seen it done in other PyTorch examples, but I cannot see where it goes wrong.

pytorch dataset pytorch-dataloader
1 Answer (0 votes)

I tried using a batch size of 1 and the model did train, but the loss barely changed. As far as I understand, the only effect should be that it learns a bit more slowly, yet I have used this model setup in other training runs and training proceeded as expected with a normal batch size.
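For reference, here is a minimal sketch of how the layer stack could be rewritten to match the traceback. This is one reading of the problem rather than a confirmed fix: `nn.BatchNorm2d` with the Conv2d channel count replaces `nn.BatchNorm1d` with the spatial size, the transpose is dropped so `nn.Linear` receives `[N, features]`, and the final `ReLU` goes away because `nn.CrossEntropyLoss` expects raw logits (a ReLU on the logits would also help explain a loss that barely moves).

```python
import torch
import torch.nn as nn

class CNNFixed(nn.Module):           # hypothetical name for this sketch
    def __init__(self):
        super().__init__()
        self.cnn1 = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=3, stride=1),
            nn.BatchNorm2d(64),      # channel count, not spatial size
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=1),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            # ... the 256/512/1024 blocks would follow the same pattern ...
            nn.AdaptiveAvgPool2d((1, 1)),
            nn.Flatten(),            # -> [N, 128] for the blocks shown
        )
        self.fc1 = nn.Sequential(
            nn.Linear(128, 64),      # 128 = channels left after the blocks shown
            nn.ReLU(inplace=True),
            nn.Linear(64, 10),       # raw logits for nn.CrossEntropyLoss
        )

    def forward(self, x):
        out = self.cnn1(x)           # [N, 128]; no transpose
        return self.fc1(out)
```

With this shape flow, `CNNFixed()(torch.randn(32, 1, 28, 28))` returns a `[32, 10]` tensor, one logit vector per image in the batch, so the batch dimension is preserved end to end.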
