PyTorch: validation accuracy greater than 100% during training

Question · votes: 1 · answers: 1

1) Problem

I am observing strange behavior during training: my validation accuracy is above 100% right from the start.

Epoch 0/3
----------
100%|██████████| 194/194 [00:50<00:00,  3.82it/s]
train Loss: 1.8653 Acc: 0.4796
100%|██████████| 194/194 [00:32<00:00,  5.99it/s]
val Loss: 1.7611 Acc: 1.2939

Epoch 1/3
----------
100%|██████████| 194/194 [00:42<00:00,  4.61it/s]
train Loss: 0.8704 Acc: 0.7467
100%|██████████| 194/194 [00:31<00:00,  6.11it/s]
val Loss: 1.0801 Acc: 1.4694

The output indicates that each epoch iterates over 194 batches, which seems correct for the training data (its length is 6186 and the batch_size is 32, so 32 * 194 = 6208 ≈ 6186) but does not match the size of the validation data (length 3447, batch_size = 32).

I would therefore expect my validation loop to produce 108 batches (3447 / 32 ≈ 108), not 194.
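As a sanity check, the expected batch counts can be computed directly. A minimal sketch, assuming drop_last=False (the DataLoader default), so the final partial batch is kept:

```python
import math

batch_size = 32
# With drop_last=False, the number of batches is the ceiling of
# dataset_size / batch_size, because the last partial batch still counts.
print(math.ceil(6186 / batch_size))  # 194 batches expected for train
print(math.ceil(3447 / batch_size))  # 108 batches expected for val
```

So seeing 194 batches in the val phase means the loader is iterating over 6186 samples, not 3447.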

I believe this behavior is handled in my for loop:

for dataset in tqdm(dataloaders[phase]):

but somehow I can't figure out what is going wrong here. See point 3 below for my entire code.

2) Question

If my assumption above is correct, i.e. that this error originates from the for loop in my code, then I would like to know the following:

How do I need to adjust the for loop during the validation phase so that it processes the correct number of batches for validation?

3) Background:

Following two tutorials, one on how to do transfer learning (https://discuss.pytorch.org/t/transfer-learning-using-vgg16/20653) and one on data loading in PyTorch (https://pytorch.org/tutorials/beginner/data_loading_tutorial.html), I tried to customize the code so that I can perform transfer learning on a new custom dataset that I want to provide via pandas DataFrames.

Accordingly, my training and validation data are provided via two DataFrames (df_train and df_val), both of which contain two columns, one for the path and one for the target. E.g. like this:

    url                                 target
0   C:/Users/aaron/Desktop/pics/4ebd... 9
1   C:/Users/aaron/Desktop/pics/7153... 3
2   C:/Users/aaron/Desktop/pics/3ee6... 3
3   C:/Users/aaron/Desktop/pics/4652... 16
4   C:/Users/aaron/Desktop/pics/28ce... 15
...

Their respective lengths:

print(len(df_train))
print(len(df_val))
>> 6186
>> 3447

My pipeline looks like this:

class CustomDataset(Dataset):
    def __init__(self, df, transform=None):

        self.dataframe = df_train
        self.transform = transform

    def __len__(self):
        return len(self.dataframe)

    def __getitem__(self, idx):
        img_name = self.dataframe.iloc[idx, 0]
        img = Image.open(img_name)
        img_normalized = self.transform(img)

        landmarks = self.dataframe.iloc[idx, 1]
        sample = {'data': img_normalized, 'label': int(landmarks)}

        return sample

train_dataset = CustomDataset(df_train, transform=transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]))

val_dataset = CustomDataset(df_val, transform=transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]))

train_loader = torch.utils.data.DataLoader(train_dataset,batch_size=32,shuffle=True, num_workers=0)
val_loader = torch.utils.data.DataLoader(val_dataset,batch_size=32,shuffle=True, num_workers=0)

dataloaders = {'train': train_loader, 'val': val_loader}
dataset_sizes = {'train': len(df_train) ,'val': len(df_val)}


################### Training

from tqdm import tqdm

def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
    since = time.time()

    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0

    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 10)

        # Each epoch has a training and validation phase
        for phase in ['train', 'val']:
            if phase == 'train':
                scheduler.step()
                model.train()  # Set model to training mode
            else:
                model.eval()   # Set model to evaluate mode

            running_loss = 0.0
            running_corrects = 0

            # Iterate over data.
            for dataset in tqdm(dataloaders[phase]):

                inputs, labels = dataset["data"], dataset["label"]
                #print(inputs.type())
                inputs = inputs.to(device, dtype=torch.float)
                labels = labels.to(device,dtype=torch.long)

                # zero the parameter gradients
                optimizer.zero_grad()

                # forward
                # track history if only in train
                with torch.set_grad_enabled(phase == 'train'):
                    outputs = model(inputs)
                    _, preds = torch.max(outputs, 1)
                    loss = criterion(outputs, labels)

                    # backward + optimize only if in training phase
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()

                # statistics
                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)

            epoch_loss = running_loss / dataset_sizes[phase]
            epoch_acc = running_corrects.double() / dataset_sizes[phase]

            print('{} Loss: {:.4f} Acc: {:.4f}'.format(
                phase, epoch_loss, epoch_acc))

            # deep copy the model
            if phase == 'val' and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())

        print()

    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(
        time_elapsed // 60, time_elapsed % 60))
    print('Best val Acc: {:4f}'.format(best_acc))

    # load best model weights
    model.load_state_dict(best_model_wts)
    return model


device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model_ft = models.resnet18(pretrained=True)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, len(le.classes_))

model_ft = model_ft.to(device)

criterion = nn.CrossEntropyLoss()

# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)

# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)

model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler,
                       num_epochs=4)
python pytorch
1 Answer
2 votes

Your problem seems to be here:

class CustomDataset(Dataset):
    def __init__(self, df, transform=None):
>>>>>        self.dataframe = df_train

This should be

             self.dataframe = df

In your case, you inadvertently set both the train and the val CustomDataset to df_train ...
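This also explains the impossible accuracy numbers: in the 'val' phase, running_corrects is accumulated over the 6186 samples of df_train, but epoch_acc divides by dataset_sizes['val'] = 3447. A quick bit of arithmetic shows the ceiling this bug allows:

```python
# Upper bound on the inflated 'val' accuracy: all 6186 df_train samples
# counted as correct, divided by len(df_val) = 3447.
max_inflated_acc = 6186 / 3447
print(round(max_inflated_acc, 4))  # 1.7946
```

The observed values (1.2939 and 1.4694) sit below this bound, consistent with the diagnosis.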
