torch/reference/detection/utils.MetricLogger.log_every(..) yields 3 tensors but 2 were expected


torch v0.3.0,

Hi everyone, I'm trying to do transfer learning with Mask R-CNN when this exception occurs:

engine.py", line 26, in train_one_epoch
    for images, targets in metric_logger.log_every(data_loader, print_freq, header):
ValueError: too many values to unpack (expected 2)

I looked through the source of the vision repo; here is a snippet of the log_every(...) function as it appears in pytorch's repo:

    def log_every(self, iterable, print_freq, header=None):
        i = 0
        if not header:
            header = ''
        start_time = time.time()
        end = time.time()
        iter_time = SmoothedValue(fmt='{avg:.4f}')
        data_time = SmoothedValue(fmt='{avg:.4f}')
        space_fmt = ':' + str(len(str(len(iterable)))) + 'd'
        log_msg = self.delimiter.join([
            header,
            '[{0' + space_fmt + '}/{1}]',
            'eta: {eta}',
            '{meters}',
            'time: {time}',
            'data: {data}',
            'max mem: {memory:.0f}'
        ])
        MB = 1024.0 * 1024.0
        for obj in iterable:
            data_time.update(time.time() - end)
            yield obj
            iter_time.update(time.time() - end)
            if i % print_freq == 0 or i == len(iterable) - 1:
                eta_seconds = iter_time.global_avg * (len(iterable) - i)
                eta_string = str(datetime.timedelta(seconds=int(eta_seconds)))
                print(log_msg.format(
                    i, len(iterable), eta=eta_string,
                    meters=str(self),
                    time=str(iter_time), data=str(data_time),
                    memory=torch.cuda.max_memory_allocated() / MB))
            i += 1
            end = time.time()
        total_time = time.time() - start_time
        total_time_str = str(datetime.timedelta(seconds=int(total_time)))
        print('{} Total time: {} ({:.4f} s / it)'.format(
            header, total_time_str, total_time / len(iterable)))
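Note that log_every is just a generator wrapper: it yields each batch unchanged, so the shape of obj is determined entirely by the DataLoader and its collate_fn, not by log_every itself. A stripped-down sketch (timing and printing removed) makes this visible:

```python
# Stripped-down sketch of log_every: it passes batches through untouched,
# so any unpacking error comes from the batch structure, not this function.
def log_every(iterable):
    for obj in iterable:
        yield obj  # yields each batch exactly as the DataLoader produced it

# A hypothetical batch with three components instead of two:
batches = [(("img0", "img1"), ({"boxes": []}, {"boxes": []}), ("extra", "extra"))]
for obj in log_every(batches):
    print(len(obj))  # 3 -> `for images, targets in ...` cannot unpack this
```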

The yielded obj seems to be 3 tensors; they are tuples, each containing a pair of 2D arrays corresponding to the frames.
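This matches the error message: the reference collate_fn in torchvision's detection utils is, as far as I can tell, tuple(zip(*batch)), so if a dataset's __getitem__ returns 3 values, every batch becomes a 3-tuple and unpacking into (images, targets) fails. A minimal reproduction with stand-in values (the 3-item samples here are hypothetical, for illustration only):

```python
# Same logic as torchvision's reference utils.collate_fn:
def collate_fn(batch):
    return tuple(zip(*batch))

# Hypothetical samples where __getitem__ returned 3 items instead of 2:
batch = [("img0", {"boxes": [[0, 0, 1, 1]]}, "extra0"),
         ("img1", {"boxes": [[2, 2, 3, 3]]}, "extra1")]

collated = collate_fn(batch)
print(len(collated))  # 3 components, one per position in the sample tuple

try:
    images, targets = collated  # expects exactly 2 components
except ValueError as e:
    print(e)  # too many values to unpack (expected 2)
```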

Here is my training code:

    dataset_train = hd.HelmetDataset(DATA_PATH, is_train=True, transform=get_transform(train=True))
    dataset_test = hd.HelmetDataset(DATA_PATH, is_train=False, transform=get_transform(train=False))


    # define training and validation data loaders
    data_loader_train = torch.utils.data.DataLoader(
        dataset_train, batch_size=2, num_workers=4,
        collate_fn=utils.collate_fn)

    data_loader_test = torch.utils.data.DataLoader(
        dataset_test, batch_size=1,  num_workers=4,
        collate_fn=utils.collate_fn)

    device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')


    # BG + 3 classes
    num_classes = 1 + 3
    # get the model using our helper function
    model = build_model(num_classes)
    # move model to the right device
    model.to(device)

    # freeze all layers except the mask rcnn predictor and fast rcnn predictor
    for name, param in model.named_parameters():
        param.requires_grad = "mask_predictor" in name or "box_predictor" in name

    # construct an optimizer
    params = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(params,
                                lr=0.0001,
                                momentum=0.9,
                                weight_decay=0.0005)
    # and a learning rate scheduler
    lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                                   step_size=3,
                                                   gamma=0.1)

    # let's train it for 10 epochs
    num_epochs = 10

    for epoch in range(num_epochs):
        # train for one epoch, printing every 10 iterations
        train_one_epoch(model, optimizer, data_loader_train, device, epoch, print_freq=10)
        # update the learning rate
        lr_scheduler.step()
        # evaluate on the test dataset
        evaluate(model, data_loader_test, device=device)
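For reference, this is the two-value shape I understand the reference engine expects from __getitem__; the sketch below uses plain lists as stand-ins for tensors, and the dataset/field names are illustrative, not from my actual HelmetDataset:

```python
# Same logic as torchvision's reference utils.collate_fn:
def collate_fn(batch):
    return tuple(zip(*batch))

# Hypothetical __getitem__ returning exactly (image, target):
def getitem(idx):
    image = [[0.0] * 4] * 4                              # stand-in for an image tensor
    target = {"boxes": [[0, 0, 2, 2]], "labels": [1]}    # stand-in target dict
    return image, target                                 # exactly two values

images, targets = collate_fn([getitem(0), getitem(1)])   # unpacks cleanly
print(len(images), len(targets))  # 2 2
```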

Thanks in advance!

python pytorch torch vision torchvision