AttributeError: 'tuple' object has no attribute 'size' when using smp.Unet with aux_params in segmentation_models_pytorch

Problem description — votes: 0, answers: 2

I'm working on a project that involves semantic segmentation using the smp (segmentation_models_pytorch) library in Python. I'm trying to train a UNet model with auxiliary parameters using the smp.Unet class. However, when I add the aux_params argument to the smp.Unet constructor, I get the following error:

File ".../train_model.py", line 176, in &lt;module&gt;
    main()
File ".../train_model.py", line 173, in main
    water_seg_model.train(epoch_number=100)
File ".../train_model.py", line 153, in train
    train_logs = self.train_epoch.run(self.train_loader)
File .../python3.11/site-packages/segmentation_models_pytorch/utils/train.py:51, in Epoch.run(self, dataloader)
     49 for x, y in iterator:
     50     x, y = x.to(self.device), y.to(self.device)
---> 51     loss, y_pred = self.batch_update(x, y)
     53     # update loss logs
     54     loss_value = loss.cpu().detach().numpy()
...
-> 3162 if not (target.size() == input.size()):
   3163     raise ValueError("Target size ({}) must be the same as input size ({})".format(target.size(), input.size()))
   3165 return torch.binary_cross_entropy_with_logits(input, target, weight, pos_weight, reduction_enum)
AttributeError: 'tuple' object has no attribute 'size'

Here is a simplified version of my code:

ENCODER = 'resnet34'
ENCODER_WEIGHTS = 'imagenet'
CLASSES = ['cats']
ACTIVATION = None
DROPOUT = 0.5
POOLING = 'avg'
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
THRESHOLD = 0.9
LEARNING_SPEED = 0.001


AUX_PARAMS = dict(
    classes=len(CLASSES),
    dropout=DROPOUT,
    activation=ACTIVATION,
    pooling=POOLING
)

class SegmentationModel():
    def __init__(self):
        self.model = smp.Unet(
            encoder_name=ENCODER,
            encoder_weights=ENCODER_WEIGHTS,
            in_channels=3,
            classes=len(CLASSES),
            aux_params=AUX_PARAMS
        )
        self.preprocessing_fn = smp.encoders.get_preprocessing_fn(ENCODER, ENCODER_WEIGHTS)

        self.loss = smp.losses.SoftBCEWithLogitsLoss()
        self.loss.__name__ = 'SoftBCEWithLogitsLoss'

        self.metrics = [
            smp.utils.metrics.IoU(threshold=THRESHOLD),
        ]

        self.optimizer = torch.optim.Adam([ 
            dict(params=self.model.parameters(), lr=0.0001), 
        ])

        self.train_epoch = smp.utils.train.TrainEpoch(
            self.model, 
            loss=self.loss, 
            metrics=self.metrics, 
            optimizer=self.optimizer,
            device=DEVICE,
            verbose=True,
        )
        self.dataset = Dataset(
            self.images_train_dir, 
            self.masks_train_dir, 
            augmentation=get_training_augmentation(), 
            preprocessing=get_preprocessing(self.preprocessing_fn),
            classes=['cats'],
        )
        self.train_loader = DataLoader(self.train_dataset, batch_size=16, shuffle=True, num_workers=6)

    def train(self, epoch_number: int = 10):
        for i in range(0, epoch_number):
            print('\nEpoch: {}'.format(i))
            train_logs = self.train_epoch.run(self.train_loader)

def main():
    cats_seg_model = SegmentationModel()
    cats_seg_model.train(epoch_number=100)

What could be causing the "'tuple' object has no attribute 'size'" error when using the aux_params argument with smp.Unet? How should I initialize the smp.Unet model with the aux_params dict to avoid this error?

Any help or insight on this issue would be greatly appreciated. Thanks!

python pytorch image-segmentation attributeerror
2 Answers
0 votes

self.train_dataset is never defined — in __init__ the dataset is assigned to self.dataset, but the DataLoader line references self.train_dataset.
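
Concretely, unifying the two attribute names fixes that line. The sketch below is a fragment of the question's __init__ (it reuses the question's Dataset, DataLoader, and helper functions, so it is not runnable on its own):

```python
# In SegmentationModel.__init__, store the dataset under the same name
# that the DataLoader reads (was: self.dataset = Dataset(...)):
self.train_dataset = Dataset(
    self.images_train_dir,
    self.masks_train_dir,
    augmentation=get_training_augmentation(),
    preprocessing=get_preprocessing(self.preprocessing_fn),
    classes=['cats'],
)
self.train_loader = DataLoader(self.train_dataset, batch_size=16,
                               shuffle=True, num_workers=6)
```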


0 votes

From the smp docs: all models support the aux_params parameter, which defaults to None. If aux_params = None, no auxiliary classification output is created; otherwise the model produces not only a mask but also a label output of shape NC. The classification head consists of GlobalPooling -> Dropout (optional) -> Linear -> Activation (optional) layers and can be configured through aux_params as follows:

aux_params=dict(
    pooling='avg',             # one of 'avg', 'max'
    dropout=0.5,               # dropout ratio, default is None
    activation='sigmoid',      # activation function, default is None
    classes=4,                 # define number of output labels
)
model = smp.Unet('resnet34', classes=4, aux_params=aux_params)
mask, label = model(x)
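
This is exactly what happens in the question's setup: with aux_params set, model(x) returns a (mask, label) tuple, smp's stock TrainEpoch passes that tuple straight into the loss, and the loss calls .size() on it. A torch-free illustration of the failing step:

```python
mask, label = object(), object()  # stand-ins for the mask and label tensors
output = (mask, label)            # what model(x) returns when aux_params is set

# binary_cross_entropy_with_logits effectively calls input.size(),
# but a plain tuple has no such method:
try:
    output.size()
except AttributeError as exc:
    print(exc)  # 'tuple' object has no attribute 'size'
```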

So a possible solution, or at least a workaround, is to create a new Epoch class that handles the extra label output:

from segmentation_models_pytorch.utils import train

class TrainEpochWithAUX(train.Epoch):
    def __init__(self, model, loss, metrics, optimizer, device="cpu", verbose=True):
        super().__init__(
            model=model,
            loss=loss,
            metrics=metrics,
            stage_name="train",
            device=device,
            verbose=verbose,
        )
        self.optimizer = optimizer

    def on_epoch_start(self):
        self.model.train()

    def batch_update(self, x, y):
        self.optimizer.zero_grad()
        prediction, label = self.model.forward(x) # added label here
        loss = self.loss(prediction, y)
        loss.backward()
        self.optimizer.step()
        return loss, prediction
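
The essential change is the unpacking in batch_update: the (mask, label) tuple is split before the loss ever sees it. A library-free sketch of the same pattern (StubModel and loss_fn are stand-ins I made up, not smp classes):

```python
class StubModel:
    """Mimics smp.Unet with aux_params: forward returns (mask, label)."""
    def forward(self, x):
        mask = [2.0 * v for v in x]  # pretend segmentation logits
        label = sum(x)               # pretend classification logit
        return mask, label

def loss_fn(prediction, target):
    """Stand-in loss; crucially, it only ever receives the mask."""
    return sum((p - t) ** 2 for p, t in zip(prediction, target))

def batch_update(model, x, y):
    prediction, label = model.forward(x)  # unpack; label is unused here
    return loss_fn(prediction, y), prediction

loss, pred = batch_update(StubModel(), [1.0, 2.0], [2.0, 4.0])
print(loss, pred)  # 0.0 [2.0, 4.0]
```

With this class in place, smp.utils.train.TrainEpoch in the question's SegmentationModel.__init__ would be swapped for TrainEpochWithAUX; a matching ValidEpoch subclass would presumably be needed for validation.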