Mask values in a segmentation task are either 0 or 255. How do I fix this?

I must be doing something seriously wrong with the fast-ai library, since I seem to be the only one running into this problem. Every time I try the learning rate finder or train the network, I get an error. It took me a week just to produce this specific error message, and it finally led me to check the mask values. It turns out the background pixels are 0 and the foreground pixels are 255, which is a problem because I only have two classes. How can I change the 255 values to 1 within my DataBunch object? Is there a way to divide every mask value by 255, or do I need to set this up somewhere beforehand? I'm a bit lost here.
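
For reference, here is roughly how I checked the mask values (the path below is just a placeholder):

    import numpy as np
    from PIL import Image

    # Open a single mask and list its unique pixel values (placeholder path).
    mask = np.array(Image.open('masks/example_mask.png'))
    print(np.unique(mask))  # prints [  0 255] instead of the expected [0 1]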

This is the error message I get:

LR Finder is complete, type {learner_name}.recorder.plot() to see the graph.

---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)

    <ipython-input-20-c7a9c29f9dd1> in <module>()
    ----> 1 learn.lr_find()
          2 learn.recorder.plot()

    8 frames

    /usr/local/lib/python3.6/dist-packages/fastai/train.py in lr_find(learn, start_lr, end_lr, num_it, stop_div, wd)
         39     cb = LRFinder(learn, start_lr, end_lr, num_it, stop_div)
         40     epochs = int(np.ceil(num_it/len(learn.data.train_dl)))
    ---> 41     learn.fit(epochs, start_lr, callbacks=[cb], wd=wd)
         42 
         43 def to_fp16(learn:Learner, loss_scale:float=None, max_noskip:int=1000, dynamic:bool=True, clip:float=None,

    /usr/local/lib/python3.6/dist-packages/fastai/basic_train.py in fit(self, epochs, lr, wd, callbacks)
        198         else: self.opt.lr,self.opt.wd = lr,wd
        199         callbacks = [cb(self) for cb in self.callback_fns + listify(defaults.extra_callback_fns)] + listify(callbacks)
    --> 200         fit(epochs, self, metrics=self.metrics, callbacks=self.callbacks+callbacks)
        201 
        202     def create_opt(self, lr:Floats, wd:Floats=0.)->None:

    /usr/local/lib/python3.6/dist-packages/fastai/basic_train.py in fit(epochs, learn, callbacks, metrics)
         99             for xb,yb in progress_bar(learn.data.train_dl, parent=pbar):
        100                 xb, yb = cb_handler.on_batch_begin(xb, yb)
    --> 101                 loss = loss_batch(learn.model, xb, yb, learn.loss_func, learn.opt, cb_handler)
        102                 if cb_handler.on_batch_end(loss): break
        103 

    /usr/local/lib/python3.6/dist-packages/fastai/basic_train.py in loss_batch(model, xb, yb, loss_func, opt, cb_handler)
         28 
         29     if not loss_func: return to_detach(out), to_detach(yb[0])
    ---> 30     loss = loss_func(out, *yb)
         31 
         32     if opt is not None:

    /usr/local/lib/python3.6/dist-packages/fastai/layers.py in __call__(self, input, target, **kwargs)
        241         if self.floatify: target = target.float()
        242         input = input.view(-1,input.shape[-1]) if self.is_2d else input.view(-1)
    --> 243         return self.func.__call__(input, target.view(-1), **kwargs)
        244 
        245 def CrossEntropyFlat(*args, axis:int=-1, **kwargs):

    /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
        539             result = self._slow_forward(*input, **kwargs)
        540         else:
    --> 541             result = self.forward(*input, **kwargs)
        542         for hook in self._forward_hooks.values():
        543             hook_result = hook(self, input, result)

    /usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py in forward(self, input, target)
        914     def forward(self, input, target):
        915         return F.cross_entropy(input, target, weight=self.weight,
    --> 916                                ignore_index=self.ignore_index, reduction=self.reduction)
        917 
        918 

    /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
       2007     if size_average is not None or reduce is not None:
       2008         reduction = _Reduction.legacy_get_string(size_average, reduce)
    -> 2009     return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
       2010 
       2011 

    /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
       1836                          .format(input.size(0), target.size(0)))
       1837     if dim == 2:
    -> 1838         ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
       1839     elif dim == 4:
       1840         ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)

    RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed.  at /pytorch/aten/src/THNN/generic/ClassNLLCriterion.c:97

And this is how I set up my data:

    data = (SegmentationItemList.from_df(img_df,IMAGE_PATH)
          # import from df in greyscale ('L')
          .split_by_rand_pct(valid_pct=0.15)
      # 85/15 train/validation split
          .label_from_func(get_mask, classes = array(['background','cell']))
          # segmentation mask and classes
          .transform(tfms, tfm_y=True, size=TILE_SHAPE)
          # apply data augmentation
          .databunch(bs=BATCH_SIZE)
          # set batchsize
          .normalize()
    )

Please let me know if you need any more information. I have already tried adding an 'after_open' function, which was supposed to divide by 255, to the 'label_from_func' part. I also know that fast-ai's 'open_image' function has a div attribute that should normalize RGB values to between 0 and 1, but I could not find a comparable attribute for 'label_from_func'.

Edit:

I found this post on the fastai community forum. However, even with those answers I could not solve my problem. I tried adding this snippet to pass div=True into the open_mask function, but it did not work:

    src.train.y.create_func = partial(open_mask, div=True)
    src.valid.y.create_func = partial(open_mask, div=True)

I also tried .set_attr(mask_opener=partial(open_mask, div=True)) after .label_from_func(), but that throws an attribute error: AttributeError: setattr

Still looking for help.

pytorch mask image-segmentation fast-ai
1 Answer

The custom classes below are needed to handle a binary image segmentation dataset where the masks are encoded with 0 and 255:

    # Override how masks are opened: div=True divides the pixel values by 255,
    # so a 0/255 mask is loaded as 0/1 and matches the two classes.
    class SegLabelListCustom(SegmentationLabelList):
        def open(self, fn): return open_mask(fn, div=True)

    # Item list that uses the custom label list above for its masks.
    class SegItemListCustom(SegmentationItemList):
        _label_cls = SegLabelListCustom
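
A quick way to confirm that the masks now load correctly (hypothetical path):

    from fastai.vision import open_mask

    # With div=True the 0/255 mask comes back with values 0 and 1.
    msk = open_mask('masks/example_mask.png', div=True)
    print(msk.data.unique())  # tensor([0, 1])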

Reference link: https://github.com/fastai/fastai/issues/1540

Below is an example of creating the source for the DataBunch with these custom classes:

    src = (SegItemListCustom.from_folder('/home/jupyter/AerialImageDataset/train/')
           .split_by_folder(train='images', valid='validate')
           .label_from_func(get_y_fn, classes=labels))
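
Adapted to the DataFrame-based pipeline from your question, it would look roughly like this (using your img_df, IMAGE_PATH, get_mask, tfms, TILE_SHAPE and BATCH_SIZE as before):

    # Only the item list class changes; the masks are then opened with div=True.
    data = (SegItemListCustom.from_df(img_df, IMAGE_PATH)
            .split_by_rand_pct(valid_pct=0.15)
            .label_from_func(get_mask, classes=array(['background', 'cell']))
            .transform(tfms, tfm_y=True, size=TILE_SHAPE)
            .databunch(bs=BATCH_SIZE)
            .normalize())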

I really hope this helps, since I was struggling with the same problem myself not long ago and this is what solved it for me. It was tricky because many of the answers I found were for earlier versions of the library and no longer work.

Let me know if you need any more explanation or help, since I know how frustrating it can be to get stuck early on.
