How to create a binary mask from YOLOv8 segmentation results


I want to segment an image with YOLOv8 and then create a mask for all objects in the image that belong to a particular class.

I came up with this code:

import cv2
import numpy as np
from ultralytics import YOLO

img = cv2.imread('images/bus.jpg')
height, width = img.shape[:2]
model = YOLO('yolov8m-seg.pt')
results = model.predict(source=img.copy(), save=False, save_txt=False)
class_ids = np.array(results[0].boxes.cls.cpu(), dtype="int")
for i in range(len(class_ids)):
    if class_ids[i] == 0:
        empty_image = np.zeros((height, width, 3), dtype=np.uint8)
        res_plotted = results[0][i].plot(boxes=False, img=empty_image)

In the code above, res_plotted is the mask of a single object in RGB format. I want to add all of these images together to build one mask covering every class-0 object (pedestrians in this case).
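
Something like the following accumulation is roughly what I mean (an untested sketch reusing img, results and class_ids from the snippet above):

combined = np.zeros((height, width, 3), dtype=np.uint8)
for i in range(len(class_ids)):
    if class_ids[i] == 0:  # class 0 = person in COCO
        empty_image = np.zeros((height, width, 3), dtype=np.uint8)
        res_plotted = results[0][i].plot(boxes=False, img=empty_image)
        # accumulate the plotted masks; the background stays black
        combined = np.maximum(combined, res_plotted)
# any non-black pixel belongs to a plotted person mask
binary_mask = (combined.max(axis=2) > 0).astype(np.uint8) * 255
cv2.imwrite('people_mask.jpg', binary_mask)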

My questions:

  1. How can I complete this code?
  2. Is there a better way to do this without a loop?
  3. Is there any utility function in the YOLOv8 library that can do this?
python image-segmentation semantic-segmentation yolov8
2 Answers

2 votes

Extract the person segmentations using the bbox classes. You will get an array of shape [channels, w, h] (one channel per detected person). You can then use any over the channel dimension to flatten the multi-channel array into a single-channel one.

import cv2
from ultralytics import YOLO
import numpy as np
import torch


img= cv2.imread('ultralytics/assets/bus.jpg')
model = YOLO('yolov8m-seg.pt')
results = model.predict(source=img.copy(), save=True, save_txt=False, stream=True)
for result in results:
    # get array results (newer ultralytics versions expose these as
    # result.masks.data and result.boxes.data)
    masks = result.masks.masks
    boxes = result.boxes.boxes
    # extract classes
    clss = boxes[:, 5]
    # get indices of results where class is 0 (people in COCO)
    people_indices = torch.where(clss == 0)
    # use these indices to extract the relevant masks
    people_masks = masks[people_indices]
    # merge the per-person masks and scale to 0/255 for visualization
    people_mask = torch.any(people_masks, dim=0).to(torch.uint8) * 255
    # save to file (cv2.imwrite expects an 8-bit image)
    cv2.imwrite(str(model.predictor.save_dir / 'merged_segs.jpg'), people_mask.cpu().numpy())

Input (with bboxes and segmentations) and merged-mask output images:

Everything is computed on the GPU with internal torch operations for best performance.
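
A possible follow-up (my own sketch, not part of the original answer): applying the merged mask back to the original image, e.g. to cut out only the people. Note that the mask tensor typically comes out at the network's input resolution rather than the original image size (unless the prediction is run with retina_masks=True), so the resize below is only an approximation that ignores letterbox padding.

# assumes `img` and `people_mask` from the code above
mask_np = people_mask.cpu().numpy()
# bring the mask to the original image size (approximate: ignores letterbox padding)
mask_np = cv2.resize(mask_np, (img.shape[1], img.shape[0]), interpolation=cv2.INTER_NEAREST)
# keep only the pixels covered by the merged person mask
people_only = cv2.bitwise_and(img, img, mask=mask_np)
cv2.imwrite('people_only.jpg', people_only)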


0 votes

Here is the code I use to extract the masks. See the comments in the code. Any improvements are welcome! Please comment below.

from ultralytics import YOLO
import cv2
import torch
from pathlib import Path

# Load a pretrained YOLOv8n-seg Segment model
model = YOLO("./weights/best.pt")

# Run inference on an image
results = model('./images/img (1).jpg')  # results list

result = results[0]

print(result.names)
# print(result.boxes.xyxy)
# print(result.boxes.conf)
# print(result.boxes.cls)
# print(result.masks.data)

Path("./test_output/").mkdir(parents=True, exist_ok=True)

cv2.imwrite(f"./test_output/original_image.jpg", result.orig_img)

seg_classes = list(result.names.values())
# seg_classes = ["door", "insulator", "wall", "window"]

for result in results:

    masks = result.masks.data
    boxes = result.boxes.data

    clss = boxes[:, 5]
    print("clss")
    print(clss)

    #EXTRACT A SINGLE MASK WITH ALL THE CLASSES
    obj_indices = torch.where(clss != -1)
    obj_masks = masks[obj_indices]
    obj_mask = torch.any(obj_masks, dim=0).to(torch.uint8) * 255  # 8-bit 0/255 mask
    cv2.imwrite('./test_output/all-masks.jpg', obj_mask.cpu().numpy())

    #MASK OF ALL INSTANCES OF A CLASS
    for i, seg_class in enumerate(seg_classes):

        obj_indices = torch.where(clss == i)
        print("obj_indices")
        print(obj_indices)
        obj_masks = masks[obj_indices]
        obj_mask = torch.any(obj_masks, dim=0).to(torch.uint8) * 255

        cv2.imwrite(f'./test_output/{seg_class}s.jpg', obj_mask.cpu().numpy())

        #MASK FOR EACH INSTANCE OF A CLASS
        for j, obj_index in enumerate(obj_indices[0].cpu().numpy()):
            obj_masks = masks[torch.tensor([obj_index])]
            obj_mask = torch.any(obj_masks, dim=0).to(torch.uint8) * 255
            cv2.imwrite(f'./test_output/{seg_class}_{j}.jpg', obj_mask.cpu().numpy())
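
A possible extension (my own sketch, not part of the original answer): collapsing the per-class masks into a single-channel label map in which each pixel stores a class index (0 = background). It assumes the masks, clss and seg_classes variables from the loop above; with a single input image there is only one result, so they are still in scope after the loop.

import numpy as np

# hypothetical label map: 0 = background, i + 1 = the i-th class in seg_classes
label_map = np.zeros(tuple(masks.shape[1:]), dtype=np.uint8)
for i, seg_class in enumerate(seg_classes):
    class_mask = torch.any(masks[torch.where(clss == i)], dim=0).cpu().numpy()
    # later classes overwrite earlier ones where instances overlap
    label_map[class_mask] = i + 1
cv2.imwrite('./test_output/label_map.png', label_map)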