OpenCV image processing

Question:

I am trying to process an image using OpenCV.

python opencv contour edge-detection
2 Answers
0 votes

Here is my approach: first read the image and convert it to grayscale. Use OTSU thresholding to extract the region of interest. After that, find the contours and take the one with the largest area, which should correspond to the object:

import cv2
import numpy as np
import matplotlib.pyplot as plt

im = cv2.imread("example.png") # read the image
imGray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) # convert to gray
imGray = cv2.equalizeHist(imGray) # equalize hist, maybe not necessary
imOTSU = cv2.threshold(imGray, 0, 255, cv2.THRESH_OTSU+cv2.THRESH_BINARY_INV)[1] # get otsu with inner as positive
imOTSUOpen = cv2.morphologyEx(imOTSU, cv2.MORPH_OPEN, np.ones((3,3), np.uint8)) # open to remove small noise
contours, _ = cv2.findContours(imOTSUOpen, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) # get contours
largestContour = max(contours, key = cv2.contourArea) # get the largest
# get X, Y coordinates of the contour points
X, Y = largestContour.T
X = X[0]
Y = Y[0]
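
A small caveat not covered in the answer: the two-value unpacking of cv2.findContours above assumes OpenCV 4.x, where the function returns (contours, hierarchy); OpenCV 3.x returns (image, contours, hierarchy). A version-agnostic sketch could look like this:

# Compatibility sketch: take the last two return values regardless of OpenCV version.
ret = cv2.findContours(imOTSUOpen, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
contours, hierarchy = ret[-2], ret[-1]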

From here I experimented with quantiles and managed to identify the upper-left and lower-right corners, which is all you need for the bounding rectangle:

plt.figure() # new figure
plt.imshow(im) # show image
plt.axvline(min(X)) # draw vertical line at minimum x
plt.axhline(max(Y)) # draw horizontal line at maximum y
upperLeft = (int(np.quantile(X, 0.1)), int(np.quantile(Y, 0.25))) # get quantiles as corner
lowerRight = (int(np.quantile(X, 0.55)), int(np.quantile(Y, 0.9))) # get quantiles as corner
plt.scatter(upperLeft[0], upperLeft[1]) # scatter the upper-left corner
plt.scatter(lowerRight[0], lowerRight[1]) # scatter the lower-right corner

The plot looks like this:

Once you have that, drawing the rectangle is easy:

cv2.rectangle(im, (upperLeft[0], upperLeft[1]), (lowerRight[0], lowerRight[1]), (0, 255, 0), 2) # draw rectangle as green
cv2.imwrite("exampleContoured.png", im)

I would still search Stack Overflow; there should be plenty of examples of highlighting contours, and there are certainly more robust ways to approach this.
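
For instance, a more conventional alternative to the hand-tuned quantile corners (a sketch, not part of the original answer) is to let OpenCV compute the box from largestContour directly:

# Sketch of a more standard approach, assuming largestContour from the code above.
x, y, w, h = cv2.boundingRect(largestContour)             # axis-aligned bounding box
cv2.rectangle(im, (x, y), (x + w, y + h), (0, 0, 255), 2) # draw it in red
rect = cv2.minAreaRect(largestContour)                     # rotated rectangle ((cx, cy), (w, h), angle)
box = np.int32(cv2.boxPoints(rect))                        # its 4 corner points
cv2.drawContours(im, [box], 0, (255, 0, 0), 2)             # draw it in blue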

Edit 1: another variant, which identifies the protrusion and subtracts it from the mask:

import cv2
import numpy as np

im = cv2.imread("example2.png") # read the image
imGray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) # convert to gray
imGray = cv2.equalizeHist(imGray) # equalize hist, maybe not necessary
mask = imGray < 10 # get pixels under 10 (the dark protrusion)
mask = cv2.morphologyEx(mask.astype(np.uint8), cv2.MORPH_OPEN, np.ones((5,5), np.uint8)) # open
mask = cv2.dilate(mask, np.ones((10,10), np.uint8)) # dilate to grow the masked region
imOTSU = cv2.threshold(imGray, 0, 1, cv2.THRESH_OTSU+cv2.THRESH_BINARY_INV)[1] # get otsu with inner as positive
imOTSUOpen = cv2.morphologyEx(imOTSU, cv2.MORPH_OPEN, np.ones((3,3), np.uint8)) # open
imOTSUOpen = cv2.subtract(imOTSUOpen, mask) # remove the mask (saturating subtraction avoids uint8 wrap-around)
contours, _ = cv2.findContours(imOTSUOpen, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) # get contours
largestContour = max(contours, key = cv2.contourArea) # get the largest
imContoured = cv2.drawContours(im.copy(), [largestContour], -1, (0,255,0), 5) # draw contour
x, y, w, h = cv2.boundingRect(largestContour) # get bounding rect
cv2.rectangle(imContoured, (x, y), (x + w, y + h), (0, 0, 255), 2) # draw rectangle
cv2.imwrite("exampleContoured3.png", imContoured) # save image

The result looks like this, with the contour in green and the bounding rectangle in red:

About the code and how it works:

  • Threshold with OTSU as before.
  • Get a mask of the protrusion; its values are low, so it stands out from the other pixels. Apply some morphology to emphasize that region.

You will get the following:

Again, this may not be the best approach. I am not going to check 10-20 images and make sure it works for all of them, because you may well find future images where it does not. Use my answer as a basis for your actual implementation, since you have the domain knowledge and you know the limitations of your device and your task.


0 votes

Well, here is a somewhat odd solution for you:

  • First, you need to use cv2.THRESH_BINARY_INV instead of cv2.THRESH_BINARY, because OpenCV detects contours as white objects on a black background (black-to-white transitions).
  • Second, you will get better results here if you close with a small kernel before opening with a large kernel.

Here is the thresholded image:

Here is the closed image with kernel size 11:

Here is the opened image with kernel size 33:

Now the odd part begins:

After filtering the image, there are still defects caused by noise (look at the top-right corner). On the final filtered image I found the largest-area contour and took its bounding rectangle. Because of the noise at that corner, the bounding rectangle is not a perfect fit. To fit it better:

  • Pick an edge, take 3 points on it, and check whether they are all white pixels in the filtered image; if not, move that edge towards the center and repeat.
  • Repeat this for all four edges and you will get a better-fitting rectangle.

In this image, the red rectangle is the bounding rectangle and the yellow contour is the largest contour, which corresponds to your dice:

In this image, the red rectangle is again the bounding rectangle and the green rectangle is the better-fitting version.

Here is the full code:

import cv2


img = cv2.imread('o7oNM.png')

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (7, 7), 1)

ret, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
cv2.imshow('Thresholded',thresh)

thresh = cv2.morphologyEx(thresh,cv2.MORPH_CLOSE,cv2.getStructuringElement(cv2.MORPH_RECT,(11,11)))
cv2.imshow('Closed',thresh)

thresh = cv2.morphologyEx(thresh,cv2.MORPH_OPEN,cv2.getStructuringElement(cv2.MORPH_RECT,(33,33)))
cv2.imshow('Opened',thresh)


contours,hierarchy = cv2.findContours(thresh,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_NONE)
max_cnt = max(contours,key=cv2.contourArea)
x,y,w,h = cv2.boundingRect(max_cnt)
cv2.rectangle(img,(x,y),(x+w,y+h),(0,0,255),1) #old rect

#Improve top edge
print('Old y:',y)
while not (thresh[y+1][x+w//4] and thresh[y+1][x+2*w//4] and thresh[y+1][x+3*w//4]):
    y = y+1
print('New y:',y)

#Improve bot edge
print('Old h:',h)
while not (thresh[y+h-1][x+w//4] and thresh[y+h-1][x+2*w//4] and thresh[y+h-1][x+3*w//4]):
    h = h-1
print('New h:',h)

#Improve left edge
print('Old x:',x)
while not (thresh[y+h//4][x+1] and thresh[y+2*h//4][x+1] and thresh[y+3*h//4][x+1]):
    x = x+1
print('New x:',x)

#Improve right edge
print('Old w:',w)
while not (thresh[y+h//4][x+w-1] and thresh[y+2*h//4][x+w-1] and thresh[y+3*h//4][x+w-1]):
    w = w-1
print('New w:',w)

cv2.drawContours(img,[max_cnt],-1,(0,255,255))
cv2.rectangle(img,(x,y),(x+w,y+h),(0,255,0),1) #new rect
cv2.imshow('Image',img)
cv2.waitKey()
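
As a design note, the four edge-adjustment loops above repeat the same sampling pattern, so they could be folded into one helper. The sketch below is a hypothetical refactor (the name tighten_box and its signature are not from the original answer); it keeps the original behaviour, including the fact that each edge uses the values already adjusted by the previous loops:

def tighten_box(thresh, x, y, w, h):
    # Shrink each side of the box until 3 sample points along that edge are white in thresh.
    def row_is_white(r):  # sample 3 points along a horizontal edge at row r
        return thresh[r][x + w//4] and thresh[r][x + 2*w//4] and thresh[r][x + 3*w//4]
    def col_is_white(c):  # sample 3 points along a vertical edge at column c
        return thresh[y + h//4][c] and thresh[y + 2*h//4][c] and thresh[y + 3*h//4][c]
    while not row_is_white(y + 1):      # move the top edge down
        y += 1
    while not row_is_white(y + h - 1):  # move the bottom edge up
        h -= 1
    while not col_is_white(x + 1):      # move the left edge right
        x += 1
    while not col_is_white(x + w - 1):  # move the right edge left
        w -= 1
    return x, y, w, h

Calling x, y, w, h = tighten_box(thresh, x, y, w, h) right after cv2.boundingRect would then replace the four explicit loops.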