I want to compare two images and find the position of the template in the second image. The first image looks like this:
But when I run my code, I get the following result:
How can I improve the result? Thanks in advance.
import cv2
import numpy as np
from imutils.object_detection import non_max_suppression
import matplotlib.pyplot as plt

template = cv2.imread("euro.jpg", cv2.IMREAD_COLOR)
template = cv2.resize(template, (99, 99))
img = cv2.imread("euro_2024.jpg", cv2.IMREAD_COLOR)
img2 = img.copy()
# image = cv2.resize(image, (700, 700))
h, w = template.shape[:2]

# All the 6 methods for comparison in a list
methods = ['cv2.TM_CCOEFF', 'cv2.TM_CCOEFF_NORMED', 'cv2.TM_CCORR',
           'cv2.TM_CCORR_NORMED', 'cv2.TM_SQDIFF', 'cv2.TM_SQDIFF_NORMED']

for meth in methods:
    img = img2.copy()
    method = eval(meth)

    # Apply template matching
    res = cv2.matchTemplate(img, template, method)
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)

    # If the method is TM_SQDIFF or TM_SQDIFF_NORMED, take the minimum
    if method in [cv2.TM_SQDIFF, cv2.TM_SQDIFF_NORMED]:
        top_left = min_loc
    else:
        top_left = max_loc
    bottom_right = (top_left[0] + w, top_left[1] + h)

    cv2.rectangle(img, top_left, bottom_right, 255, 2)

    plt.subplot(121), plt.imshow(res, cmap='gray')
    plt.title('Matching Result'), plt.xticks([]), plt.yticks([])
    plt.subplot(122), plt.imshow(img, cmap='gray')
    plt.title('Detected Point'), plt.xticks([]), plt.yticks([])
    plt.suptitle(meth)
    plt.show()
That way the matching function focuses only on the ball and not on any background noise.
You should also convert the template to grayscale to make it easier to discriminate. You can see this done in "OpenCV-Python Tutorials / Image Processing in OpenCV / Template Matching":
template_gray = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
cropped_template = template_gray[y:y+h, x:x+w]
matchTemplate can take a mask as an optional argument, which tells it to consider only the white parts of the mask during matching. So you can create a mask for the cropped template and pass it to matchTemplate, so that only the ball is considered while matching:
mask = np.zeros(cropped_template.shape, dtype=np.uint8)
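To make the effect of the mask concrete, here is a pure-NumPy sketch (not OpenCV's exact formula, and with made-up 2x2 pixel values) of scoring one candidate window with a squared-difference measure — the masked score ignores the pixel where the mask is black:

```python
import numpy as np

template = np.array([[10, 10], [10, 10]], dtype=float)
window   = np.array([[10, 99], [10, 10]], dtype=float)   # one "background" pixel differs
mask     = np.array([[255, 0], [255, 255]], dtype=float)  # black out the noisy pixel

sqdiff_unmasked = ((window - template) ** 2).sum()
sqdiff_masked = (((window - template) * (mask / 255)) ** 2).sum()
print(sqdiff_unmasked)  # 7921.0 -- the background pixel dominates the score
print(sqdiff_masked)    # 0.0    -- the masked-out pixel no longer hurts the match
```

In the masked score the noisy background pixel no longer penalizes the match, which is exactly why the mask helps here.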
numpy.zeros creates an array of zeros with the same dimensions as cropped_template (the grayscale template cropped down to the ball).
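A tiny sketch of that step, using a hypothetical 50x50 crop:

```python
import numpy as np

# Hypothetical 50x50 grayscale crop of the ball
cropped_template = np.full((50, 50), 128, dtype=np.uint8)

# The mask starts out all black (zeros), same height/width as the crop
mask = np.zeros(cropped_template.shape, dtype=np.uint8)
print(mask.shape)  # (50, 50)
print(mask.max())  # 0 -- no white pixels yet
```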
Then draw a white circle on the mask where the ball is:
cv2.circle(mask, (mask.shape[1]//2, mask.shape[0]//2), radius, 255, -1)
radius should be set to match the size of the object of interest in the template image; the circle function fills the circle with the value 255, creating a white region in the mask.
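If you want to sanity-check what cv2.circle produces without OpenCV, an equivalent circular mask can be built in plain NumPy (a sketch, assuming a hypothetical 50x50 template):

```python
import numpy as np

h, w = 50, 50          # hypothetical template size
radius = w // 2
yy, xx = np.ogrid[:h, :w]

# 255 (white) inside the centred circle, 0 (black) elsewhere
inside = (xx - w // 2) ** 2 + (yy - h // 2) ** 2 <= radius ** 2
mask = np.where(inside, 255, 0).astype(np.uint8)
print(mask[h // 2, w // 2])  # 255 -- the centre is white
print(mask[0, 0])            # 0   -- the corner is black
```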
Your code would then be:
import cv2
import numpy as np
import matplotlib.pyplot as plt
template = cv2.imread("path_to_cropped_template.jpg")
template_gray = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
x, y, w, h = 100, 100, 50, 50 # Replace with actual values
cropped_template = template_gray[y:y+h, x:x+w]
# Create a mask for the cropped template
mask = np.zeros(cropped_template.shape, dtype=np.uint8)
cv2.circle(mask, (w//2, h//2), w//2, (255), -1)
search_img = cv2.imread("path_to_search_image.jpg")
search_img_gray = cv2.cvtColor(search_img, cv2.COLOR_BGR2GRAY)
res = cv2.matchTemplate(search_img_gray, cropped_template, cv2.TM_CCOEFF_NORMED, mask=mask)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
top_left = max_loc
bottom_right = (top_left[0] + w, top_left[1] + h)
cv2.rectangle(search_img, top_left, bottom_right, 255, 2)
plt.imshow(cv2.cvtColor(search_img, cv2.COLOR_BGR2RGB))
plt.show()
Again, the x, y, w, h in the cropping step should be the actual coordinates and size of the ball in your template image.
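As a quick illustration of the crop itself, with made-up coordinates:

```python
import numpy as np

# Hypothetical 200x200 grayscale template image
template_gray = np.zeros((200, 200), dtype=np.uint8)

# Crop a 50x50 region whose top-left corner is at (x, y) = (100, 100)
x, y, w, h = 100, 100, 50, 50
cropped_template = template_gray[y:y+h, x:x+w]
print(cropped_template.shape)  # (50, 50)
```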
If you expect multiple matches, you can also run the detections through non_max_suppression:
import cv2
import numpy as np
import matplotlib.pyplot as plt
from imutils.object_detection import non_max_suppression

template = cv2.imread('path_to_cropped_template.jpg', cv2.IMREAD_COLOR)
img = cv2.imread('path_to_search_image.jpg', cv2.IMREAD_COLOR)
template_gray = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

mask = np.zeros(template_gray.shape, dtype=np.uint8)
cv2.circle(mask, (mask.shape[1] // 2, mask.shape[0] // 2), mask.shape[1] // 2, 255, -1)
h, w = template_gray.shape[:2]

# Match template using the mask
res = cv2.matchTemplate(img_gray, template_gray, cv2.TM_CCOEFF_NORMED, mask=mask)

# TM_CCOEFF_NORMED: higher is better, so collect every location above a threshold
# (for TM_SQDIFF / TM_SQDIFF_NORMED you would keep the low values instead)
threshold = 0.8
ys, xs = np.where(res >= threshold)
rects = np.array([(x, y, x + w, y + h) for (x, y) in zip(xs, ys)])

# Apply non-maximum suppression to merge overlapping bounding boxes
pick = non_max_suppression(rects)
for (startX, startY, endX, endY) in pick:
    cv2.rectangle(img, (startX, startY), (endX, endY), (0, 255, 0), 2)

plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.title('Detected Points')
plt.show()
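For reference, the greedy overlap-suppression idea behind imutils' non_max_suppression can be sketched in plain NumPy (a simplified version, not the library's exact implementation):

```python
import numpy as np

def nms_sketch(boxes, overlap_thresh=0.3):
    """Greedy non-maximum suppression over (x1, y1, x2, y2) boxes."""
    if len(boxes) == 0:
        return []
    boxes = np.asarray(boxes, dtype=float)
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1 + 1) * (y2 - y1 + 1)
    order = np.argsort(y2)            # imutils sorts by bottom-right y
    keep = []
    while len(order) > 0:
        i = order[-1]                 # keep the last box in the sorted order
        keep.append(i)
        rest = order[:-1]
        # Overlap of the kept box with every remaining box
        xx1 = np.maximum(x1[i], x1[rest])
        yy1 = np.maximum(y1[i], y1[rest])
        xx2 = np.minimum(x2[i], x2[rest])
        yy2 = np.minimum(y2[i], y2[rest])
        ov_w = np.maximum(0, xx2 - xx1 + 1)
        ov_h = np.maximum(0, yy2 - yy1 + 1)
        overlap = (ov_w * ov_h) / areas[rest]
        order = rest[overlap <= overlap_thresh]
    return boxes[keep].astype(int)

# Two heavily overlapping detections collapse into one box
boxes = [(10, 10, 60, 60), (12, 12, 62, 62), (100, 100, 150, 150)]
print(len(nms_sketch(boxes)))  # 3 boxes in, 2 boxes out
```

Heavily overlapping boxes collapse into a single detection, while well-separated boxes survive — which is why thresholding the match result and then suppressing duplicates gives one clean box per object.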