After going through several pieces of code, I took a few pictures, found the chessboard corners, and used them to obtain the camera matrix, distortion coefficients, and rotation and translation vectors. Now, can someone tell me which Python OpenCV function I need to calculate a real-world distance from a 2D image? For example, using the chessboard as a reference (see picture): if the tile size is 5 cm, the distance across 4 tiles should come out as 20 cm. I have seen functions like projectPoints, findHomography, and solvePnP, but I am not sure which one I need to solve my problem and obtain the transformation between the camera frame and the chessboard frame.
Setup: a single camera, in the same position in all cases but not exactly above the chessboard; the chessboard is placed on a flat object (a table).
import glob
from os import path

import cv2
import numpy as np

# nx, ny (inner corners per row/column), calib_images_dir and verbose
# are assumed to be defined elsewhere.

# Prepare object points, like (0,0,0), (1,0,0), (2,0,0) ...., (6,5,0)
objp = np.zeros((nx * ny, 3), np.float32)
objp[:, :2] = np.mgrid[0:nx, 0:ny].T.reshape(-1, 2)

# Arrays to store object points and image points from all the images
objpoints = []  # 3d points in real world space
imgpoints = []  # 2d points in image plane

# Make a list of calibration images
images = glob.glob(path.join(calib_images_dir, 'calibration*.jpg'))
print(images)

# Step through the list and search for chessboard corners
for filename in images:
    img = cv2.imread(filename)
    imgScale = 0.5
    newX, newY = img.shape[1] * imgScale, img.shape[0] * imgScale
    res = cv2.resize(img, (int(newX), int(newY)))
    gray = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)

    # Find the chessboard corners
    pattern_found, corners = cv2.findChessboardCorners(gray, (nx, ny), None)

    # If found, add object points and image points (after refining them)
    if pattern_found:
        objpoints.append(objp)

        # Increase accuracy using subpixel corner refinement
        cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1),
                         (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.1))
        imgpoints.append(corners)

        if verbose:
            # Draw and display the corners
            draw = cv2.drawChessboardCorners(res, (nx, ny), corners, pattern_found)
            cv2.imshow('img', draw)
            cv2.waitKey(500)

if verbose:
    cv2.destroyAllWindows()

# Now we have our object points and image points, we are ready for calibration.
# Get the camera matrix, distortion coefficients, rotation and translation vectors.
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
print(mtx)
print(dist)
print('rvecs:', type(rvecs), ' ', len(rvecs), ' ', rvecs)
print('tvecs:', type(tvecs), ' ', len(tvecs), ' ', tvecs)

# Reprojection error as a sanity check on the calibration
mean_error = 0
for i in range(len(objpoints)):
    imgpoints2, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
    error = cv2.norm(imgpoints[i], imgpoints2, cv2.NORM_L2) / len(imgpoints2)
    mean_error += error
print("total error: ", mean_error / len(objpoints))

imagePoints, jacobian = cv2.projectPoints(objpoints[0], rvecs[0], tvecs[0], mtx, dist)
print('Image points: ', imagePoints)
You are indeed on the right track: I think you should use solvePnP to solve this problem. (Read more about the Perspective-n-Point problem here: https://en.wikipedia.org/wiki/Perspective-n-Point.)
The Python OpenCV solvePnP function takes the following parameters and returns an output rotation vector and an output translation vector that transform the model coordinate system into the camera coordinate system:
cv2.solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs[, rvec[, tvec[, useExtrinsicGuess[, flags]]]]) → retval, rvec, tvec
In your case, imagePoints will be the detected chessboard corners, so the call looks like:
ret, rvec, tvec = cv2.solvePnP(objp, corners, mtx, dist)
Using the returned translation vector you can compute the distance from the camera to the chessboard. The translation returned by solvePnP is expressed in the same units as the coordinates you pass in objectPoints, so if objp is scaled by the 5 cm tile size, tvec comes out in centimeters.
Finally, the real-world distance is the Euclidean norm of tvec:
d = math.sqrt(tx*tx + ty*ty + tz*tz)
Your question mainly comes down to camera calibration, and in particular to how well the camera distortion is handled. You have to approximate the distortion function of the camera lens by probing distances at different positions on the chessboard: a good approach is to start with a small distance at the center of the lens, then move one square further out for the next measurement, and repeat this out to the border. That gives you the coefficients of the distortion function. Matlab has its own library that solves this problem with high accuracy, but unfortunately it is very expensive.
Regarding:
Now, can someone tell me which Python OpenCV function I need to calculate a real-world distance from a 2D image?
I think this article gives a good overview of the OpenCV functions needed to produce real-world measurements. As discussed above, you can reach good accuracy once the distortion coefficients are determined. In any case, I don't think there is an out-of-the-box function like
cv2.GetRealDistance(...)