I have a two-camera setup, and I use OpenCV triangulation to get the 3D coordinates of an object seen by both cameras, but I am getting some strange results.
Here are the steps I take:
_, left_camera_intrinsic_matrix, left_camera_distortion_coefficients, _, _ = cv2.calibrateCamera(
    obj_points_left, img_points_left, left_shape, None, None)
_, right_camera_intrinsic_matrix, right_camera_distortion_coefficients, _, _ = cv2.calibrateCamera(
    obj_points_right, img_points_right, right_shape, None, None)
_, _, _, _, _, rot, trans, _, _ = cv2.stereoCalibrate(
    obj_points, img_points_left, img_points_right,
    left_camera_intrinsic_matrix, left_camera_distortion_coefficients,
    right_camera_intrinsic_matrix, right_camera_distortion_coefficients,
    left_shape)
left_camera_rotation, _ = cv2.Rodrigues(rot)
left_camera_position = trans
right_camera_position = (0, 0, 0)
right_camera_rotation = (0, 0, 0)
right_projection_matrix, _ = cv2.Rodrigues(right_camera_rotation)
right_projection_matrix = np.hstack(
    (right_projection_matrix, np.array(right_camera_position).reshape(-1, 1)))
left_projection_matrix = np.hstack((rot, left_camera_position.reshape(-1, 1)))
_, _, rectified_right_prj_mtx, rectified_left_prj_mtx, _, _, _ = cv2.stereoRectify(
    right_camera_intrinsic_matrix, right_camera_distortion_coefficients,
    left_camera_intrinsic_matrix, left_camera_distortion_coefficients,
    left_shape, rot, left_camera_position)
right = cv2.undistortPoints(right_points, right_camera_intrinsic_matrix,
                            right_camera_distortion_coefficients, rectified_right_prj_mtx)
left = cv2.undistortPoints(left_points, left_camera_intrinsic_matrix,
                           left_camera_distortion_coefficients, rectified_left_prj_mtx)
# find 3d points through triangulation
homogeneous_points = cv2.triangulatePoints(
    right_projection_matrix, left_projection_matrix, right, left)
points_3d = cv2.convertPointsFromHomogeneous(homogeneous_points.T)
This almost works, but there are a few problems.
Here is an example GIF in which I move the checkerboard (6x4) toward the right camera and back.
Some additional notes:
What I have tried so far (none of it helped, and some of it made things worse):
SOLVED
So there were quite a few errors in the code above. As user Micka pointed out, the stereoRectify call was slightly wrong and, in fact, not needed at all.
To help future beginners like me, I merged my working code into a single file and put it on GitHub. Sample code like that could have saved me 40 hours of debugging and confusion, so I really hope it helps someone down the line.