2D to 3D projection with OpenCV gives high error


The goal of my project is to convert some 2D/image points into 3D/world coordinates.

To determine the image coordinates, I measured the exact real-world positions (in meters/centimeters) of points visible in the image. See the image below:

[Image: all image points are highlighted in red]

We assume the surface is flat, so z = 0 in world coordinates. This lets us recover x and y in 3D space while ignoring the z axis.

I use the following code, which consists mostly of OpenCV functions.

import numpy as np
import cv2

# calibration done with cv2.calibrateCamera to get the camera matrix and distortion params
dist_params =  np.array([-2.80467218e-01, 6.67589890e-02, 9.79684e-05, 7.560530e-04, 0])
cam_matrix = np.array([
    [880.27, 0, 804.05388],
    [0.0, 877.2202, 431.85688],
    [0.0, 0.0, 1.0],
    ])
        
# values are in meters
world_points_real = np.array([ # x, y, z
                        
    [4.92,   0.0, 0.0],    
    [4.92,  -1.2, 0.0],  
    [4.92, -2.44, 0.0],
    [4.92, -4.87, 0.0],
    [4.62,  5.66, 0.0],
], dtype=np.float32).reshape((-1,3))


img_points =  np.array([ # u, v
    [  932, 587],
    [ 1068, 593],
    [ 1196, 593],
    [ 1313, 595],
    [  305, 537],
], dtype=np.float32).reshape((-1,2))



# find rvecs and tvecs using OpenCV solvePnP methods
ret, rvecs, tvecs, inliers = cv2.solvePnPRansac(world_points_real, img_points, cam_matrix, dist_params)  # flags=cv2.SOLVEPNP_ITERATIVE


# # project 3d points to 2d
img_points_project, jac = cv2.projectPoints(np.array([[4.62, 5.66, 0.0]]), rvecs, tvecs, cam_matrix, dist_params)
print("img_points:", img_points_project) # this should be [  305, 537]

# gives  (-215:Assertion failed) Q.size() == Size(4,4) 
# cv2.reprojectImageTo3D(img_points, r_and_t_vec)


# We assume a flat surface; ie z=0, to do 2d to 3d projection.                       

# Convert the rotation vector to a 3x3 rotation matrix
rvecs_rod, _ = cv2.Rodrigues(rvecs)

# build the (3,4) extrinsic matrix [R|t]
r_and_t_vec = np.zeros((3,4))
r_and_t_vec[:,:-1] = rvecs_rod
r_and_t_vec[:,3] = tvecs.reshape(-1)


# find scaling factor:
# [R|t] times a known world coordinate point [x, y, z, 1]
scaling_factor = np.dot(r_and_t_vec, np.array([4.92, 0.0, 0.0, 1]).reshape((4, 1)))


# drop r3: since z = 0, remove the third column of [R|t]
r_and_t_vec_nor3 = np.delete(r_and_t_vec, 2, 1)

# Homography matrix: K @ [r1 r2 t]
mat2 = np.dot(cam_matrix, r_and_t_vec_nor3)

# Invert it to map pixels back to the z = 0 plane
inv_mat2 = np.linalg.inv(mat2)

for i in range(len(img_points)):

    # 2D point in homogeneous coordinates
    uv1 = np.array([img_points[i][0], img_points[i][1], 1])

    # multiply with uv1 and rescale
    result2 = np.dot(inv_mat2, uv1) * scaling_factor[2]

    print("world_points:", result2)  # this should match world_points_real

However, the projection is off by 2-3 meters. The ChatGPT solution could only do 3D-to-2D projection with cv2.projectPoints.

I tried using cv2.solvePnP to get rvecs and tvecs, then used them to build a (3, 4) projection matrix. I use the inverse of that projection matrix to obtain the 3D projection of a given 2D image point.

I expected the projected 3D points to be close to world_points_real, but they are 2-3 meters off. I tried more points, with no improvement. Where does the error come from?

python opencv computer-vision camera-calibration robotics