Triangulated points are translated and rotated but otherwise look reasonable


I have two different views of a football match. I run a 2D pose estimator on both views to get 2D poses, then I match the corresponding 2D poses across the two cameras (manually for now), so I know which detections correspond. The 3D reconstruction looks very plausible, except that it is rotated and translated in space, and I cannot figure out what is causing this.

Here is my triangulated result:

[Image: my triangulated pose]

To calibrate the cameras I marked known points on the pitch, e.g. the corners, the penalty spot, etc. These are mostly planar points, but I also used the goal's crossbar to get some non-planar points. I tried several ways of estimating the intrinsic parameters; the best result I got was estimating the intrinsic matrix with the cv2.calibrateCamera function using the planar points.

def new_guess(self, image_points, real_worlds):
    # Build an initial intrinsic guess from the planar pitch points only
    planar_img, planar_3d = Homo_est.get_planar_points(image_points, real_worlds)
    mat, ds = self.create_camera_guess(planar_img, planar_3d)
    # mat = self.second_camera_guess()  # alternative guess, best one so far

    # calibrateCamera expects lists of per-view float32 point arrays
    objPts = [np.float32([list(tup) for tup in real_worlds])]
    imgPts = [np.float32([list(tup) for tup in image_points])]

    gray = cv2.cvtColor(self.img, cv2.COLOR_BGR2GRAY)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30000, 0.0001)
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
        objPts, imgPts, gray.shape[::-1], mat, ds,
        flags=cv2.CALIB_USE_INTRINSIC_GUESS, criteria=criteria)
    return mtx, dist
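
create_camera_guess is not shown in the post; purely for illustration, here is a minimal, hypothetical sketch of one common way to seed CALIB_USE_INTRINSIC_GUESS (this is not the author's implementation): a focal length on the order of the image width with the principal point at the image center.

import numpy as np

def initial_intrinsic_guess(img_w, img_h):
    # Hypothetical stand-in for create_camera_guess: seed the focal length
    # from the image size and put the principal point at the image center;
    # calibrateCamera then refines both under CALIB_USE_INTRINSIC_GUESS.
    f = float(img_w)
    K = np.array([[f, 0.0, img_w / 2.0],
                  [0.0, f, img_h / 2.0],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros((5, 1))  # start distortion-free
    return K, dist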


The new_guess function essentially computes my intrinsic parameters. I then feed the computed intrinsic matrix into cv2.solvePnP, from which I build my P matrix. I verify the result with cv2.projectPoints(), where I get errors of at most 10 pixels, and usually less, which I believe is accurate enough?

def calibrate_solvePNP(self, image_points, real_worlds):
    mat, dist = self.new_guess(image_points, real_worlds)

    # solvePnP expects float32 arrays of 3D-2D correspondences
    objPts = np.float32([list(tup) for tup in real_worlds])
    imgPts = np.float32([list(tup) for tup in image_points])
    dist_coeffs = np.zeros((4, 1))

    success, rvec, t = cv2.solvePnP(objPts, imgPts, mat, dist_coeffs)

    # Verification: reproject the 3D points and compare to the clicked 2D points
    res, jac = cv2.projectPoints(objPts, rvec, t, mat, dist_coeffs)

    R, jac = cv2.Rodrigues(rvec)
    Rt = np.concatenate([R, t], axis=-1)  # [R|t]
    P = np.matmul(mat, Rt)                # P = K [R|t]
    return P
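
To make the "at most 10 pixels" figure concrete, a small helper (a sketch, not part of the original code) can turn the cv2.projectPoints output into a mean pixel error:

def mean_reprojection_error(obj_pts, img_pts, K, dist, rvec, tvec):
    # Reproject the 3D calibration points and measure the average pixel
    # distance to the measured 2D points.
    proj, _ = cv2.projectPoints(np.float32(obj_pts), rvec, tvec, K, dist)
    proj = proj.reshape(-1, 2)
    meas = np.float32(img_pts).reshape(-1, 2)
    return np.linalg.norm(proj - meas, axis=1).mean()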

After this, I triangulate my points:

def triangulate_pts(self):
    P1_points = Homo_est.player_view1()  # 2D pose coordinates, view 1
    P2_points = Homo_est.player_view2()  # 2D pose coordinates, view 2
    homo = Homo_est()
    # generate_cameras() essentially calls calibrate_solvePNP and new_guess
    # to produce the projection matrices, intrinsics and distortion coefficients
    P1, P2, K1, K2, d1, d2 = homo.generate_cameras()
    dist_coeffs = np.zeros((4, 1))

    # Unsure if this is good or bad; note that undistortPoints below returns
    # normalized (K-free) coordinates, so the projection matrices are
    # normalized here to match
    P1_new = np.matmul(np.linalg.inv(K1), P1)
    P2_new = np.matmul(np.linalg.inv(K2), P2)

    # Using d1 or d2 instead of zeros doesn't seem to make a difference
    P1_undist = cv2.undistortPoints(P1_points, cameraMatrix=K1, distCoeffs=dist_coeffs)
    P2_undist = cv2.undistortPoints(P2_points, cameraMatrix=K2, distCoeffs=dist_coeffs)

    triangulation = cv2.triangulatePoints(P1_new, P2_new, P1_undist, P2_undist)

    homog_points = triangulation.transpose()
    euclid_points = cv2.convertPointsFromHomogeneous(homog_points)
    return euclid_points

euclid_points is what I use to plot the 3D pose!
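
Since cv2.undistortPoints returns normalized image coordinates, pairing it with the K-normalized matrices above is internally consistent. As a cross-check (a sketch, assuming P1 and P2 are the full pixel-space 3x4 matrices from calibrate_solvePNP), triangulating the raw pixel coordinates against the full matrices should give the same 3D points:

# Cross-check: triangulate directly in pixel space with the full P matrices
pts1 = np.float32(P1_points).reshape(-1, 2).T  # 2xN, as triangulatePoints expects
pts2 = np.float32(P2_points).reshape(-1, 2).T
homog = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4xN homogeneous
euclid = (homog[:3] / homog[3]).T                  # Nx3, world coordinates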

I have tried every plausible set of distortion coefficients and it does not seem to make much difference. I also tried normalizing one camera so that it becomes [I|0] and transforming the other accordingly, without success, though that attempt may itself have been flawed. I have followed all the steps (unless I made some mistake I cannot spot) from: Is cv2.triangulatePoints not very accurate?
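
For reference, the [I|0] normalization has a standard closed form. A sketch, assuming R1, t1 and R2, t2 are the world-to-camera poses from solvePnP (i.e. R = cv2.Rodrigues(rvec)[0] and t = tvec for each view):

# Express camera 2 relative to camera 1 so that camera 1 becomes [I|0]
R_rel = R2 @ R1.T
t_rel = t2 - R_rel @ t1
P1_norm = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2_norm = K2 @ np.hstack([R_rel, t_rel])
# Points triangulated with these matrices live in camera 1's frame; mapping
# back to the pitch's world frame is X_world = R1.T @ (X_cam1 - t1)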

I have basically tried everything I can think of. Intuitively, I feel there must be some way to undo this rotation and translation, since the points clearly resemble the pose I am estimating. In the worst case, just removing the rotation would be enough, because I have an accurate homography for the points on the pitch! Does anyone know what could be causing this? One pragmatic fallback is sketched below.
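
Independent of finding the root cause: if a few triangulated points have known world coordinates (the corners, the penalty spot), the rigid transform between the two point sets can be estimated with the standard Kabsch/Procrustes method and applied to the whole pose. A minimal sketch, assuming Nx3 point arrays:

def rigid_align(src, dst):
    # Kabsch: find R, t minimizing ||R @ src_i + t - dst_i|| over Nx3 sets
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# R, t = rigid_align(triangulated_pitch_points, known_world_points)
# aligned_pose = (R @ euclid_points.reshape(-1, 3).T).T + t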

python opencv computer-vision triangulation pose-estimation