How do I find the world position of a feature using OpenCV stereo camera triangulation?


I have two cameras and a coordinate system defined by stickers placed on the floor. I am trying to use an OpenCV stereo camera setup to calibrate each camera and then find the world position, in millimeters, of a feature that appears in both camera images. The coordinate system markers and the feature, as seen by both cameras, are:

  • Yellow circle sticker: at the world origin
  • Green circle sticker: at world (500mm, 0, 0)
  • Red circle sticker: at world (0, 500mm, 0)
  • Very small yellow sticker: at world (151mm, 194mm, 0)

There is also a very small red sticker, and that is the feature whose world position in millimeters I am trying to find. This very small red sticker is at pixel location (1175, 440) in camera 1's image and at pixel location (1401, 298) in camera 2's image.

My basic approach is to call cv::calibrateCamera for each camera, then cv::stereoCalibrate, then cv::stereoRectify, and finally cv::triangulatePoints to get the feature's world position in millimeters. I do get an answer, but it seems to be nonsense. The code I am using is below.

// The four real world calibration points, expressed in millimeters
// The yellow circle is the origin
// The green circle is on the x-axis at 500mm away from the origin
// The red circle is on the y-axis at 500mm away from the origin
// The fourth calibration point is a small yellow sticker at (151mm, 194mm, 0mm)
// The z-axis is up, perpendicular to the floor
std::vector<cv::Point3f> _ObjectPointsWorldMM;
_ObjectPointsWorldMM.push_back(cv::Point3f(0.0f, 0.0f, 0.0f));
_ObjectPointsWorldMM.push_back(cv::Point3f(500.0f, 0.0f, 0.0f));
_ObjectPointsWorldMM.push_back(cv::Point3f(0.0f, 500.0f, 0.0f));
_ObjectPointsWorldMM.push_back(cv::Point3f(151.0f, 194.0f, 0.0f));
std::vector<std::vector<cv::Point3f>> ObjectPointsWorldMM;
ObjectPointsWorldMM.push_back(_ObjectPointsWorldMM);

//
// Camera 1 calibrateCamera()
//

// Get the camera 1 image
cv::Mat mCamera1Image = cv::imread(std::string("Camera1.jpg"), cv::IMREAD_COLOR);

// The pixel locations in the camera 1 image that correspond to the four calibration points described above
std::vector<cv::Point2f> _Camera1ImagePointsPx;
_Camera1ImagePointsPx.push_back(cv::Point2f(791.0f, 220.0f)); // Corresponds to yellow origin sticker
_Camera1ImagePointsPx.push_back(cv::Point2f(864.0f, 643.0f)); // Corresponds to green x=500mm sticker
_Camera1ImagePointsPx.push_back(cv::Point2f(1277.0f, 113.0f)); // Corresponds to red y=500mm sticker
_Camera1ImagePointsPx.push_back(cv::Point2f(1010.0f, 287.0f)); // Corresponds to small yellow sticker at fourth calibration point (see above)
std::vector<std::vector<cv::Point2f>> Camera1ImagePointsPx;
Camera1ImagePointsPx.push_back(_Camera1ImagePointsPx);

// Calibrate camera 1
cv::Mat mCamera1IntrinsicMatrix;
cv::Mat mCamera1DistortionCoefficients;
std::vector<cv::Mat> Camera1RotationVecs;
std::vector<cv::Mat> Camera1TranslationVecs;
const double dCamera1RMSReProjectionError = cv::calibrateCamera(
    ObjectPointsWorldMM, // In: The world millimeter locations of the four calibration points
    Camera1ImagePointsPx, // In: The pixel locations in the camera 1 image that correspond to the four calibration points
    mCamera1Image.size(), // In: The size of the camera 1 calibration image
    mCamera1IntrinsicMatrix, mCamera1DistortionCoefficients, // Out: The camera intrinsic matrix and distortion coefficients
    Camera1RotationVecs, Camera1TranslationVecs // Out: The camera rotation and translation vectors
    );

//
// Camera 2 calibrateCamera()
//

// Get the camera 2 image
cv::Mat mCamera2Image = cv::imread(std::string("Camera2.jpg"), cv::IMREAD_COLOR);

// Check assumptions
assert((mCamera1Image.size() == mCamera2Image.size()));

// The pixel locations in the camera 2 image that correspond to the four calibration points described above
std::vector<cv::Point2f> _Camera2ImagePointsPx;
_Camera2ImagePointsPx.push_back(cv::Point2f(899.0f, 439.0f)); // Corresponds to yellow origin sticker
_Camera2ImagePointsPx.push_back(cv::Point2f(1472.0f, 608.0f)); // Corresponds to green x=500mm sticker
_Camera2ImagePointsPx.push_back(cv::Point2f(1101.0f, 74.0f)); // Corresponds to red y=500mm sticker
_Camera2ImagePointsPx.push_back(cv::Point2f(1136.0f, 322.0f)); // Corresponds to small yellow sticker at fourth calibration point (see above)
std::vector<std::vector<cv::Point2f>> Camera2ImagePointsPx;
Camera2ImagePointsPx.push_back(_Camera2ImagePointsPx);

// Calibrate camera 2
cv::Mat mCamera2IntrinsicMatrix;
cv::Mat mCamera2DistortionCoefficients;
std::vector<cv::Mat> Camera2RotationVecs;
std::vector<cv::Mat> Camera2TranslationVecs;
const double dCamera2RMSReProjectionError = cv::calibrateCamera(
    ObjectPointsWorldMM, // In: The world millimeter locations of the four calibration points
    Camera2ImagePointsPx, // In: The pixel locations in the camera 2 image that correspond to the four calibration points
    mCamera2Image.size(), // In: The size of the camera 2 calibration image
    mCamera2IntrinsicMatrix, mCamera2DistortionCoefficients, // Out: The camera intrinsic matrix and distortion coefficients
    Camera2RotationVecs, Camera2TranslationVecs // Out: The camera rotation and translation vectors
    );

//
// stereoCalibrate()
//

// Calibrate the stereo camera set up
cv::Mat InterCameraRotationMatrix, InterCameraTranslationMatrix;
cv::Mat InterCameraEssentialMatrix, InterCameraFundamentalMatrix;
const double dStereoCalReProjectionError = cv::stereoCalibrate(
    ObjectPointsWorldMM, // In: The world millimeter locations of the four calibration points
    Camera1ImagePointsPx, // In: The pixel locations in the camera 1 image that correspond to the four calibration points
    Camera2ImagePointsPx, // In: The pixel locations in the camera 2 image that correspond to the four calibration points
    mCamera1IntrinsicMatrix, mCamera1DistortionCoefficients, // In: Camera 1's intrinsic matrix and distortion coefficients
    mCamera2IntrinsicMatrix, mCamera2DistortionCoefficients, // In: Camera 2's intrinsic matrix and distortion coefficients
    mCamera1Image.size(), // In: The size of each image
    InterCameraRotationMatrix, InterCameraTranslationMatrix, // Out: The inter-camera rotation and translation matrices
    InterCameraEssentialMatrix, InterCameraFundamentalMatrix // Out: The inter-camera essential and fundamental matrices
    );

//
// stereoRectify()
//

// Compute rectification transforms for each head of the calibrated stereo camera
cv::Mat Camera1RectificationTransform, Camera2RectificationTransform;
cv::Mat Camera1ProjectionMatrix, Camera2ProjectionMatrix;
cv::Mat DisparityToDepthMappingMatrix;
cv::stereoRectify(
    mCamera1IntrinsicMatrix, mCamera1DistortionCoefficients, // In: Camera 1's intrinsic matrix and distortion coefficients
    mCamera2IntrinsicMatrix, mCamera2DistortionCoefficients, // In: Camera 2's intrinsic matrix and distortion coefficients
    mCamera1Image.size(), // In: The size of each image
    InterCameraRotationMatrix, InterCameraTranslationMatrix, // In: The inter-camera rotation and translation matrices
    Camera1RectificationTransform, Camera2RectificationTransform, // Out: The 3x3 rectification transforms for each camera
    Camera1ProjectionMatrix, Camera2ProjectionMatrix, // Out: The 3x4 projection matrices for each camera
    DisparityToDepthMappingMatrix // Out: The 4x4 disparity-to-depth mapping matrix
    );

//
// triangulatePoints()
//

// The pixel location of each feature to find in the camera 1 image
std::vector<cv::Point2f> FeaturePointsCamera1;
FeaturePointsCamera1.push_back(cv::Point2f(1175.0f, 440.0f)); // The pixel location of the small red sticker

// The pixel location of each feature to find in the camera 2 image
std::vector<cv::Point2f> FeaturePointsCamera2;
FeaturePointsCamera2.push_back(cv::Point2f(1401.0f, 298.0f)); // The pixel location of the small red sticker

// Perform triangulation to find the feature(s)
cv::Mat FeatureLocationHomog;
cv::triangulatePoints(
    Camera1ProjectionMatrix, Camera2ProjectionMatrix, // In: The 3x4 projection matrices for each camera
    FeaturePointsCamera1, FeaturePointsCamera2, // In: The feature pixel points for each camera
    FeatureLocationHomog // Out: The reconstructed feature locations in homogeneous coordinates
    );

// Check assumptions
assert((FeatureLocationHomog.cols == static_cast<int>(FeaturePointsCamera1.size())));

The homogeneous coordinates I get back are:

[-0.60382962,-0.0076272688,0.79707688,0.00036873418]

My understanding of homogeneous coordinates is that converting [a, b, c, d] to inhomogeneous coordinates gives [a/d, b/d, c/d] (the same division is sketched in code below), so my vector would be:

[-1637.574,-20.685,2161.657]
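
For reference, OpenCV can perform that perspective division for you. Here is a minimal sketch, assuming FeatureLocationHomog is the 4x1 CV_32F matrix returned by the cv::triangulatePoints call above; cv::convertPointsFromHomogeneous divides each point [a, b, c, d] by d:

// Transpose to the Nx4 layout expected by cv::convertPointsFromHomogeneous,
// then read back the single resulting 3D point (the output is CV_32FC3 for float input)
cv::Mat FeatureLocationsEuclidean;
cv::convertPointsFromHomogeneous(FeatureLocationHomog.t(), FeatureLocationsEuclidean);
const cv::Point3f FeatureWorldMM = FeatureLocationsEuclidean.at<cv::Point3f>(0);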

That result is obviously incorrect. Can anyone point me in the right direction?

c++ opencv computer-vision camera-calibration triangulation
1 Answer

I just figured out what was wrong. Instead of taking each camera's projection matrix from stereoRectify, I should have done this:

// Convert each camera's rotation vector from cv::calibrateCamera into a 3x3 rotation matrix
cv::Mat Camera1RotationMatrix, Camera2RotationMatrix;
cv::Rodrigues(Camera1RotationVecs[0], Camera1RotationMatrix);
cv::Rodrigues(Camera2RotationVecs[0], Camera2RotationMatrix);

// Build each camera's 3x4 extrinsic matrix [R|t]
cv::Mat Camera1RTMat, Camera2RTMat;
cv::hconcat(Camera1RotationMatrix, Camera1TranslationVecs[0], Camera1RTMat);
cv::hconcat(Camera2RotationMatrix, Camera2TranslationVecs[0], Camera2RTMat);

// Each camera's 3x4 projection matrix is its intrinsic matrix times its extrinsic matrix
const cv::Mat Camera1ProjectionMatrix = (mCamera1IntrinsicMatrix * Camera1RTMat);
const cv::Mat Camera2ProjectionMatrix = (mCamera2IntrinsicMatrix * Camera2RTMat);

Then I can use each of these projection matrices in the call to cv::triangulatePoints, just as in my original post. Hopefully this helps someone else.
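
For completeness, here is a minimal sketch of how the final triangulation then looks (variable names follow my original post; the perspective division at the end assumes the 4xN output is CV_32F, which is what cv::triangulatePoints produces for float image points):

// Triangulate the small red sticker using the projection matrices built above
cv::Mat FeatureLocationHomog;
cv::triangulatePoints(
    Camera1ProjectionMatrix, Camera2ProjectionMatrix, // In: The 3x4 projection matrices built from calibrateCamera's extrinsics
    FeaturePointsCamera1, FeaturePointsCamera2, // In: The feature pixel points for each camera
    FeatureLocationHomog // Out: The reconstructed feature location in homogeneous coordinates
    );

// Divide through by the homogeneous coordinate to get the world position in millimeters
const float w = FeatureLocationHomog.at<float>(3, 0);
const cv::Point3f FeatureWorldMM(
    FeatureLocationHomog.at<float>(0, 0) / w,
    FeatureLocationHomog.at<float>(1, 0) / w,
    FeatureLocationHomog.at<float>(2, 0) / w);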
