My point clouds are wrong. How do I make the two point cloud sets from two images overlap?


I am using a ZED M camera to capture images and depth, and I convert them into 3D point clouds. I generate the point clouds with my own algorithm using the intrinsic and extrinsic matrices (rather than the built-in function). I am trying to generate two point cloud sets from two images taken by the ZED M's left camera, but the output point clouds do not overlap. If my algorithm (converting a 2D image to a 3D point cloud), my extrinsic matrix, and my intrinsic matrix were all correct, the two point cloud sets should overlap each other. For example, I take one picture, then rotate the camera (about the z-axis) and take another. This is the output point cloud after converting these two 2D images into 3D point clouds: image
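For reference, here is a minimal sketch of the back-projection step described above, assuming a standard pinhole model. The names `fx, fy, cx, cy` and `cam2world` are placeholders, not values from the actual ZED calibration, and with `RIGHT_HANDED_Y_UP` the camera-frame axis signs may differ from this convention, so sign flips could be needed:

```python
import numpy as np

def depth_to_world(depth, fx, fy, cx, cy, cam2world):
    """Back-project a depth map to world-frame 3D points.

    depth:      (H, W) array, depth along the camera z-axis
    fx, fy:     focal lengths in pixels
    cx, cy:     principal point in pixels
    cam2world:  4x4 camera-to-world transform (the camera pose)
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # pinhole model: u = fx * x / z + cx
    y = (v - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
    pts_world = (cam2world @ pts_cam.T).T   # camera frame -> world frame
    return pts_world[:, :3]
```

If both clouds are transformed into the world frame with each image's own `cam2world`, they should overlap wherever the scene is static.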

Is my extrinsic matrix wrong?

The following code snippet shows the basic setup for capturing images and how I obtain the extrinsic matrix.

# Basic Setup
import pyzed.sl as sl
import numpy as np

zed = sl.Camera()

# Create an InitParameters object and set configuration parameters
init_params = sl.InitParameters()
init_params.camera_resolution = sl.RESOLUTION.HD720  # Use HD720 video mode (default fps: 60)
# Use a right-handed Y-up coordinate system
init_params.coordinate_system = sl.COORDINATE_SYSTEM.RIGHT_HANDED_Y_UP
init_params.coordinate_units = sl.UNIT.CENTIMETER  # Set units in centimeters

# Open the camera
err = zed.open(init_params)
if err != sl.ERROR_CODE.SUCCESS:
    exit(1)

# Enable positional tracking with default parameters
py_transform = sl.Transform()  # First create a Transform object for TrackingParameters object
tracking_parameters = sl.PositionalTrackingParameters(_init_pos=py_transform)
err = zed.enable_positional_tracking(tracking_parameters)
if err != sl.ERROR_CODE.SUCCESS:
    exit(1)

# Track the camera position over the next 10 frames
i = 0
zed_pose = sl.Pose()  # the pose containing the position of the camera and other information (timestamp, confidence)

zed_sensors = sl.SensorsData()
runtime_parameters = sl.RuntimeParameters()
runtime_parameters.sensing_mode = sl.SENSING_MODE.FILL
# runtime_parameters.sensing_mode = sl.SENSING_MODE.STANDARD

# images will be saved here
image_l = sl.Mat(zed.get_camera_information().camera_resolution.width,
                 zed.get_camera_information().camera_resolution.height, sl.MAT_TYPE.U8_C4)
image_r = sl.Mat(zed.get_camera_information().camera_resolution.width,
                 zed.get_camera_information().camera_resolution.height, sl.MAT_TYPE.U8_C4)

depth_l = sl.Mat()



while i < 10:
    if zed.grab(runtime_parameters) == sl.ERROR_CODE.SUCCESS:
        # Get the pose of the left eye of the camera with reference to the world frame
        zed.get_position(zed_pose, sl.REFERENCE_FRAME.WORLD) 
        # position of the camera
        zed.get_sensors_data(zed_sensors, sl.TIME_REFERENCE.IMAGE)
        zed_imu = zed_sensors.get_imu_data()

        # A new image is available if grab() returns SUCCESS
        zed.retrieve_image(image_l, sl.VIEW.LEFT)
        zed.retrieve_image(image_r, sl.VIEW.RIGHT)

        image_zed_l = image_l.get_data()
        image_zed_r = image_r.get_data()

        # get depth map
        zed.retrieve_measure(depth_l, sl.MEASURE.DEPTH)

        # do something...

        # The following code builds my extrinsic matrix for the left camera.
        # zed_pose gives the camera-to-world pose [R | t], so the extrinsic
        # (world-to-camera) matrix is its inverse: [R^T | -R^T t].
        R = zed_pose.get_rotation_matrix(sl.Rotation()).r
        t = zed_pose.get_translation(sl.Translation()).get()

        world2cam_left = np.hstack((R.T, np.dot(-R.T, t).reshape(3, -1)))
        world2cam_left = np.vstack((world2cam_left, np.array([0, 0, 0, 1])))
        i += 1
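A common source of non-overlapping clouds is applying the transform in the wrong direction: since `zed_pose` reports the camera-to-world pose, camera-frame points map to the world frame with `[R | t]` directly, while `world2cam` maps the other way. A quick consistency check, using a placeholder pose (not the actual ZED values):

```python
import numpy as np

# Placeholder pose standing in for R, t from zed_pose above
R = np.eye(3)
t = np.array([1.0, 2.0, 3.0])

cam2world = np.vstack((np.hstack((R, t.reshape(3, 1))),
                       np.array([0, 0, 0, 1])))
world2cam = np.vstack((np.hstack((R.T, (-R.T @ t).reshape(3, 1))),
                       np.array([0, 0, 0, 1])))

# The pose composed with its inverse must give the identity
assert np.allclose(world2cam @ cam2world, np.eye(4))

# A point expressed in the camera frame maps to world coordinates with
# cam2world (NOT world2cam) -- mixing these up makes the clouds diverge.
p_cam = np.array([0.0, 0.0, 5.0, 1.0])
p_world = cam2world @ p_cam
```

If the back-projected points are already in the camera frame, multiplying them by `world2cam_left` instead of its inverse would produce exactly the kind of misalignment shown in the image.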

Tags: point-clouds, projection-matrix
1 Answer

Could you also share the input images?
