How to get depth projection from Point cloud? #139

@IMasI2Cat

Description

Hi, for my research I need to take the dense point cloud and project it into one of the original views, obtaining a depth map like the originally captured ones but denser (since it includes scans from other frames). However, when I try to do this projection with the delivered data, I cannot get it to work. This is what I do for an example frame:

  • I load 0000003503_0000003724.ply from 2013_05_28_drive_0018_sync (frames from 3503 to 3724)

  • I open the vehicle poses file and load the row for frame 3503 (first column); I reshape (numpy's reshape) the rest of the row into a 3x4 matrix

  • I convert it to a homogeneous 4x4 matrix, invert it with numpy's linalg.inv (the matrix maps GPS/IMU pose to world and I want world to pose), and multiply it by the transpose of the point cloud's points (in homogeneous coordinates)

  • I then convert these points to camera coordinates using the image 02 entry of the calibration's calib_cam_to_pose, again inverting it (it maps camera to pose and I want pose to camera), in the same manner as before, to obtain the points in the image 02 camera coordinate system

  • I apply the intrinsics from image_02.yaml (the gammas as focal lengths, u0 and v0 as the principal point offsets) to obtain u, v, and depth for each point

  • I discard points outside the u,v range as well as those with negative depth. Then for each (u, v) I keep the minimum depth
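For reference, here is a minimal numpy sketch of the steps above (the function name and the synthetic demo data are mine; it assumes a simple pinhole model with the gammas as focal lengths, exactly as described — this is not the official KITTI-360 tooling, just my pipeline written out):

```python
import numpy as np

def project_points_to_depth(points_world, pose, cam_to_pose, K, width, height):
    """Project world-frame points into one camera view, keeping the
    minimum depth per pixel.

    points_world : (N, 3) points from the accumulated .ply
    pose         : (3, 4) GPS/IMU pose row for the frame (pose -> world)
    cam_to_pose  : (4, 4) extrinsics (camera -> GPS/IMU pose)
    K            : (3, 3) intrinsics (focal lengths, principal point)
    """
    # Promote the 3x4 pose row to a homogeneous 4x4 matrix
    T_pose_to_world = np.vstack([pose, [0.0, 0.0, 0.0, 1.0]])
    # Invert both transforms: world -> pose, then pose -> camera
    T_world_to_cam = np.linalg.inv(cam_to_pose) @ np.linalg.inv(T_pose_to_world)

    pts_h = np.hstack([points_world, np.ones((len(points_world), 1))])
    pts_cam = (T_world_to_cam @ pts_h.T).T[:, :3]

    # Discard points behind the camera (negative/zero depth)
    z = pts_cam[:, 2]
    front = z > 0
    pts_cam, z = pts_cam[front], z[front]

    # Pinhole projection: u = fx*x/z + u0, v = fy*y/z + v0
    u = np.round(K[0, 0] * pts_cam[:, 0] / z + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * pts_cam[:, 1] / z + K[1, 2]).astype(int)

    # Discard points outside the image bounds
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z = u[inside], v[inside], z[inside]

    # Z-buffer: write farthest points first so nearer ones overwrite them,
    # i.e. each pixel ends up with its minimum depth
    depth = np.full((height, width), np.inf)
    order = np.argsort(-z)
    depth[v[order], u[order]] = z[order]
    depth[np.isinf(depth)] = 0.0
    return depth

# Tiny synthetic demo (identity pose and extrinsics, made-up intrinsics)
pose = np.hstack([np.eye(3), np.zeros((3, 1))])
cam_to_pose = np.eye(4)
K = np.array([[100.0, 0.0, 50.0], [0.0, 100.0, 50.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 5.0], [0.0, 0.0, 10.0], [1.0, 0.0, 5.0]])
depth = project_points_to_depth(pts, pose, cam_to_pose, K, 100, 100)
```

In the demo, the two points on the optical axis land on the same pixel and the nearer one (depth 5) wins the z-buffer.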

After this process, the result is this:

[image: the resulting depth projection]

There are pixels only on the right of the image. What am I missing?

Thanks in advance!
