
3D Reconstruction of Planar Markers using OpenCV

user3614 · May 25, 2018, 2:18 pm

I am trying to perform 3D reconstruction (Structure from Motion) from multiple images of planar markers. I am very new to MVG and OpenCV.

As far as I have understood, I have to do the following steps:

  1. Identify the corresponding 2D corner points in the first image.
  2. Calculate the camera pose of the first image using cv::solvePnP (assuming the origin to be the center of the marker).
  3. Repeat steps 1 and 2 for the second image.
  4. Estimate the relative motion of the camera by Rot_relative = R2 - R1, Trans_relative = T2 - T1.
  5. Now, taking the first camera as the origin, construct the 3x4 projection matrices for both views: P1 = CameraMatrix * [I|0] (the camera matrix is known from calibration) and P2 = CameraMatrix * [Rot_relative | Trans_relative].
  6. Use the constructed projection matrices and the 2D corner points to triangulate the 3D coordinates using cv::triangulatePoints(P1, P2, point1, point2, OutMat).
  7. The 3D coordinates can be found by dividing the first three rows of OutMat by the 4th row.
  8. I was hoping to keep my "first view" as the origin and iterate through n views, repeating steps 1-7 (I suppose this is called global SfM).
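The triangulation part of the pipeline (steps 5-7) can be sketched numerically. This is a minimal numpy illustration with made-up intrinsics and an assumed-known relative pose; the small DLT routine mimics what cv::triangulatePoints does internally and is not the OpenCV API itself:

```python
import numpy as np

# Illustrative intrinsics (not from a real calibration)
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])

# Ground-truth marker corners (marker plane z = 0, origin at its center),
# pushed 2 m in front of the cameras; homogeneous 4xN
X_true = np.array([[-0.1, -0.1, 0, 1], [0.1, -0.1, 0, 1],
                   [0.1, 0.1, 0, 1], [-0.1, 0.1, 0, 1]]).T
X_true[2, :] += 2.0

# Step 5: P1 = K [I | 0]; P2 = K [R | t] with an assumed-known relative pose
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
R2, t2 = np.eye(3), np.array([[0.3], [0.0], [0.0]])
P2 = K @ np.hstack([R2, t2])

def project(P, X):
    x = P @ X
    return x[:2] / x[2]          # pixel coordinates

x1, x2 = project(P1, X_true), project(P2, X_true)

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation, the same principle cv::triangulatePoints uses."""
    X = []
    for (u1, v1), (u2, v2) in zip(x1.T, x2.T):
        A = np.array([u1 * P1[2] - P1[0], v1 * P1[2] - P1[1],
                      u2 * P2[2] - P2[0], v2 * P2[2] - P2[1]])
        _, _, Vt = np.linalg.svd(A)
        X.append(Vt[-1])         # null vector of A
    return np.array(X).T         # 4xN homogeneous (like OutMat)

X_h = triangulate(P1, P2, x1, x2)
X_est = X_h[:3] / X_h[3]         # step 7: divide by the 4th row
```

With noise-free synthetic projections, X_est recovers the ground-truth corners, which makes this a useful harness for isolating which step of a real pipeline breaks.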

I was hoping to get (n-1) sets of 3D corner points, all with the first view as origin, which could then be optimized using bundle adjustment.

But the result I get is very disappointing: the calculated 3D points are displaced by a huge factor.

These are my questions:

1. Is there something wrong with the steps I followed?

2. Should I instead use cv::findHomography() and cv::decomposeHomographyMat() to find the relative motion of the camera?

3. Should point1 and point2 in cv::triangulatePoints(P1, P2, point1, point2, OutMat) be normalized and undistorted? If yes, how should OutMat be interpreted?

Can anyone with insight into this topic point out my mistake?

P.S. I came to the above understanding after reading "Multiple View Geometry in Computer Vision".

Please find the code snippet below:

cv::Mat Reconstruction::Triangulate(std::vector<cv::Point2f> ImagePointsFirstView, std::vector<cv::Point2f> ImagePointsSecondView)
{
    cv::Mat rVectFirstView, tVecFirstView;
    cv::Mat rVectSecondView, tVecSecondView;
    cv::Mat RotMatFirstView = cv::Mat(3, 3, CV_64F);
    cv::Mat RotMatSecondView = cv::Mat(3, 3, CV_64F);

    cv::solvePnP(RealWorldPoints, ImagePointsFirstView, cameraMatrix, distortionMatrix, rVectFirstView, tVecFirstView);
    cv::solvePnP(RealWorldPoints, ImagePointsSecondView, cameraMatrix, distortionMatrix, rVectSecondView, tVecSecondView);

    cv::Rodrigues(rVectFirstView, RotMatFirstView);
    cv::Rodrigues(rVectSecondView, RotMatSecondView);

    // Step 4 from the list above: relative motion by element-wise subtraction
    cv::Mat RelativeRot = RotMatFirstView - RotMatSecondView;
    cv::Mat RelativeTrans = tVecFirstView - tVecSecondView;

    cv::Mat RelativePose; 

    cv::hconcat(RelativeRot, RelativeTrans, RelativePose);

    cv::Mat ProjectionMatrix_0 = cameraMatrix*cv::Mat::eye(3, 4, CV_64F);

    cv::Mat ProjectionMatrix_1 = cameraMatrix* RelativePose;
    cv::Mat X;
    // undistortPoints(src, dst, K, dist, R = noArray(), P = noArray()):
    // the camera matrix must go in as P (6th argument) to get pixel
    // coordinates back; the 5th argument is a rectification rotation R
    cv::undistortPoints(ImagePointsFirstView, ImagePointsFirstView, cameraMatrix, distortionMatrix, cv::noArray(), cameraMatrix);
    cv::undistortPoints(ImagePointsSecondView, ImagePointsSecondView, cameraMatrix, distortionMatrix, cv::noArray(), cameraMatrix);

    cv::triangulatePoints(ProjectionMatrix_0, ProjectionMatrix_1, ImagePointsFirstView, ImagePointsSecondView, X);
    X.row(0) = X.row(0) / X.row(3);
    X.row(1) = X.row(1) / X.row(3);
    X.row(2) = X.row(2) / X.row(3);

    return X;
}
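When debugging the RelativeRot line, one quick sanity check is whether the matrix produced by subtracting two rotation matrices is itself still a valid rotation (orthonormal, determinant +1). This is a hedged numpy sketch with arbitrary test angles, comparing the subtraction against the standard relative-rotation composition R2 * R1^T:

```python
import numpy as np
from math import cos, sin

def rot_z(theta):
    """Rotation about the z-axis by angle theta (radians)."""
    return np.array([[cos(theta), -sin(theta), 0],
                     [sin(theta),  cos(theta), 0],
                     [0, 0, 1]])

# Two arbitrary camera orientations
R1, R2 = rot_z(0.2), rot_z(0.7)

R_sub = R2 - R1       # element-wise subtraction, as in the code above
R_comp = R2 @ R1.T    # standard composition of the relative rotation

def is_rotation(R):
    """True if R is orthonormal with determinant +1."""
    return np.allclose(R @ R.T, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)

print(is_rotation(R_sub))    # False: the difference of two rotations is not a rotation
print(is_rotation(R_comp))   # True
```

A check like this makes it easy to see which intermediate matrices in a pose pipeline still satisfy the rotation-group properties and which do not.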
