
I want to perform camera calibration with the OpenCV C++ API, using a set of known world-to-image point matches.

OpenCV has a function called cv::calibrateCamera, as documented here. The documentation clearly states that the function can deduce the intrinsic camera matrix on its own for planar calibration objects, and that it expects the user to specify the matrix for non-planar 3D scenes.

In my point correspondences the world coordinates are not planar, and I do not have a qualified guess for the intrinsic camera matrix.

How would I go about calibrating the camera in this case?

Currently I am using a simple DLT-based approach for the calculation, solving the resulting homogeneous system with cv::SVD::solveZ. But I would like to use the non-linear estimation that OpenCV performs.

  • If you can afford to run your camera tracking offline in a separate program on Windows, take a look at ACTS. I had the same problem you have, but ACTS does a good enough job at camera calibration. I'm sorry I can't help you with an OpenCV implementation. Commented Feb 16, 2013 at 10:27

2 Answers


This page explains how to perform camera auto-calibration. It includes a method based on the Kruppa equations, which appears to be solvable with the non-linear techniques you are after.



I was in the same situation: I have a non-planar 3D target, but I wanted to use OpenCV's non-linear LM (Levenberg-Marquardt) optimization for the calibration process. (Zhang's initialization method used by OpenCV only allows for planar calibration targets.)

What you can do is extract the camera matrix from your own DLT result and use it as an initial guess for calibrateCamera. It is sufficient to do this for a single pair (image points - object points) only. Even though the other pairs might produce different camera matrices, they will hopefully be similar, and you need that matrix only for initialization anyway.

Note that I assume your DLT gives you a projection matrix P which maps homogeneous world points X to homogeneous image points x via x = P * X.

This would be the way to go. It is in Python, but you should be able to adapt it to your own needs:

import cv2

P = YOUR_DLT(imagePoints[0], objectPoints[0])

cameraMatrix, _, _, _, _, _, _ = cv2.decomposeProjectionMatrix(P)
cameraMatrix /= cameraMatrix[2,2]            # ensure unit element [2,2]
cameraMatrix[0,1] = 0                        # ensure zero skew
cameraMatrix[0,0] = abs(cameraMatrix[0,0])   # ensure positive focal lengths
cameraMatrix[1,1] = abs(cameraMatrix[1,1])

# ensure the principal point lies within the image:
resX, resY = imageSize                       # image width and height in pixels
cameraMatrix[0,2] = min(resX-1, max(0, cameraMatrix[0,2]))
cameraMatrix[1,2] = min(resY-1, max(0, cameraMatrix[1,2]))

# the CALIB_USE_INTRINSIC_GUESS flag is required; without it calibrateCamera
# ignores the supplied cameraMatrix (and fails for non-planar targets)
retval, cameraMatrix, distCoeffs, rvecs, tvecs = cv2.calibrateCamera(
      objectPoints, imagePoints, imageSize, cameraMatrix, None,
      flags=cv2.CALIB_USE_INTRINSIC_GUESS)

Note: since calibrateCamera assumes cameraMatrix[2,2] == 1 and is constrained to positive focal lengths and zero skew, the camera matrix obtained from the DLT likely needs to be corrected, as shown in the code above. Also make sure to pass the CALIB_USE_INTRINSIC_GUESS flag; otherwise calibrateCamera ignores your initial camera matrix.
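In case it helps, here is a minimal sketch of what YOUR_DLT could look like: a plain, unnormalized 12-parameter DLT. It assumes at least six non-coplanar correspondences given as simple (u, v) and (X, Y, Z) tuples (adapt the indexing if your points are stored as OpenCV Nx1x2 arrays), and it omits the point normalization you would want for real data:

import numpy as np

def dlt_projection_matrix(image_points, object_points):
    # Standard DLT: each correspondence (u, v) <-> (X, Y, Z) contributes two
    # rows to a 2N x 12 system A * p = 0, where p is the stacked 3x4 projection matrix.
    A = []
    for (u, v), (X, Y, Z) in zip(image_points, object_points):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    A = np.asarray(A, dtype=np.float64)

    # The right singular vector belonging to the smallest singular value is the
    # least-squares null-space solution -- the same problem cv::SVD::solveZ solves in C++.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)

The returned P is only defined up to scale and sign, which is fine here: the corrections above (dividing by cameraMatrix[2,2] and taking the absolute value of the focal lengths) take care of exactly that.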
