As in the last post, let me start with a brief introduction to using the OpenCV example code for binocular (stereo) calibration. I am using version 3.x; 4.x is not much different.

1. Using stereo_calib.cpp

A few parameters need explaining. w is the number of inner corners along the horizontal direction of the checkerboard (where black and white squares meet), h is the number along the vertical direction, and s is the width of each square. That width is easy to control when the board is laid out in Photoshop or Word; in Photoshop it helps to switch the units to mm. To cut down on the calibration images needed and improve accuracy, I 3D-printed a camera holder and a "fixing frame" for a new calibration board, designed with w=13 and h=8. Modify the code around line 349 to set the default parameters:
cv::CommandLineParser parser(argc, argv,
"{w|13|}{h|8|}{s|4.233333333|}{nr||}{help||}{@input|stereo_calib.xml|}");

Then enter the file names of the image pairs in stereo_calib.xml. More images are not necessarily more accurate: many problems with calibration results are caused by a few images that are simply "not good enough", so "enough" images is fine. Also, to check the accuracy of the detected corner positions and the correctness of their order, make a small change around line 370:
StereoCalib(imagelist, boardSize, squareSize, true, true, showRectified);
so that the displayCorners parameter is true, which displays the recognized corner order and positions. If each image is shown too briefly to inspect, modify line 124:
char c = (char)waitKey(1000);
The unit is milliseconds; the default in the code is 500.
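With displayCorners set to true, what the sample does for each calibration image is roughly the following (a simplified sketch; the actual code also rescales the image and handles detection failures):

cv::Mat img = cv::imread(filename, cv::IMREAD_GRAYSCALE);    // one image listed in stereo_calib.xml
std::vector<cv::Point2f> corners;
bool found = cv::findChessboardCorners(img, boardSize, corners);
cv::Mat cimg;
cv::cvtColor(img, cimg, cv::COLOR_GRAY2BGR);
cv::drawChessboardCorners(cimg, boardSize, corners, found);  // draws corners in detection order
cv::imshow("corners", cimg);
char c = (char)cv::waitKey(1000);                            // how long each image stays on screen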

2. Calibration results

You can see that I used only 4 pairs of images. Does it work? It does: the distance measured with a ruler agrees with the distance obtained by mouse selection, and the precision is around 1 mm.

Now it is time to use intrinsics.yml and extrinsics.yml for binocular ranging:

3. The stereo_match.cpp code

Here I am hanging up a sheep's head and selling dog meat: the section title says stereo_match.cpp, but the code below is actually VB.NET:
Dim img1 As Mat = ImRead(My.Application.Info.DirectoryPath & "\image\left00.jpg", ImreadModes.Color)
Dim img2 As Mat = ImRead(My.Application.Info.DirectoryPath & "\image\right00.jpg", ImreadModes.Color)
Dim img_size As Size = img1.Size()
Dim roi1, roi2 As New Rect
Dim Q As New Mat
' Read the intrinsic parameters saved by stereo_calib
Dim fs As FileStorage = New FileStorage("intrinsics.yml", FileStorage.Mode.Read)
Dim M1, D1, M2, D2 As Mat
M1 = fs.Item("M1")
D1 = fs.Item("D1")
M2 = fs.Item("M2")
D2 = fs.Item("D2")
' Read the extrinsic parameters
fs.Open("extrinsics.yml", FileStorage.Mode.Read)
Dim R, T, R1, P1, R2, P2 As New Mat
R = fs.Item("R")
T = fs.Item("T")
' Compute the rectification transforms and remap both images
StereoRectify(M1, D1, M2, D2, img_size, R, T, R1, R2, P1, P2, Q,
              StereoRectificationFlags.ZeroDisparity, -1, img_size, roi1, roi2)
Dim map11, map12, map21, map22 As New Mat
InitUndistortRectifyMap(M1, D1, R1, P1, img_size, CV_16SC2, map11, map12)
InitUndistortRectifyMap(M2, D2, R2, P2, img_size, CV_16SC2, map21, map22)
Dim img1r, img2r As New Mat
Remap(img1, img1r, map11, map12, InterpolationFlags.Linear)
Remap(img2, img2r, map21, map22, InterpolationFlags.Linear)
img1 = img1r
img2 = img2r
' Display the rectified pair
pnlLeft.BackgroundImage = ToBitmap(img1)
pnlRight.BackgroundImage = ToBitmap(img2)
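For anyone following the original C++ sample rather than VB.NET, here is a rough C++ equivalent of the snippet above (a sketch only; the file names follow the ones used here and error handling is omitted):

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat img1 = imread("image/left00.jpg", IMREAD_COLOR);
    Mat img2 = imread("image/right00.jpg", IMREAD_COLOR);
    Size img_size = img1.size();

    // Intrinsic parameters saved by stereo_calib
    FileStorage fs("intrinsics.yml", FileStorage::READ);
    Mat M1, D1, M2, D2;
    fs["M1"] >> M1; fs["D1"] >> D1;
    fs["M2"] >> M2; fs["D2"] >> D2;

    // Extrinsic parameters
    fs.open("extrinsics.yml", FileStorage::READ);
    Mat R, T, R1, P1, R2, P2, Q;
    fs["R"] >> R; fs["T"] >> T;

    // Rectification transforms and remapping of both images
    Rect roi1, roi2;
    stereoRectify(M1, D1, M2, D2, img_size, R, T, R1, R2, P1, P2, Q,
                  CALIB_ZERO_DISPARITY, -1, img_size, &roi1, &roi2);
    Mat map11, map12, map21, map22;
    initUndistortRectifyMap(M1, D1, R1, P1, img_size, CV_16SC2, map11, map12);
    initUndistortRectifyMap(M2, D2, R2, P2, img_size, CV_16SC2, map21, map22);
    Mat img1r, img2r;
    remap(img1, img1r, map11, map12, INTER_LINEAR);
    remap(img2, img2r, map21, map22, INTER_LINEAR);

    imshow("left rectified", img1r);
    imshow("right rectified", img2r);
    waitKey();
    return 0;
}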

Select the upper-right corner of the pink board and you can see that the second-to-last output value is 290 mm. The code first reads the two yml files, then rectifies and displays the images, and then computes the spatial coordinates of the point corresponding to the two clicked pixels with the triangulatePoints function. This yields a points4D value whose third component corresponds to the Z coordinate (after dividing by the fourth, homogeneous component); a minimal sketch of this step appears after the parameter listing below. If what you need is a point cloud and you want to use OpenCV's stereo matching algorithms, refer to the stereo_match.cpp code and obtain it with BM, SGBM and the other algorithms. The parameters are set the same way as before; their meanings are as follows (please correct me where I am wrong):
img1_filename = samples::findFile(parser.get<std::string>(0));  // left input image
img2_filename = samples::findFile(parser.get<std::string>(1));  // right input image
if (parser.has("algorithm"))
{
    std::string _alg = parser.get<std::string>("algorithm");    // matching algorithm to use
    alg = _alg == "bm" ? STEREO_BM :
          _alg == "sgbm" ? STEREO_SGBM :
          _alg == "hh" ? STEREO_HH :
          _alg == "var" ? STEREO_VAR :
          _alg == "sgbm3way" ? STEREO_3WAY : -1;
}
numberOfDisparities = parser.get<int>("max-disparity"); // maximum disparity; must be divisible by 16
SADWindowSize = parser.get<int>("blocksize");           // SAD (matching) window size; must be odd
scale = parser.get<float>("scale");                     // image scale factor
no_display = parser.has("no-display");                  // suppress display of the results
if( parser.has("i") ) intrinsic_filename = parser.get<std::string>("i");    // intrinsic parameter file
if( parser.has("e") ) extrinsic_filename = parser.get<std::string>("e");    // extrinsic parameter file
if( parser.has("o") ) disparity_filename = parser.get<std::string>("o");    // output disparity image
if( parser.has("p") ) point_cloud_filename = parser.get<std::string>("p");  // output point cloud file
The defaults I used are set in the CommandLineParser near the top of main:
cv::CommandLineParser parser(argc, argv,
    "{@arg1|left00.jpg|}{@arg2|right00.jpg|}{help h||}{algorithm|sgbm|}{max-disparity|64|}{blocksize|5|}{no-display||}{scale|1|}{i|intrinsics.yml|}{e|extrinsics.yml|}{o||}{p||}");
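To see where max-disparity and blocksize end up, here is a simplified sketch of how these two values feed the matchers (the actual sample sets quite a few more SGBM parameters):

int numberOfDisparities = 64;   // from max-disparity; must be divisible by 16
int SADWindowSize = 5;          // from blocksize; must be odd

// img1 and img2 are the rectified grayscale images from the step above
cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(numberOfDisparities, SADWindowSize);
cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(0 /*minDisparity*/,
                                                      numberOfDisparities,
                                                      SADWindowSize);
cv::Mat disp;
sgbm->compute(img1, img2, disp);   // 16-bit fixed-point disparity, scaled by 16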
Among these, max-disparity and blocksize need careful tuning. The speed and quality of the different algorithms also vary, so try them for yourself.
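As for the distance measurement mentioned earlier: once P1 and P2 are available from stereoRectify, a minimal sketch of recovering the 3D position of a clicked point with triangulatePoints looks like this (the pixel coordinates below are placeholders for the mouse-selected point in the rectified left image and its match in the right image):

std::vector<cv::Point2f> ptsL = { cv::Point2f(320.f, 240.f) };   // clicked point, left image
std::vector<cv::Point2f> ptsR = { cv::Point2f(300.f, 240.f) };   // same point, right image

cv::Mat points4D;
cv::triangulatePoints(P1, P2, ptsL, ptsR, points4D);   // 4xN homogeneous coordinates
points4D.convertTo(points4D, CV_32F);

// Divide by the homogeneous component; Z is in the same units as squareSize (mm here)
float w = points4D.at<float>(3, 0);
float X = points4D.at<float>(0, 0) / w;
float Y = points4D.at<float>(1, 0) / w;
float Z = points4D.at<float>(2, 0) / w;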

What remains is the feature matching between the left and right images. You can see that differences in exposure, white balance, and reflections caused by the different viewing angles of the two cameras create all sorts of problems... these have to be overcome bit by bit.
