cv::stereoCalibrate source code analysis -- the CvLevMarq solver

Posted by rlhms09 on Thu, 11 Nov 2021 05:50:11 +0100

For work reasons, I need to calibrate cameras at a large scale. There is an obvious problem with the results of cv::omnidir::stereoCalibrate, and there is no relevant analysis on the Internet -- only guides on how to call the function -- so I had to read the source code of stereoCalibrate myself.

OpenCV version: 4.1.1

The overall flow of stereoCalibrate is not difficult. It can be summarized as follows (a sketch of a typical call comes after the list):

1. Validate the input values

2. Determine the variables to optimize and compute an initial value for each of them

3. Refine everything with the LM solver
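
For orientation only, a typical call looks roughly like the sketch below. All variable names are mine, the corner data is assumed to be already detected (e.g. with findChessboardCorners), and with empty inputs this would of course fail at runtime:

#include <opencv2/calib3d.hpp>
#include <vector>

int main()
{
    // Assumed inputs: corners detected per view in both cameras.
    std::vector<std::vector<cv::Point3f>> objectPoints;           // board corners in the board frame
    std::vector<std::vector<cv::Point2f>> imagePoints1, imagePoints2;
    cv::Mat K1, D1, K2, D2;                                       // per-camera intrinsics (initial guesses)
    cv::Mat R, T, E, F;                                           // outputs: relative pose, E and F matrices
    cv::Size imageSize(1920, 1080);

    double rms = cv::stereoCalibrate(
        objectPoints, imagePoints1, imagePoints2,
        K1, D1, K2, D2, imageSize, R, T, E, F,
        cv::CALIB_USE_INTRINSIC_GUESS,
        cv::TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 100, 1e-5));
    return 0;
}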

Let us first read the CvLevMarq solver.

Header file: calib3d/calib3d_c.h

Source file: calib3d/src/compat_ptsetreg.cpp
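
For reference, the class declaration in calib3d_c.h looks roughly like this (abridged from memory -- check your own tree for the exact defaults):

class CV_EXPORTS CvLevMarq
{
public:
    CvLevMarq();
    CvLevMarq( int nparams, int nerrs,
               CvTermCriteria criteria = cvTermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, DBL_EPSILON),
               bool completeSymmFlag = false );
    ~CvLevMarq();
    void init( int nparams, int nerrs,
               CvTermCriteria criteria = cvTermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, DBL_EPSILON),
               bool completeSymmFlag = false );
    bool update( const CvMat*& param, CvMat*& J, CvMat*& err );
    bool updateAlt( const CvMat*& param, CvMat*& JtJ, CvMat*& JtErr, double*& errNorm );

    void clear();
    void step();
    enum { DONE = 0, STARTED = 1, CALC_J = 2, CHECK_ERR = 3 };

    cv::Ptr<CvMat> mask;        // which parameters are optimized
    cv::Ptr<CvMat> prevParam;   // parameters before the current step
    cv::Ptr<CvMat> param;       // current parameters
    cv::Ptr<CvMat> J;           // Jacobian (update() interface only)
    cv::Ptr<CvMat> err;         // error vector (update() interface only)
    cv::Ptr<CvMat> JtJ;         // J^T J, accumulated by the caller
    cv::Ptr<CvMat> JtErr;       // J^T err, accumulated by the caller
    cv::Ptr<CvMat> JtJN;        // reduced (masked) J^T J
    cv::Ptr<CvMat> JtJV;        // reduced J^T err
    cv::Ptr<CvMat> JtJW;        // solution dx of the damped normal equations
    double prevErrNorm, errNorm;
    int lambdaLg10;             // log10 of the damping factor
    CvTermCriteria criteria;
    int state;
    int iters;
    bool completeSymmFlag;
    int solveMethod;
};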

How the solver runs:

1. The constructor CvLevMarq() assigns some default values; the default solve method is SVD (cv::DECOMP_SVD)

2. init() clears the variables, allocates and binds memory for the prevParam, param, JtJ, and JtErr smart pointers, and sets state = STARTED;

3. CvLevMarq::updateAlt( const CvMat*& _param, CvMat*& _JtJ, CvMat*& _JtErr, double*& _errNorm ) is called for the first time

Because the state is STARTED, the input pointers get bound to the corresponding member pointers of the CvLevMarq class. Meanwhile, cvZero assigns all those variables an initial value of 0, and the state changes to CALC_J

4. updateAlt is called a second time, with the state now CALC_J. Because the pointers were bound in the previous step, the member variable cv::Ptr<CvMat> JtJ and the formal parameter CvMat*& _JtJ refer to the same piece of memory (likewise for the others)

First, the member variable param is deep-copied into prevParam ( cvCopy(param, prevParam) ), and then the step() function is executed

5. A single step() first extracts the variables to be optimized, then solves the damped normal equations (JtJ + λ·diag(JtJ))·Δx = JtErr via SVD to obtain Δx, and finally updates x = x_prev − Δx

6. After that I got confused, so I annotated the source code directly:

//iteration
bool CvLevMarq::updateAlt( const CvMat*& _param, CvMat*& _JtJ, CvMat*& _JtErr, double*& _errNorm )
{
    CV_Assert( !err );
    if( state == DONE )
    {
        _param = param;
        return false;
    }

    if( state == STARTED )
    {
        _param = param;
        cvZero( JtJ );
        cvZero( JtErr );
        errNorm = 0;
        _JtJ = JtJ;
        _JtErr = JtErr;
        //the freshly bound memory starts out zeroed
        _errNorm = &errNorm;
        state = CALC_J;
        return true;
    }

    if( state == CALC_J )
    {
        cvCopy( param, prevParam );
        step();
        _param = param;
        //errNorm and _errNorm share memory, so errNorm holds the error the caller just computed; save it as prevErrNorm before this step's optimization
        prevErrNorm = errNorm;
        //then reset it to zero
        errNorm = 0;
        //Rebind pointer
        _errNorm = &errNorm;
        //Change status
        state = CHECK_ERR;
        return true;
    }

    assert( state == CHECK_ERR );
    //I don't fully understand these steps. errNorm should be the current reprojection error, and prevErrNorm the previous one.
    //If the current error is worse than the error before the last step????? Does that mean the step did not help???? Shouldn't this use the ratio of the actual cost decrease to the decrease predicted by the approximate model??
    if( errNorm > prevErrNorm )
    {
        //Increase lambdaLg10 (the damping term), capped at a maximum of 16; a larger lambda biases the step toward gradient descent??
        if( ++lambdaLg10 <= 16 )
        {
            //redo the single-step optimization from prevParam with the larger damping
            step();
            _param = param;
            errNorm = 0;
            _errNorm = &errNorm;
            state = CHECK_ERR;
            return true;
        }
    }
    //Decrease lambdaLg10, floored at a minimum of -16; a smaller lambda biases the step toward Gauss-Newton??
    lambdaLg10 = MAX(lambdaLg10-1, -16);
    //Check whether the iteration count and the relative parameter change meet the termination criteria; if so, return false
    if( ++iters >= criteria.max_iter ||
        cvNorm(param, prevParam, CV_RELATIVE_L2) < criteria.epsilon )
    {
        _param = param;
        _JtJ = JtJ;
        _JtErr = JtErr;
        state = DONE;
        return false;
    }
    //Clear JtJ and JtErr and switch back to CALC_J. I'm not sure I understand this correctly: while the cost keeps dropping, the first call takes a step, the second call only compares errors, and the third call steps again. Doesn't some work get thrown away???
    prevErrNorm = errNorm;
    cvZero( JtJ );
    cvZero( JtErr );
    _param = param;
    _JtJ = JtJ;
    _JtErr = JtErr;
    state = CALC_J;
    return true;
}
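
Pausing here, my reading of the state machine in one place (each transition is one updateAlt call; between calls the caller fills in whatever was bound):

STARTED: bind the pointers and zero JtJ/JtErr; the caller accumulates J^T J, J^T err and the error norm -> CALC_J

CALC_J: save param into prevParam, run step() once; the caller evaluates only the new error norm -> CHECK_ERR

CHECK_ERR with the error increased: raise lambda and rerun step() from prevParam; the caller evaluates the error again -> CHECK_ERR

CHECK_ERR with the error decreased (or lambda capped at 16): lower lambda and test termination; if not done, zero JtJ/JtErr and ask the caller for a fresh Jacobian at the accepted param -> CALC_J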


void CvLevMarq::step()
{
    using namespace cv;
    const double LOG10 = log(10.);
    // lambda = 10^lambdaLg10
    double lambda = exp(lambdaLg10*LOG10);
    // total number of parameters (optimized or not)
    int nparams = param->rows;
    //Convert to cv::Mat; the mask marks which variables are ignored (held fixed), and it was assigned outside this function
    Mat _JtJ = cvarrToMat(JtJ);
    Mat _mask = cvarrToMat(mask);
    // number of variables actually optimized (some intrinsic parameters are not optimized)
    int nparams_nz = countNonZero(_mask);
    // JtJN is empty the first time, so the condition holds (in normal operation it is only met on the first call)
    // JtJN, JtJV, JtJW are (re)allocated here
    if(!JtJN || JtJN->rows != nparams_nz) {
        // prevent re-allocation in every step
        JtJN.reset(cvCreateMat( nparams_nz, nparams_nz, CV_64F ));
        JtJV.reset(cvCreateMat( nparams_nz, 1, CV_64F ));
        JtJW.reset(cvCreateMat( nparams_nz, 1, CV_64F ));
    }
    //Type conversion; memory is shared
    Mat _JtJN = cvarrToMat(JtJN);
    Mat _JtErr = cvarrToMat(JtJV);
    Mat_<double> nonzero_param = cvarrToMat(JtJW);
    //JtErr is the member variable that shares memory with updateAlt's _JtErr. subMatrix extracts a submatrix, which is easy to understand: it effectively crosses out the masked-out variables
    subMatrix(cvarrToMat(JtErr), _JtErr, std::vector<uchar>(1, 1), _mask);
    //_JtJN likewise. After these two calls, JtJN is the reduced JtJ and JtJV is the reduced JtErr
    subMatrix(_JtJ, _JtJN, _mask, _mask);
    //This step makes _JtJN symmetric, because the caller of updateAlt only fills in the upper triangle. As for the err check: I guess err distinguishes the update()/updateAlt() interfaces; I didn't see it change here
    if( !err )
        completeSymm( _JtJN, completeSymmFlag );
    //Damp the diagonal: this turns JtJ into JtJ + λ·diag(JtJ) (Marquardt's variant)
    _JtJN.diag() *= 1. + lambda;
    //Solve the damped system (JtJ + λ·diag(JtJ))·Δx = JtErr (SVD by default) and store Δx in nonzero_param /* note: nonzero_param and JtJW share memory */
    solve(_JtJN, _JtErr, nonzero_param, solveMethod);
    int j = 0;
    //Write back: for every optimized variable, param = prevParam − Δx; masked-out variables are copied unchanged
    for( int i = 0; i < nparams; i++ )
        param->data.db[i] = prevParam->data.db[i] - (mask->data.ptr[i] ? nonzero_param(j++) : 0);
}
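
To see the state machine from the caller's side, here is a schematic of how calibration code typically drives updateAlt (modeled on cvCalibrateCamera2 in the same compatibility layer; computeJacobianAndError is a hypothetical placeholder for the per-view accumulation loop):

#include <opencv2/calib3d/calib3d_c.h>  // CvLevMarq lives in the compatibility header

// Hypothetical helper: accumulates into whatever pointers are non-NULL.
void computeJacobianAndError( const CvMat* param, CvMat* JtJ, CvMat* JtErr, double* errNorm );

void runSolver( int nparams, CvTermCriteria criteria, const CvMat* param0 )
{
    CvLevMarq solver( nparams, 0, criteria );   // nerrs = 0 selects the updateAlt interface
    cvCopy( param0, solver.param );             // seed with the initial guess

    for(;;)
    {
        const CvMat* _param = 0;
        CvMat* _JtJ = 0; CvMat* _JtErr = 0;
        double* _errNorm = 0;

        bool proceed = solver.updateAlt( _param, _JtJ, _JtErr, _errNorm );
        if( !proceed )
            break;                              // state == DONE; _param holds the result

        // Walk over all views/points at the current _param:
        //  - if _JtJ/_JtErr are non-NULL, a fresh Jacobian was requested:
        //    accumulate the upper triangle of J^T J and the vector J^T err;
        //  - if _errNorm is non-NULL, accumulate the squared reprojection error.
        computeJacobianAndError( _param, _JtJ, _JtErr, _errNorm );
    }
}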

Its single-step iteration is the traditional LM method; no doubts there.

However, I still have questions about when a single step is accepted and about the conditions under which the damping term changes.

Considering general camera calibration, the variables to optimize are the intrinsics plus the per-view extrinsics. Given the sparsity introduced by the extrinsics, the Schur complement can be used for acceleration.
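
Concretely (this is standard bundle-adjustment block elimination, not something CvLevMarq does): ordering the parameters as shared intrinsics x_c and per-view extrinsics x_p, the damped normal equations take the block form

    [ U    W ] [ Δx_c ]   [ g_c ]
    [ W^T  V ] [ Δx_p ] = [ g_p ]

where V is block-diagonal, with one small (e.g. 6x6) block per view. Eliminating Δx_p yields the reduced system

    ( U − W·V⁻¹·W^T )·Δx_c = g_c − W·V⁻¹·g_p

which is cheap to solve because inverting V only requires the per-view blocks; Δx_p then follows by back-substitution.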

Alternatively, an adaptive trust region could be used for acceleration.
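
That trust-region view would also answer the question I raised in the CHECK_ERR comments: the standard acceptance test is the gain ratio

    ρ = (actual decrease of ||err||²) / (decrease predicted by the linearized model)

where a step is accepted whenever ρ > 0, and λ (or the trust-region radius) is grown or shrunk depending on how close ρ is to 1. CvLevMarq only compares errNorm against prevErrNorm, which, as far as I can tell, is a cruder version of the same idea.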

Topics: OpenCV, Computer Vision