Visual SLAM relies on the calibrated parameters of a stereo camera pair. However, these parameters can drift over time due to mechanical shocks (e.g., the robot bumping into obstacles) or thermal expansion of the camera rig. A method that can re-calibrate the extrinsics of a stereo pair online can significantly improve a robot's autonomy and extend its deployment time.
In this project, we address online calibration and improve both the front end and the back end of SLAM. In the front end, we adopt deep-learning-based interest points and refine the calibration with geometric optimization. In the back end, we formulate a factor-graph optimization that precisely recovers the ground-truth calibration parameters in real time. Experiments demonstrate the calibration performance, and several ablation analyses are presented.
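To make the back-end idea concrete, the sketch below re-estimates stereo extrinsics by minimizing reprojection error with nonlinear least squares. This is only a toy stand-in for the factor-graph formulation described above: the intrinsics, the synthetic point cloud, the noise model, and all variable names are illustrative assumptions, and `scipy.optimize.least_squares` replaces a proper factor-graph solver.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Assumed shared pinhole intrinsics (known and fixed in this toy setup).
K = np.array([[500., 0., 320.],
              [0., 500., 240.],
              [0., 0., 1.]])

def project(pts_cam):
    """Project 3-D camera-frame points to pixel coordinates."""
    uvw = (K @ pts_cam.T).T
    return uvw[:, :2] / uvw[:, 2:3]

def residuals(params, pts_left, obs_right):
    """Reprojection residuals of left-frame points seen by the right camera.
    params = [rotation vector (3), translation (3)] of the left->right transform."""
    rvec, t = params[:3], params[3:]
    pts_right = Rotation.from_rotvec(rvec).apply(pts_left) + t
    return (project(pts_right) - obs_right).ravel()

rng = np.random.default_rng(0)
# Synthetic landmarks expressed in the left camera frame.
pts_left = rng.uniform([-1., -1., 4.], [1., 1., 8.], size=(50, 3))

# "True" extrinsics: ~10 cm baseline plus a small rotational drift,
# simulating a rig perturbed by a bump or thermal expansion.
rvec_gt = np.array([0.01, -0.02, 0.005])
t_gt = np.array([-0.10, 0.0, 0.0])
obs_right = project(Rotation.from_rotvec(rvec_gt).apply(pts_left) + t_gt)
obs_right += rng.normal(0.0, 0.2, obs_right.shape)  # 0.2 px observation noise

# Start from the nominal (drift-free) factory calibration and re-estimate.
x0 = np.concatenate([np.zeros(3), np.array([-0.10, 0.0, 0.0])])
sol = least_squares(residuals, x0, args=(pts_left, obs_right))
print("recovered rotation:", np.round(sol.x[:3], 4))
print("recovered translation:", np.round(sol.x[3:], 4))
```

In the actual system, each stereo observation would contribute one factor over the shared extrinsic variable, so the calibration is refined incrementally as new measurements arrive rather than in a single batch solve.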