Visual SLAM relies on calibrated stereo camera parameters. However, these parameters may drift over time due to physical impacts on the robot or thermal expansion of the camera rig. A method that can re-calibrate the extrinsics of a stereo pair online can significantly improve a robot's autonomy and extend its deployment time.
In this project, we work on online calibration and improve both the front end and the back end of SLAM. In the front end, we adopt deep-learning-based interest points and use geometric optimization to improve calibration. In the back end, we formulate a factor-graph optimization that recovers the ground-truth calibration parameters in real time. Experiments demonstrate the calibration performance, and several ablation analyses are presented.
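To make the back-end idea concrete, the following is a minimal sketch (not the project's implementation) of recovering one drifted extrinsic parameter, the stereo baseline, by least squares over disparity observations. The pinhole model d = f·b/z, the focal length, and all numeric values are assumptions for illustration only; a real back end would optimize the full 6-DoF extrinsics inside a factor graph.

```python
# Hypothetical sketch: recover a drifted stereo baseline from disparity
# measurements by linear least squares. Assumed pinhole stereo model:
# disparity d = f * b / z for a point at depth z.

f = 700.0          # focal length in pixels (assumed value)
b_true = 0.12      # true baseline in meters, to be recovered
b_nominal = 0.10   # drifted factory calibration (unused by the solver)

depths = [2.0, 3.5, 5.0, 8.0, 12.0]
# Simulated noise-free disparity observations from the true baseline.
disparities = [f * b_true / z for z in depths]

# Model d_i = (f / z_i) * b is linear in b, so least squares gives
# b = sum(a_i * d_i) / sum(a_i^2) with a_i = f / z_i.
a = [f / z for z in depths]
b_est = sum(ai * di for ai, di in zip(a, disparities)) / sum(ai * ai for ai in a)
print(round(b_est, 4))  # recovers the true baseline, 0.12
```

In the actual system each such measurement would instead become a factor in a nonlinear factor graph, jointly optimized with the robot poses so that calibration is refined continuously as new observations arrive.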
This project aims to estimate the relative extrinsics of a stereo camera on a ground robot. This must be done online, while the robot is operational in the real world, as opposed to offline calibration with a checkerboard. Such online calibration can increase robot deployment time and make the robot robust to deviations in camera parameters caused by physical factors such as minor collisions or thermal expansion of the stereo camera rig.
Visual SLAM algorithms typically fuse data from stereo cameras and an IMU, which requires calibration of both intrinsic and extrinsic parameters. Factory calibration usually provides sufficient estimates of these parameters; however, in many systems they can vary over time due to thermal changes or physical misalignment caused by internal or external factors. It would be helpful to understand which parameters actually change in a given system design, and how best to accommodate that change by implementing an online calibration system that mitigates its impact over time. Additionally, it would be helpful to understand how these changes affect the fused sensor data.