Motivation
Depth estimation is one of the most critical tasks in a perception pipeline, as most downstream components, such as SLAM, object detection, and motion planning, require high-confidence depth estimates to operate effectively. However, vision-based depth estimation approaches face two issues: (1) they typically require ground-truth depth for supervision, and (2) existing datasets are heavily biased toward daytime scenes. We therefore focus on evaluating and extending self-supervised depth estimation approaches to handle night-time environments.
Problem
Our task is to evaluate the performance of existing depth estimation approaches on night-time-specific data and to reduce the gap between daylight and low-light operation.
Solution
Our primary approach is to build on existing work in depth estimation and leverage multiple passive sensors that can operate in low-light or no-light conditions, including thermal cameras, light-intensified cameras, and other infrared (IR) cameras. We will then explore fusing data across these sensors to create an effective multi-modal stereo depth estimation pipeline for night-time operation.
Our code is available at https://github.com/night-time-stereo/mono-thermal.
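For illustration, the sketch below (not this project's code) shows the kind of photometric reprojection objective that Monodepth2-style self-supervised methods optimize in place of ground-truth depth: a neighboring frame is warped into the target view using the predicted depth and relative pose, and the photometric error supervises both networks. Function names, tensor shapes, and camera conventions here are assumptions made for the example.

```python
# Minimal sketch of a self-supervised photometric reprojection loss
# (Monodepth2-style); illustrative only, not this project's pipeline.
import torch
import torch.nn.functional as F

def backproject(depth, inv_K):
    """Lift each pixel to a homogeneous 3D point using predicted depth (B, 1, H, W)."""
    b, _, h, w = depth.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=depth.dtype, device=depth.device),
        torch.arange(w, dtype=depth.dtype, device=depth.device),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).view(1, 3, -1)  # (1, 3, H*W)
    cam_points = depth.view(b, 1, -1) * (inv_K @ pix)                       # (B, 3, H*W)
    ones = torch.ones(b, 1, h * w, dtype=depth.dtype, device=depth.device)
    return torch.cat([cam_points, ones], dim=1)                             # (B, 4, H*W)

def photometric_reprojection_loss(target, source, depth, T, K, inv_K):
    """Warp `source` into the `target` view via depth and relative pose T (B, 4, 4),
    with intrinsics K / inv_K of shape (B, 3, 3), then compare photometrically."""
    b, _, h, w = target.shape
    cam_points = backproject(depth, inv_K)            # (B, 4, H*W)
    proj = K @ T[:, :3, :]                            # (B, 3, 4)
    pix = proj @ cam_points                           # (B, 3, H*W)
    pix = pix[:, :2] / (pix[:, 2:3] + 1e-7)           # perspective divide
    pix = pix.view(b, 2, h, w).permute(0, 2, 3, 1)    # (B, H, W, 2)
    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    px = pix[..., 0] / (w - 1) * 2 - 1
    py = pix[..., 1] / (h - 1) * 2 - 1
    grid = torch.stack([px, py], dim=-1)
    warped = F.grid_sample(source, grid, padding_mode="border", align_corners=True)
    return (target - warped).abs().mean()             # L1 photometric error
```

Because the supervision signal comes entirely from view synthesis between frames, the same objective can in principle be applied to thermal or other IR imagery, which is what makes it a natural starting point for night-time depth estimation.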