Depth Estimation
Holonomic robots such as drones have six degrees of freedom and can therefore move in any direction. Human drivers likewise rely on a wide field of view rather than looking only straight ahead when making decisions. In autonomous vehicles and self-driving cars, LiDAR is typically used to provide this all-around perception.
However, LiDAR faces challenges in several scenarios. In the SEARCH scenario, where accurate depth estimation is required for obstacle detection, and in the ADAS scenario, it struggles with dark or shiny objects and with the dazzling phenomenon. For mapping and obstacle avoidance on holonomic robots such as drones, its limitations include a large form factor, heavy weight, high power consumption, a limited field of view, and sparse points per object. Cost is another significant drawback, since multiple LiDARs are often required per vehicle.
Therefore, we want a LiDAR-free system. We plan to use fisheye images, which capture more information thanks to their much wider field of view.
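As a rough illustration of why the wider field of view matters, below is a minimal sketch of the equidistant fisheye projection model (r = f·θ): rays far off the optical axis still land inside the image, unlike in a pinhole model. The intrinsics fx, fy, cx, cy are placeholder values, not our calibrated parameters, and lens distortion terms are omitted.

```python
import numpy as np

def equidistant_project(points_cam, fx, fy, cx, cy):
    """Project 3D points (N, 3) in the camera frame to pixels using the
    equidistant fisheye model r = f * theta, where theta is the angle of the
    incoming ray from the optical axis. Distortion terms are omitted."""
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    # Angle from the optical axis; it can exceed 90 degrees, which is what
    # lets a fisheye lens image rays a pinhole model cannot represent.
    theta = np.arctan2(np.hypot(x, y), z)
    phi = np.arctan2(y, x)
    r = theta  # equidistant mapping: image radius proportional to theta
    u = cx + fx * r * np.cos(phi)
    v = cy + fy * r * np.sin(phi)
    return np.stack([u, v], axis=1)
```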
Phase 1: Spring 2023
We finalized the dataset collection pipeline for our rig setup as mentioned on the Setup page. Then, we reviewed relevant literature and formulated two parallel approaches detailed on the Methods page.
Presentation slides and poster are publicly available.
Phase 2: Fall 2023
We plan to test the temporal-based and fractional convolution approaches on simulated AirSim data, benchmarking them against the current OmniMVS results. We also plan to apply TensorRT optimization (in addition to model-level optimization), which we expect to yield approximately a 1.6x speedup in model latency, and to evaluate real-time performance.
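For the latency part of this evaluation, a minimal timing sketch is shown below; the model handles, input shape, and the TensorRT conversion step (e.g. via torch_tensorrt.compile) are illustrative assumptions rather than our exact pipeline, and the 1.6x figure is a target rather than a guaranteed outcome.

```python
import time
import torch

def measure_latency(model, example_input, warmup=20, iters=100):
    """Average forward-pass latency in milliseconds on the current GPU.
    CUDA kernels run asynchronously, so we synchronize around the timed loop."""
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):
            model(example_input)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            model(example_input)
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters * 1e3

# Hypothetical usage: `baseline` is the original PyTorch model and `trt_model`
# its TensorRT-compiled counterpart; the input shape below is a placeholder.
# dummy = torch.randn(1, 3, 512, 512, device="cuda")
# speedup = measure_latency(baseline, dummy) / measure_latency(trt_model, dummy)
# print(f"TensorRT speedup: {speedup:.2f}x")
```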