Introduction

Depth Estimation

Holonomic robots such as drones have six degrees of freedom and can therefore move in any direction. Human drivers, too, rely on a wide field of view rather than just the road ahead when making decisions. In autonomous vehicles and self-driving cars, LiDAR is commonly used to provide this surround perception.

However, LiDAR faces challenges in several scenarios, such as the SEARCH scenario, where accurate depth estimation is required for obstacle detection, and the ADAS scenario, where it struggles to detect dark or shiny objects and suffers from the dazzling phenomenon. For mapping and obstacle avoidance on holonomic robots like drones, LiDAR's limitations include a large form factor, heavy weight, high power consumption, a limited field of view, and sparse points per object. Cost is a further drawback, since multiple LiDARs may be required per vehicle.

Therefore, we want a LiDAR-free system. We plan to use fisheye images, since their wider field of view captures more of the scene per camera.

Phase 1: Spring 2023

We finalized the AirSim dataset collection pipeline for our rig setup as mentioned on the Setup page. Then, we reviewed relevant literature and formulated two parallel approaches detailed on the Spring 2023 page.
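As a rough illustration of what such a collection pipeline involves, the sketch below grabs one RGB and depth pair through AirSim's Python API. It is a minimal example, not the project's actual pipeline: the real rig uses multiple cameras whose names and poses come from the project's settings.json, so the camera name "front_center" here is only a placeholder.

```python
# Minimal sketch: capture one RGB + depth pair from AirSim.
# Camera names and image types depend on the rig's settings.json.
import airsim
import numpy as np

client = airsim.MultirotorClient()
client.confirmConnection()

responses = client.simGetImages([
    airsim.ImageRequest("front_center", airsim.ImageType.Scene, False, False),
    airsim.ImageRequest("front_center", airsim.ImageType.DepthPerspective, True),
])
rgb_resp, depth_resp = responses

# Uncompressed scene image arrives as a flat uint8 buffer
# (3 BGR channels, or 4 in older AirSim releases).
rgb = np.frombuffer(rgb_resp.image_data_uint8, dtype=np.uint8)
rgb = rgb.reshape(rgb_resp.height, rgb_resp.width, -1)

# Float depth arrives as a flat list of per-pixel distances in meters.
depth = np.array(depth_resp.image_data_float, dtype=np.float32)
depth = depth.reshape(depth_resp.height, depth_resp.width)
```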

Presentation slides and poster are publicly available.

Phase 2: Fall 2023

Due to the Sim2Real domain shift and only minimal improvements from various convolution-based approaches, we switched to leveraging recent foundation models, such as DINOv2, for fisheye depth estimation and evaluated their performance across several environments and datasets, as detailed on the Fall 2023 page.
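To make the idea concrete, the sketch below pairs a frozen DINOv2 backbone with a small convolutional depth head. This is an assumed, illustrative setup rather than the project's actual architecture; the DepthHead module, its layer sizes, and the input resolution are placeholders.

```python
# Hedged sketch: frozen DINOv2 features feeding a toy dense-depth head.
import torch
import torch.nn as nn

# Pretrained DINOv2 ViT-S/14 backbone from torch.hub (downloads weights).
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False  # keep the foundation model frozen

class DepthHead(nn.Module):
    """Illustrative head mapping patch features to a coarse depth map."""
    def __init__(self, in_dim=384):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_dim, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 1, 1),
        )
    def forward(self, feats):
        return self.conv(feats)

head = DepthHead()

# Input sides must be multiples of the 14-pixel patch size.
img = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    # reshape=True returns patch features as a (B, C, H/14, W/14) grid.
    feats = backbone.get_intermediate_layers(img, n=1, reshape=True)[0]

depth = head(feats)  # (1, 1, 16, 16) coarse depth prediction
print(depth.shape)
```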

Presentation slides and poster are publicly available.