The autonomous driving industry has grown significantly in recent years. A major hurdle to this progress is accurately perceiving a vehicle’s environment under adverse weather conditions such as rain, snow, and fog. Such weather can obscure objects and introduce visual distortions, such as reflections or light dispersion, that are absent in clear weather. Because training data for these conditions is scarce, perception algorithms often perform suboptimally in them.
Beyond weather, static obstacles such as road work also pose significant challenges. To tackle this, we focus on 3D reconstruction of such scenarios, which enables planning algorithms to test navigation through construction zones. This approach advances autonomous driving automation while reducing the need for human intervention.
Our first objective is to improve the detection model’s performance in bad weather.
The baseline trains the detector on clear-weather images. However, a model trained only on clear-day images struggles in bad weather such as fog, rain, or snow, where visual features change significantly. To address this, one approach is to synthesize bad-weather data and train the detector on it, improving its robustness in challenging conditions; a minimal augmentation sketch is shown below. The other is de-weathering: converting bad-weather images back into clear ones, allowing the detector to perform as though it were always operating in ideal conditions.
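As one concrete way to realize the synthesis approach, the sketch below augments clear images with the weather transforms from the albumentations library. This is an illustration under our own assumptions, not necessarily the exact recipe used in this work; the transform choices and probabilities are placeholders. Since these transforms are photometric, existing bounding-box labels need no adjustment.

```python
import albumentations as A
import cv2

# Illustrative weather augmentation pipeline (our assumption, not the exact
# recipe used in this project): each clear image is randomly degraded with
# rain, fog, or snow before being fed to the detector during training.
weather_aug = A.Compose([
    A.OneOf([
        A.RandomRain(p=1.0),   # rain streaks and slight darkening
        A.RandomFog(p=1.0),    # haze layered over the scene
        A.RandomSnow(p=1.0),   # bright snow speckles
    ], p=0.8),                 # leave roughly 20% of images untouched
])

image = cv2.imread("clear_day.png")            # hypothetical file name
augmented = weather_aug(image=image)["image"]  # degraded training sample
```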

Our second objective is to utilize NeRF for defogging. We collected street-view data using CARLA, an autonomous driving simulation platform in which the camera’s position and attributes can be customized, making it straightforward to obtain per-pixel depth for the scene. With this depth information, we first applied the atmospheric scattering model to generate foggy images, which served as ground truth for NeRF supervision.
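The atmospheric scattering model expresses the foggy image I as a blend of the clear scene radiance J and the airlight A, weighted by a depth-dependent transmission t(x) = exp(−β·d(x)): I(x) = J(x)·t(x) + A·(1 − t(x)). The sketch below implements this; the scattering coefficient β and airlight value are illustrative defaults, not the exact settings used in our data generation.

```python
import numpy as np

def add_fog(clear_rgb, depth_m, beta=0.05, airlight=0.9):
    """Synthesize fog with the atmospheric scattering model:
        I(x) = J(x) * t(x) + A * (1 - t(x)),  t(x) = exp(-beta * d(x)).
    beta and airlight here are illustrative values (our assumption).
    clear_rgb: float image in [0, 1], shape (H, W, 3).
    depth_m:   per-pixel depth in meters, shape (H, W), e.g. from CARLA.
    """
    t = np.exp(-beta * depth_m)[..., None]    # transmission, shape (H, W, 1)
    return clear_rgb * t + airlight * (1.0 - t)
```

A larger β produces denser fog, and an airlight close to 1 gives the whitish haze typical of daytime fog; distant pixels (large d) converge toward the airlight, as expected.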
Our third objective is to reconstruct 3D scenes from 2D image sequences. We leverage the Metric3D model and DROID-SLAM to accomplish this; the sketch below illustrates how their outputs combine into a 3D point cloud.
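To make the combination concrete, the following sketch back-projects a per-pixel depth map (as produced by a metric depth model such as Metric3D) through a camera pose (as estimated by DROID-SLAM) into a world-frame point cloud using the standard pinhole model. The function and variable names are our own illustration, not the actual APIs of either repository.

```python
import numpy as np

def backproject_to_world(depth_m, K, T_world_cam):
    """Lift a depth map into a world-frame point cloud (pinhole model).
    depth_m:      (H, W) per-pixel depth in meters (e.g. from Metric3D).
    K:            (3, 3) camera intrinsics matrix.
    T_world_cam:  (4, 4) camera-to-world pose (e.g. from DROID-SLAM).
    Returns an (H*W, 3) array of points in world coordinates.
    """
    H, W = depth_m.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)  # homogeneous pixels
    rays = pix @ np.linalg.inv(K).T           # normalized camera rays (z = 1)
    pts_cam = rays * depth_m.reshape(-1, 1)   # scale each ray by its depth
    pts_h = np.concatenate([pts_cam, np.ones((H * W, 1))], axis=1)
    return (pts_h @ T_world_cam.T)[:, :3]     # transform into the world frame
```

Accumulating these per-frame clouds over a sequence of posed images yields the reconstructed scene geometry that downstream planning algorithms can be tested against.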
More details are presented in the following sections.