Hook: The broad scope of my MSCV project is “Driving in Bad Weather”. That’s quite a broad research topic. During the MSCV program, I am focusing on developing robust scene understanding algorithms that aid safe autonomous driving in bad weather.
Motivation: In autonomous driving, training on clear weather images is straightforward, as we have an abundance of them with high-quality ground truth annotations (e.g., the Cityscapes dataset), allowing for supervised model training. However, training on bad weather images is challenging because producing high-quality annotations for them is time-consuming and labor-intensive. Therefore, in this project, we focus on the setting of unsupervised domain adaptation, where we utilize labeled clear weather images and unlabeled bad weather images.
Method 1: The first approach employs an unpaired image-to-image translation framework to convert clear weather images into bad weather images. As the background remains the same, we can transfer the ground truth labels from the clear weather images to the generated bad weather images. Subsequently, we fine-tune the model using these generated images. If the simulated bad weather images are realistic enough, the fine-tuned model should generalize well to actual bad weather scenes. This work is included in the Spring 2023 section.
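The label-transfer step above can be sketched as follows. This is a minimal illustration, not the project's actual pipeline: `translate` stands in for a trained unpaired image-to-image model (CycleGAN-style), and the function names are my own.

```python
def make_bad_weather_set(clear_images, labels, translate):
    """Pair each translated (simulated bad weather) image with the label of
    its clear-weather source. This is valid because unpaired translation
    changes appearance (rain, fog) but leaves the scene layout unchanged,
    so the original ground truth still applies."""
    return [(translate(img), lbl) for img, lbl in zip(clear_images, labels)]
```

The resulting (image, label) pairs can then be used to fine-tune the segmentation model exactly as with ordinary supervised data.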
Method 2: The second approach utilizes a domain adaptation framework. We initially pretrain it on source-domain labeled clear weather images in a supervised manner, then adapt it to target-domain unlabeled bad weather images in an unsupervised manner. The supervised stage, with ground truth labels, is straightforward to train. The unsupervised stage employs a self-training (or self-distillation) strategy, where the student model generates its predictions, and the teacher model, which is the exponential moving average of the student, provides ‘pseudo-labels’. This work is included in the Fall 2023 section.
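The two core pieces of the self-training stage can be sketched in a few lines. This is an illustrative simplification under assumed conventions (weights as flat lists of floats, `-1` as an ignore label, a hypothetical confidence threshold), not the project's implementation:

```python
def ema_update(teacher_w, student_w, alpha=0.99):
    """Teacher weights track an exponential moving average of the student's:
    teacher = alpha * teacher + (1 - alpha) * student, applied elementwise."""
    return [alpha * t + (1 - alpha) * s for t, s in zip(teacher_w, student_w)]

def pseudo_label(teacher_probs, threshold=0.9):
    """Turn the teacher's per-pixel class probabilities into pseudo-labels,
    keeping only confident predictions (-1 marks pixels to ignore in the loss)."""
    labels = []
    for probs in teacher_probs:
        conf = max(probs)
        labels.append(probs.index(conf) if conf >= threshold else -1)
    return labels
```

In each unsupervised step, the student is trained against the teacher's pseudo-labels on bad weather images, and the teacher is then refreshed with `ema_update`; the slow-moving teacher stabilizes the pseudo-labels against noisy individual student updates.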