Extend DDRM to Diverse Artifact Types
We plan to apply DDRM beyond specular highlights to additional artifact types such as motion blur, obstructions that require inpainting, and bubbles. This will test the model's generalization across varied clinical conditions.
Train a Pixel-wise YOLO Segmentation Model for Inpainting Guidance
A dedicated YOLO-based segmentation model will be trained to localize artifact regions at the pixel level. These masks will guide the generative inpainting process, enabling more precise and interpretable results.
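As a minimal sketch of how such masks would steer inpainting, the composite below keeps trusted pixels from the original frame and takes artifact pixels from the generative model's output. The function name and the convention that the mask is 1 over artifact regions are our own illustration, not a fixed interface.

```python
import numpy as np

def composite_inpainting(original: np.ndarray,
                         generated: np.ndarray,
                         mask: np.ndarray) -> np.ndarray:
    """Blend generated content into artifact regions only.

    original, generated: float images in [0, 1], shape (H, W, C).
    mask: binary array, shape (H, W); 1 where the segmentation
    model flags an artifact, 0 elsewhere (illustrative convention).
    """
    mask = mask[..., None].astype(original.dtype)  # broadcast over channels
    # Keep trusted pixels from the original frame; fill artifact
    # pixels from the generative model's output.
    return (1.0 - mask) * original + mask * generated
```

Because the mask gates the blend pixel by pixel, tighter segmentation directly translates into less unnecessary alteration of clean tissue, which is what makes the result more interpretable.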
Explore Classical Computer Vision Techniques for Under-exposure
We will experiment with traditional image enhancement methods such as histogram equalization, CLAHE, and adaptive gamma correction to address under-exposure in a computationally efficient manner.
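Two of these techniques can be sketched directly in NumPy; CLAHE itself would come from a library such as OpenCV (`cv2.createCLAHE`). The gamma-selection rule below, which maps the frame's mean brightness toward mid-gray, is one common heuristic and an assumption of this sketch, not a settled design choice.

```python
import numpy as np

def equalize_histogram(gray: np.ndarray) -> np.ndarray:
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    # Stretch the CDF so the darkest occupied bin maps to 0
    # and the brightest to 255.
    cdf_min = cdf[cdf > 0][0]
    denom = max(cdf[-1] - cdf_min, 1)
    lut = np.round((cdf - cdf_min) / denom * 255).astype(np.uint8)
    return lut[gray]

def adaptive_gamma(gray: np.ndarray) -> np.ndarray:
    """Pick gamma from mean brightness: darker frames get a stronger boost."""
    mean = np.clip(gray.mean() / 255.0, 1e-3, 0.999)
    gamma = np.log(0.5) / np.log(mean)  # maps the mean toward mid-gray
    out = 255.0 * (gray / 255.0) ** gamma
    return out.astype(np.uint8)
```

Both run in milliseconds on full frames, which is what makes them attractive as lightweight alternatives or preprocessing steps alongside the generative pipeline.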
Develop a Real-Time Inference Pipeline
To ensure clinical applicability, our goal is to optimize the full pipeline for real-time processing through model compression and batch inference.
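The batching half of that plan reduces to grouping incoming frames so one forward pass is amortized over several of them. A minimal, framework-agnostic sketch (the function name and generator shape are illustrative):

```python
from typing import Iterable, Iterator, List

def batch_frames(frames: Iterable, batch_size: int) -> Iterator[List]:
    """Group a stream of frames into fixed-size batches so the model
    amortizes one forward pass over several frames at once."""
    batch: List = []
    for frame in frames:
        batch.append(frame)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch at end of stream
        yield batch
```

In a real-time setting the batch size trades throughput against latency: larger batches use the accelerator better but delay the first frame of each batch.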
Reconstruct Artifact-Free Video with Temporal Diffusion Models
We aim to integrate temporally aware generative models (e.g., video diffusion) to maintain temporal consistency and produce coherent, artifact-free videos from restored image frames.
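Video diffusion is the target, but as a baseline illustration of what temporal consistency means here, an exponential moving average over independently restored frames suppresses frame-to-frame flicker. This is our own simple stand-in for comparison, not the proposed method.

```python
import numpy as np

def temporally_smooth(frames, alpha: float = 0.8):
    """Exponentially weighted blend of consecutive restored frames.

    frames: iterable of float arrays with identical shape.
    alpha: weight on the current frame; a lower alpha gives
    stronger smoothing but more motion lag (illustrative default).
    """
    smoothed = []
    state = None
    for frame in frames:
        # Blend the new frame with the running estimate of the scene.
        state = frame if state is None else alpha * frame + (1 - alpha) * state
        smoothed.append(state)
    return smoothed
```

A temporally aware generative model would instead enforce consistency inside the sampling process itself, avoiding the motion blur this naive blending introduces.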
Evaluate on a More Diverse Dataset
We will validate our approach on broader and more diverse datasets—including different organ systems, lighting conditions, and procedures—to assess generalizability and robustness.