Resources

Our project has been wrapped up in a publication. The code and paper can be found on our project page.

This semester:

  1. We developed the amodal expander, a lightweight plug-in module that transforms standard, modal trackers into amodal ones by fine-tuning on a few hundred video sequences with data augmentation (a minimal sketch follows this list).
  2. We introduced PasteNOcclude, a data augmentation technique that synthesizes occlusion scenarios, helping the model learn to perceive occluded objects (a second sketch follows this list).
  3. Trained together with PasteNOcclude, the amodal expander improves detection and tracking of occluded objects on TAO-Amodal by 3.3% and 1.6%, respectively. Evaluated on people, our method delivers a 2x improvement over state-of-the-art modal baselines.
  • Presentation Slides (link)
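For readers who want a concrete picture of point 1, below is a minimal PyTorch sketch of the expander idea. The class name, feature dimensions, and residual-delta design are illustrative assumptions for this page, not the exact published architecture; see the paper and code for details.

```python
# Minimal sketch of an amodal expander, assuming a modal tracker that
# exposes per-detection ROI features and modal boxes. Names, sizes, and
# the residual-delta design are illustrative assumptions.
import torch
import torch.nn as nn


class AmodalExpander(nn.Module):
    """Lightweight head mapping a modal box (visible extent) plus its
    ROI feature to an amodal box (full extent, including occluded parts)."""

    def __init__(self, feat_dim: int = 256, hidden_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 4, hidden_dim),  # ROI feature + modal box (x1, y1, x2, y2)
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, 4),             # predicted box delta
        )

    def forward(self, roi_feats: torch.Tensor, modal_boxes: torch.Tensor) -> torch.Tensor:
        # Predict a residual so the expander stays close to the modal
        # prediction; only this small head is fine-tuned.
        delta = self.mlp(torch.cat([roi_feats, modal_boxes], dim=-1))
        return modal_boxes + delta


# Usage: the frozen modal tracker supplies features and boxes; only the
# expander's parameters receive gradients during fine-tuning.
expander = AmodalExpander()
roi_feats = torch.randn(8, 256)        # 8 detections, 256-d ROI features
modal_boxes = torch.rand(8, 4) * 100   # dummy modal boxes
amodal_boxes = expander(roi_feats, modal_boxes)
print(amodal_boxes.shape)  # torch.Size([8, 4])
```

The point of the design is that the base tracker stays frozen and only this small head is trained, which is what makes the expander cheap to fine-tune on a few hundred sequences.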
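Similarly, here is a toy sketch of the paste-and-occlude idea behind PasteNOcclude (point 2), assuming NumPy images and alpha-matted object cutouts. The function name, signature, and placement heuristic are illustrative assumptions, not the released implementation.

```python
# Toy sketch of a PasteNOcclude-style augmentation: paste an alpha-matted
# object cutout over a labeled target so the tracker sees synthetic
# occlusions at train time. Crucially, the target's amodal box label is
# left unchanged, which supervises the occluded extent. The interface
# and placement heuristic here are illustrative assumptions.
import numpy as np


def paste_n_occlude(image, target_box, occluder_rgba, coverage=0.5):
    """Paste `occluder_rgba` (h x w x 4, RGB + alpha) over part of
    `target_box` (x1, y1, x2, y2) in `image` (H x W x 3, uint8)."""
    x1, y1, x2, y2 = target_box
    h, w = occluder_rgba.shape[:2]
    # Scale the occluder to hide roughly `coverage` of the target's width.
    new_w = max(1, min(int((x2 - x1) * coverage), image.shape[1]))
    new_h = max(1, min(h * new_w // w, image.shape[0]))
    # Nearest-neighbor resize with plain NumPy indexing.
    ys = np.arange(new_h) * h // new_h
    xs = np.arange(new_w) * w // new_w
    occ = occluder_rgba[ys][:, xs]
    # Place the occluder flush with the target's right edge.
    px = int(np.clip(x2 - new_w, 0, image.shape[1] - new_w))
    py = int(np.clip(y1, 0, image.shape[0] - new_h))
    region = image[py:py + new_h, px:px + new_w].astype(np.float32)
    alpha = occ[..., 3:4].astype(np.float32) / 255.0
    blended = alpha * occ[..., :3].astype(np.float32) + (1.0 - alpha) * region
    out = image.copy()
    out[py:py + new_h, px:px + new_w] = blended.astype(np.uint8)
    return out


# Usage with dummy data: the amodal label for the target stays
# (200, 150, 360, 400) even though part of it is now hidden.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
cutout = np.full((120, 80, 4), 255, dtype=np.uint8)  # opaque dummy occluder
augmented = paste_n_occlude(frame, (200, 150, 360, 400), cutout)
```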

In the previous semester, our progress fell into three areas:

  1. Built TAO-Amodal, the largest amodal object detection and tracking benchmark
  2. Conducted an extensive survey of amodal detection and tracking methods
  3. Proposed our amodal detection baseline
  • Presentation Video (coming soon)
  • Presentation Slides (link)
  • Poster (link)