Project Overview

This project aims to understand kitchens in 3D to enable robots to learn by watching video demonstrations.

The first part of the project, data collection, includes the following components: building a 3D kitchen capture system, reconstructing a real-world kitchen, capturing 3D recordings of kitchen tasks, and developing rich 3D annotations. Here is an illustration from the project pitch:

In the spring semester, we developed a synchronized multi-camera capture system. The extrinsic pose of each camera is estimated by detecting AprilTags. Here is our miniature camera setup, which we plan to move to the actual kitchen:

We can then jointly capture 3D video with the cameras from different viewpoints:

We also use off-the-shelf models including ViTPose for human pose estimation:

In the next semester, we will continue the project by introducing semantic understanding of 3D kitchen scenes and building a pipeline for visual imitation learning.

Spring Progress

Here is what we have done in the spring of 2023. Our presentation for the paper survey is here.

Survey over Existing Datasets

GMU-Kitchen is a dataset consisting of 9 RGB-D kitchen video sequences with object annotations. However, it lacks fine-grained detail and does not contain any human activity.

GMU-Kitchen

ScanNet is a dataset of over 1600 indoor scenes with detailed annotations.

There are also a couple of other datasets, but they contain either only static scenes or no 3D information. The table below shows the differences:

| Dataset | 3D Information | Human Activity | Kitchen |
| --- | --- | --- | --- |
| Epic Kitchen | ✗ | ✓ | ✓ |
| GMU Kitchen | ✓ | ✗ | ✓ |
| MSR Action3D | ✓ | ✓ | ✗ |
| ScanNet | ✓ | ✗ | ✗ |

Comparison of Some Existing Datasets

Camera Setup

We have built a miniature synchronized multi-camera capture system in our lab and a digital twin in MuJoCo to test our calibration and scene understanding algorithms; we plan to replicate the system in a real kitchen later.
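
As a rough illustration, the sketch below renders views from several fixed cameras in a MuJoCo scene, which is how the digital twin can be used to exercise our calibration and scene-understanding code. The toy XML, camera names, and image size are placeholders, not our actual kitchen model.

```python
# Minimal sketch: rendering views from a MuJoCo "digital twin" with several
# fixed cameras. The scene XML and camera names below are placeholders.
import mujoco

MODEL_XML = """
<mujoco>
  <worldbody>
    <light pos="0 0 3"/>
    <geom type="box" size="0.4 0.3 0.02" pos="0 0 0.4"/>   <!-- stand-in counter -->
    <camera name="cam0" pos="1.0 0 0.8" xyaxes="0 -1 0 0.5 0 1"/>
    <camera name="cam1" pos="-1.0 0 0.8" xyaxes="0 1 0 -0.5 0 1"/>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(MODEL_XML)
data = mujoco.MjData(model)
mujoco.mj_forward(model, data)

renderer = mujoco.Renderer(model, height=480, width=640)
frames = {}
for cam in ("cam0", "cam1"):
    renderer.update_scene(data, camera=cam)
    frames[cam] = renderer.render()          # (480, 640, 3) uint8 image
print({name: img.shape for name, img in frames.items()})
```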

We then use AprilTags to define our world coordinates and calibrate camera extrinsics.
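
The sketch below shows the general idea, assuming the pupil_apriltags detector and previously calibrated intrinsics; the tag size, intrinsics, and file name are illustrative placeholders rather than our exact calibration script. The estimated tag pose maps world (tag) coordinates into the camera frame, and inverting it gives the camera extrinsics in the world frame.

```python
# Minimal sketch: estimating a camera's extrinsic pose from an AprilTag that
# defines the world frame. Library choice (pupil_apriltags), tag size, and
# intrinsics are placeholders.
import cv2
import numpy as np
from pupil_apriltags import Detector

fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0   # from intrinsic calibration
TAG_SIZE = 0.10                               # tag edge length in meters

detector = Detector(families="tag36h11")
gray = cv2.imread("view_cam0.png", cv2.IMREAD_GRAYSCALE)

detections = detector.detect(
    gray, estimate_tag_pose=True,
    camera_params=(fx, fy, cx, cy), tag_size=TAG_SIZE,
)

for det in detections:
    # pose_R, pose_t map tag (world) coordinates into the camera frame.
    T_cam_world = np.eye(4)
    T_cam_world[:3, :3] = det.pose_R
    T_cam_world[:3, 3] = det.pose_t.ravel()

    # Invert to get the camera pose expressed in the world (tag) frame.
    T_world_cam = np.linalg.inv(T_cam_world)
    print(f"tag {det.tag_id}: camera position in world =", T_world_cam[:3, 3])
```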

Scene Segmentation

  • Used SAM to perform image segmentation
  • Transferred the results from pixels to 3D points using the calibrated camera poses (see the sketch after this list)
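
A minimal sketch of this two-step process is below, assuming the segment_anything predictor API, a depth map aligned with the RGB image, and calibrated intrinsics/extrinsics; the file names, click prompt, and camera parameters are placeholders.

```python
# Minimal sketch: run SAM on one view, then lift the masked pixels to 3D world
# points using a depth map and the camera pose. File names, intrinsics, and
# the SAM checkpoint path are placeholders.
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# --- 2D segmentation with SAM (point prompt) ---
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("rgb_cam0.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)
masks, scores, _ = predictor.predict(
    point_coords=np.array([[400, 300]]),   # a click on the object of interest
    point_labels=np.array([1]),
)
mask = masks[np.argmax(scores)]            # best mask, shape (H, W), bool

# --- lift masked pixels to 3D using depth + camera pose ---
K = np.array([[600.0, 0, 320.0], [0, 600.0, 240.0], [0, 0, 1]])  # intrinsics
T_world_cam = np.eye(4)                     # camera-to-world from calibration
depth = cv2.imread("depth_cam0.png", cv2.IMREAD_UNCHANGED) / 1000.0  # meters

v, u = np.nonzero(mask & (depth > 0))       # pixel rows/cols inside the mask
z = depth[v, u]
x = (u - K[0, 2]) * z / K[0, 0]             # back-project through the pinhole model
y = (v - K[1, 2]) * z / K[1, 1]
points_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)   # (N, 4), camera frame
points_world = (T_world_cam @ points_cam.T).T[:, :3]        # (N, 3), world frame
print("segmented 3D points:", points_world.shape)
```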

Human Pose Estimation

We use SAM and ViTPose to segment humans and estimate poses.
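
Below is a minimal sketch of the pose-estimation step, assuming the ViTPose reference implementation built on mmpose's top-down API; the config and checkpoint paths and the person bounding box (which in our pipeline could come from a SAM person mask) are placeholders.

```python
# Minimal sketch: 2D human keypoints with ViTPose via its mmpose-based API.
# Config/checkpoint paths and the person bounding box are placeholders.
import numpy as np
from mmpose.apis import init_pose_model, inference_top_down_pose_model

pose_model = init_pose_model(
    "configs/ViTPose_base_coco_256x192.py",   # placeholder config
    "vitpose-b.pth",                          # placeholder checkpoint
    device="cuda:0",
)

# One person box in xyxy format, e.g. the tight box around a SAM person mask.
person_results = [{"bbox": np.array([100, 50, 400, 700])}]

pose_results, _ = inference_top_down_pose_model(
    pose_model,
    "rgb_cam0.png",
    person_results,
    format="xyxy",
    dataset="TopDownCocoDataset",
)
keypoints = pose_results[0]["keypoints"]      # (17, 3): x, y, confidence
print(keypoints.shape)
```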

Future Work

Our next steps will be as follows:

  • Semantic understanding of 3D kitchen scenes.
  • Literature survey of visual imitation learning algorithms.
  • Building a pipeline that allows a robot to learn generalizable manipulation skills, enabling it to perform tasks by watching demonstration videos.

Team

Tianwen Fu

Tianwen Fu is a master's student in Computer Vision at Carnegie Mellon University. He received his bachelor's degree in science from the Chinese University of Hong Kong, where he worked with Prof. Chi-wing Fu on graphics. From 2020 to 2021, he worked as a research intern at SenseTime, supervised by Prof. Jifeng Dai, on AutoML for vision tasks.

Achleshwar Luthra

Achleshwar Luthra is a master's student in Computer Vision at the Robotics Institute, Carnegie Mellon University. He completed his undergraduate degree at BITS Pilani, majoring in Electrical and Electronics Engineering. Prior to CMU, he did research internships at UIUC and UC Berkeley on 3D reconstruction from images and videos.

Ben Eisner (Supervisor)

Ben Eisner is a Machine Learning and Robotics researcher. He builds robotic systems that learn to interact with the unstructured world. He is currently a Ph.D. student in the Robotics Institute at Carnegie Mellon University. He is a member of the Robots Perceiving and Doing Lab, led by Prof. David Held.

David Held (Supervisor)

David Held is an assistant professor at Carnegie Mellon University in the Robotics Institute and is the director of the RPAD lab: Robots Perceiving And Doing. His research focuses on perceptual robot learning, i.e. developing new methods at the intersection of robot perception and planning for robots to learn to interact with novel, perceptually challenging, and deformable objects. David has applied these ideas to robot manipulation and autonomous driving.