FRL Pittsburgh is on a quest to build photorealistic face avatars from high-quality capture-studio data, aiming to appear as close to real life as possible. However, the real use of these avatars is not simply reconstructing that high-quality data, but animating the avatars in real time, at the same fidelity, from limited sensors mounted on a VR headset, so that social interactions in VR become believable. The goal of this project is to tackle the following two main challenges:
(1) the headset sensors see only non-overlapping patches of the face, captured by infrared cameras mounted very close to the skin;
(2) the system must be robust to variations in the input caused by factors such as slightly different camera positions on the VR headset, differences in how users wear the headset on their face, and lighting or background variations in the environment.
Problem Statement
You can see our methods and results on the per-semester pages: https://mscvprojects.ri.cmu.edu/2020teamm/spring-semester/