Our project is called Dynamic Implicit Neural Representation for Avatar Animation. We aim to build a photorealistic human avatar whose facial expression and head pose can be explicitly controlled. Social motivations include using a human avatar as a digital twin in the AR/VR space. It also has applications in the entertainment industry, such as overlaying another language onto a performance for seamless dubbing. On the technical side, such a model is useful for generating synthetic data, thereby expanding the available training data.
Our main research objective is to build a model that can dynamically change expressions through the use of Neural Radiance Fields (NeRFs) and Generative Adversarial Networks (GANs).
We will be using MoFaNeRF: Morphable Facial Neural Radiance Fields as the basis of our experiments. Our project is split into two primary categories:
- Generalizability: Given the dataset that MoFaNeRF is trained on, we would like to generalize to a more diverse population.
- Controllability: We would like to control the expressions using the Facial Action Coding System (FACS).
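To illustrate the controllability goal, here is a minimal sketch of how an expression-conditioned radiance field could be wired up: a NeRF-style MLP that, in addition to a positionally encoded 3D point, takes a FACS-style action-unit vector as a conditioning input. All names, dimensions, and weights here are hypothetical (the network is randomly initialized and untrained); this is not MoFaNeRF's actual architecture, only a toy of the conditioning idea.

```python
import numpy as np


def positional_encoding(x, num_freqs=4):
    # Standard NeRF-style encoding: raw coords plus sin/cos at octave frequencies.
    feats = [x]
    for i in range(num_freqs):
        feats.append(np.sin((2.0 ** i) * np.pi * x))
        feats.append(np.cos((2.0 ** i) * np.pi * x))
    return np.concatenate(feats, axis=-1)


class ExpressionConditionedField:
    """Toy radiance field conditioned on a FACS-style action-unit vector.

    Hypothetical sketch: a single hidden layer with random weights, used
    only to show the input wiring, not a trained or faithful model.
    """

    def __init__(self, num_aus=17, hidden=64, num_freqs=4, seed=0):
        rng = np.random.default_rng(seed)
        # Encoded xyz (3 coords, each expanded to 1 + 2*num_freqs features)
        # concatenated with the action-unit code.
        in_dim = 3 * (1 + 2 * num_freqs) + num_aus
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, 4))  # outputs: RGB + density

    def __call__(self, xyz, au_code):
        # xyz: (N, 3) sample points; au_code: (N, num_aus) expression code.
        h = np.concatenate([positional_encoding(xyz), au_code], axis=-1)
        h = np.maximum(h @ self.w1, 0.0)            # ReLU
        out = h @ self.w2
        rgb = 1.0 / (1.0 + np.exp(-out[..., :3]))   # sigmoid -> colors in [0, 1]
        sigma = np.log1p(np.exp(out[..., 3]))       # softplus -> density >= 0
        return rgb, sigma
```

Varying the `au_code` vector while holding the query points fixed would change the predicted color and density, which is the mechanism by which per-action-unit expression control could enter the model.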
You can find the technical details for Spring 2022 here.
You can find the technical details for Fall 2022 here.