SOTAVerified

EMOCA: Emotion Driven Monocular Face Capture and Animation

2022-04-24 · CVPR 2022 · Code Available

Radek Danecek, Michael J. Black, Timo Bolkart


Abstract

As 3D facial avatars become more widely used for communication, it is critical that they faithfully convey emotion. Unfortunately, the best recent methods that regress parametric 3D face models from monocular images are unable to capture the full spectrum of facial expression, such as subtle or extreme emotions. We find the standard reconstruction metrics used for training (landmark reprojection error, photometric error, and face recognition loss) are insufficient to capture high-fidelity expressions. The result is facial geometries that do not match the emotional content of the input image. We address this with EMOCA (EMOtion Capture and Animation), by introducing a novel deep perceptual emotion consistency loss during training, which helps ensure that the reconstructed 3D expression matches the expression depicted in the input image. While EMOCA achieves 3D reconstruction errors that are on par with the current best methods, it significantly outperforms them in terms of the quality of the reconstructed expression and the perceived emotional content. We also directly regress levels of valence and arousal and classify basic expressions from the estimated 3D face parameters. On the task of in-the-wild emotion recognition, our purely geometric approach is on par with the best image-based methods, highlighting the value of 3D geometry in analyzing human behavior. The model and code are publicly available at https://emoca.is.tue.mpg.de.
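The core idea in the abstract is a perceptual emotion consistency loss: an emotion feature extractor embeds both the input image and the (differentiably rendered) 3D reconstruction, and the loss penalizes the distance between the two embeddings. The sketch below is a hypothetical, minimal NumPy illustration of that idea, not the authors' implementation — EMOCA uses a pretrained deep emotion recognition network as the feature extractor, whereas here a single linear layer stands in for it.

```python
import numpy as np

def emotion_features(image: np.ndarray, weights: np.ndarray) -> np.ndarray:
    # Placeholder "emotion network": one linear layer over flattened pixels.
    # In EMOCA this is a pretrained deep emotion recognition network.
    return np.tanh(image.reshape(-1) @ weights)

def emotion_consistency_loss(input_image: np.ndarray,
                             rendered_image: np.ndarray,
                             weights: np.ndarray) -> float:
    # Mean squared distance between the emotion embeddings of the input
    # photo and the rendered 3D reconstruction.
    f_in = emotion_features(input_image, weights)
    f_re = emotion_features(rendered_image, weights)
    return float(np.mean((f_in - f_re) ** 2))

rng = np.random.default_rng(0)
w = rng.standard_normal((12, 4))        # toy 12-pixel image -> 4-dim embedding
img = rng.standard_normal((3, 2, 2))    # toy 3x2x2 "image"
perturbed = img + 0.5 * rng.standard_normal(img.shape)

# An identical reconstruction incurs zero loss; a mismatched one does not.
zero_loss = emotion_consistency_loss(img, img, w)
pos_loss = emotion_consistency_loss(img, perturbed, w)
```

During training this term is added to the standard reconstruction losses (landmark reprojection, photometric, face recognition), steering the expression parameters toward geometry whose rendered emotion matches the input.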

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| REALY | EMOCA-c | all | 2.1 | — | Unverified |
| REALY | EMOCA-f | all | 2.39 | — | Unverified |
| REALY (side-view) | EMOCA-c | all | 2.13 | — | Unverified |
| REALY (side-view) | EMOCA-f | all | 2.4 | — | Unverified |

Reproductions