
Transformer Network for Semantically-Aware and Speech-Driven Upper-Face Generation

2021-10-09

Mireille Fares, Catherine Pelachaud, Nicolas Obin



Abstract

We propose a semantically-aware, speech-driven model to generate expressive and natural upper-facial and head motion for Embodied Conversational Agents (ECAs). In this work, we aim to produce natural and continuous head motion and upper-facial gestures synchronized with speech. We propose a model that generates these gestures from multimodal input features: the first modality is text, and the second is speech prosody. Our model makes use of Transformers and convolutions to map the multimodal features corresponding to an utterance to continuous eyebrow and head gestures. We conduct subjective and objective evaluations to validate our approach and compare it with the state of the art.
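The pipeline the abstract describes (fuse per-frame text and prosody features, apply convolution and Transformer-style attention, regress continuous eyebrow and head motion) can be sketched roughly as below. This is a minimal illustrative sketch, not the authors' implementation: the feature dimensions, the single attention head, and the 5-dimensional output (hypothetically 2 eyebrow values plus 3 head rotations) are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """Depth-mixing 1D convolution with 'same' zero padding.
    x: (T, C_in) frame sequence; w: (k, C_in, C_out) kernel."""
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    return np.stack([np.tensordot(xp[t:t + k], w, axes=([0, 1], [0, 1]))
                     for t in range(x.shape[0])])

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention over time."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    a = np.exp(scores - scores.max(axis=-1, keepdims=True))
    a /= a.sum(axis=-1, keepdims=True)
    return a @ v

T = 50                                  # frames in one utterance (assumption)
text_emb = rng.normal(size=(T, 16))     # word embeddings upsampled to frame rate (assumption)
prosody = rng.normal(size=(T, 4))       # e.g. F0/energy features per frame (assumption)

# Fuse the two modalities, add local context, then model long-range timing.
x = np.concatenate([text_emb, prosody], axis=-1)           # (T, 20)
h = conv1d(x, rng.normal(scale=0.1, size=(3, 20, 32)))     # (T, 32)
h = self_attention(h, *(rng.normal(scale=0.1, size=(32, 32)) for _ in range(3)))

# Regress continuous gesture targets per frame:
# hypothetically 2 eyebrow values + 3 head rotation angles.
out = h @ rng.normal(scale=0.1, size=(32, 5))
print(out.shape)
```

The output is one continuous 5-dimensional gesture vector per speech frame; a real system would train these weights on motion-capture or video-derived gesture data rather than sampling them randomly.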
