Expressive Visual Text-to-Speech Using Active Appearance Models

2013-06-01 · CVPR 2013

Robert Anderson, Björn Stenger, Vincent Wan, Roberto Cipolla

Abstract

This paper presents a complete system for expressive visual text-to-speech (VTTS), which is capable of producing expressive output, in the form of a 'talking head', given an input text and a set of continuous expression weights. The face is modeled using an active appearance model (AAM), and several extensions are proposed which make it more applicable to the task of VTTS. The model allows for normalization with respect to both pose and blink state which significantly reduces artifacts in the resulting synthesized sequences. We demonstrate quantitative improvements in terms of reconstruction error over a million frames, as well as in large-scale user studies, comparing the output of different systems.
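The abstract's core modeling idea, representing a face as a mean plus a weighted sum of learned modes of variation, can be sketched with the linear PCA backbone shared by all active appearance models. The snippet below is an illustrative sketch only, using synthetic data and made-up names (`reconstruct`, `project`), not the authors' implementation or their pose/blink normalization extensions.

```python
import numpy as np

# Linear-model core of an active appearance model (AAM): a shape (or
# appearance) vector is the mean plus a weighted combination of PCA modes.
# All data here is synthetic and for illustration only.
rng = np.random.default_rng(0)

# Synthetic training set: 50 "shapes", each 10 landmarks (x, y) flattened
# into a 20-D vector.
shapes = rng.normal(size=(50, 20))

# PCA via SVD of the mean-centred data; keep the top 5 modes of variation.
mean = shapes.mean(axis=0)
_, _, vt = np.linalg.svd(shapes - mean, full_matrices=False)
modes = vt[:5]

def reconstruct(weights):
    """Synthesize a shape from a vector of model weights."""
    return mean + weights @ modes

def project(shape):
    """Recover model weights for a shape (modes are orthonormal rows)."""
    return (shape - mean) @ modes.T

# Round trip: projecting then reconstructing yields the best 5-mode
# approximation, which is never worse than the mean alone.
w = project(shapes[0])
approx = reconstruct(w)
assert np.linalg.norm(shapes[0] - approx) <= np.linalg.norm(shapes[0] - mean)
```

In a VTTS pipeline of the kind the abstract describes, the text-to-speech front end would drive such model weights over time, and each weight vector would be decoded back into an image of the talking head.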
