UniTAF: A Modular Framework for Joint Text-to-Speech and Audio-to-Face Modeling

2026-03-03

Qiangong Zhou, Nagasaka Tomohiro

Abstract

This work considers merging two independent models, text-to-speech (TTS) and audio-to-face (A2F), into a unified model to enable internal feature transfer, thereby improving the consistency between the audio and facial expressions generated from text. We also discuss extending the emotion control mechanism from TTS to the joint model. This work does not aim to showcase generation quality; rather, from a system-design perspective, it validates the feasibility of reusing intermediate TTS representations for joint modeling of speech and facial expressions, and offers engineering-practice references for subsequent speech-and-expression co-design. The project code is open source at: https://github.com/GoldenFishes/UniTAF
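To make the system-design point concrete, below is a minimal PyTorch sketch of the feature-transfer idea: the audio-to-face head consumes the TTS model's intermediate hidden features directly instead of re-encoding synthesized audio, so emotion conditioning applied in the TTS branch carries over to face generation. All class, method, and parameter names here are illustrative assumptions, not the actual UniTAF implementation; see the repository for the authors' code.

```python
import torch
import torch.nn as nn

class TTSBackbone(nn.Module):
    """Hypothetical TTS module that exposes its intermediate features."""
    def __init__(self, vocab_size=256, hidden=512, n_mels=80, n_emotions=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.emotion = nn.Embedding(n_emotions, hidden)  # TTS-side emotion control
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.to_mel = nn.Linear(hidden, n_mels)

    def forward(self, tokens, emotion_id):
        # Condition text features on an emotion label, as a TTS system might.
        x = self.embed(tokens) + self.emotion(emotion_id).unsqueeze(1)
        h, _ = self.encoder(x)        # (B, T, hidden) intermediate features
        return self.to_mel(h), h      # acoustic output + shared features

class A2FHead(nn.Module):
    """Hypothetical face head driven by shared TTS features, not raw audio."""
    def __init__(self, hidden=512, n_blendshapes=52):
        super().__init__()
        self.temporal = nn.GRU(hidden, hidden, batch_first=True)
        self.to_face = nn.Linear(hidden, n_blendshapes)

    def forward(self, tts_features):
        h, _ = self.temporal(tts_features)
        return self.to_face(h)        # per-frame blendshape coefficients

class JointTTSA2F(nn.Module):
    """Unified model: one text input yields both audio and face motion."""
    def __init__(self):
        super().__init__()
        self.tts = TTSBackbone()
        self.a2f = A2FHead()

    def forward(self, tokens, emotion_id):
        mel, feats = self.tts(tokens, emotion_id)
        face = self.a2f(feats)        # internal feature transfer: no
        return mel, face              # synthesize-then-re-encode round trip

model = JointTTSA2F()
tokens = torch.randint(0, 256, (1, 32))       # dummy phoneme/character ids
mel, face = model(tokens, torch.tensor([3]))  # hypothetical emotion id
print(mel.shape, face.shape)                  # (1, 32, 80), (1, 32, 52)
```

Because both outputs are decoded from the same hidden sequence, the audio and facial motion stay temporally and emotionally aligned by construction, which is the consistency property the abstract targets.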
