SOTAVerified

Talking Face Generation

Talking face generation aims to synthesize a sequence of face images that corresponds to given speech semantics.

(Image credit: Talking Face Generation by Adversarially Disentangled Audio-Visual Representation)

Papers

Showing 81–90 of 110 papers

Title | Hype
DAE-Talker: High Fidelity Speech-Driven Talking Face Generation with Diffusion Autoencoder | 0
UniFLG: Unified Facial Landmark Generator from Text or Speech | 0
Diffused Heads: Diffusion Models Beat GANs on Talking-Face Generation | 0
EMMN: Emotional Motion Memory Network for Audio-driven Emotional Talking Face Generation | 0
LipFormer: High-Fidelity and Generalizable Talking Face Generation With a Pre-Learned Facial Codebook | 0
Emotional Talking Faces: Making Videos More Expressive and Realistic | 0
Memories are One-to-Many Mapping Alleviators in Talking Face Generation | 0
SyncTalkFace: Talking Face Generation with Precise Lip-Syncing via Audio-Lip Memory | 0
Taiwanese-Accented Mandarin and English Multi-Speaker Talking-Face Synthesis System | 0
StableFace: Analyzing and Improving Motion Stability for Talking Face Generation | 0
Page 9 of 11

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | EmoGen | EmoAcc | 83.2 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LipGAN | LMD | 0.6 | – | Unverified
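The LMD metric above is commonly read as Landmark Distance: the mean Euclidean distance between predicted and ground-truth facial landmark coordinates, averaged over landmarks and frames. The exact normalization varies by paper, so this is only a minimal sketch under that common definition; the function name and array shapes are illustrative assumptions, not taken from any listed paper.

```python
import numpy as np

def landmark_distance(pred, gt):
    """Mean per-landmark Euclidean distance (a common LMD definition).

    pred, gt: arrays of shape (frames, num_landmarks, 2) holding
    (x, y) landmark coordinates for generated and reference video.
    """
    # Per-landmark Euclidean distance, then average over all
    # landmarks and frames. Lower is better; 0 means exact match.
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))
```

For example, shifting every landmark by (3, 4) pixels yields an LMD of 5.0; some papers additionally normalize by an inter-ocular or face-bounding-box distance before averaging.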