SOTAVerified

Talking Face Generation

Talking face generation aims to synthesize a sequence of face images that correspond to given speech semantics.

(Image credit: Talking Face Generation by Adversarially Disentangled Audio-Visual Representation)

Papers

Showing 1–10 of 110 papers

| Title | Status | Hype |
| --- | --- | --- |
| DisentTalk: Cross-lingual Talking Face Generation via Semantic Disentangled Diffusion Model | — | 0 |
| UniSync: A Unified Framework for Audio-Visual Synchronization | — | 0 |
| PC-Talk: Precise Facial Animation Control for Audio-Driven Talking Face Generation | — | 0 |
| Playmate: Flexible Control of Portrait Animation via 3D-Implicit Space Guided Diffusion | — | 0 |
| JoyGen: Audio-Driven 3D Depth-Aware Talking-Face Video Editing | Code | 3 |
| Joint Co-Speech Gesture and Expressive Talking Face Generation using Diffusion with Adapters | Code | 1 |
| GLCF: A Global-Local Multimodal Coherence Analysis Framework for Talking Face Generation Detection | — | 0 |
| VQTalker: Towards Multilingual Talking Avatars through Facial Motion Tokenization | — | 0 |
| PortraitTalk: Towards Customizable One-Shot Audio-to-Talking Face Generation | — | 0 |
| Sonic: Shifting Focus to Global Audio Perception in Portrait Animation | — | 0 |
Page 1 of 11

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | EmoGen | EmoAcc | 83.2 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LipGAN | LMD | 0.6 | — | Unverified |
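The LMD metric reported for LipGAN above is landmark distance, commonly computed as the mean Euclidean distance between predicted and ground-truth facial (typically lip) landmarks, averaged over all frames of the generated video. A minimal sketch of that computation, assuming landmarks are already extracted into NumPy arrays (the function name and array shapes are illustrative assumptions, not part of any benchmark's official code):

```python
import numpy as np

def lip_landmark_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean Euclidean distance between predicted and ground-truth landmarks.

    pred, gt: arrays of shape (num_frames, num_landmarks, 2), holding the
    (x, y) coordinates of each tracked lip landmark per frame.
    Lower values indicate better lip-sync accuracy.
    """
    # Per-landmark Euclidean distance, then average over landmarks and frames.
    per_point = np.linalg.norm(pred - gt, axis=-1)  # (num_frames, num_landmarks)
    return float(per_point.mean())
```

In practice, landmark coordinates are usually normalized (e.g. by face size) before this average so that LMD is comparable across videos of different resolutions.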