Talking Face Generation

Talking face generation aims to synthesize a sequence of face images whose lip movements and expressions correspond to a given speech input.

(Image credit: Talking Face Generation by Adversarially Disentangled Audio-Visual Representation)

Papers

Showing 61–70 of 110 papers

- CP-EB: Talking Face Generation with Controllable Pose and Eye Blinking Embedding
- CPNet: Exploiting CLIP-based Attention Condenser and Probability Map Guidance for High-fidelity Talking Face Generation
- Cut Inner Layers: A Structured Pruning Strategy for Efficient U-Net GANs
- DAE-Talker: High Fidelity Speech-Driven Talking Face Generation with Diffusion Autoencoder
- Diffused Heads: Diffusion Models Beat GANs on Talking-Face Generation
- DisentTalk: Cross-lingual Talking Face Generation via Semantic Disentangled Diffusion Model
- DREAM-Talk: Diffusion-based Realistic Emotional Audio-driven Method for Single Image Talking Face Generation
- EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model
- EMMN: Emotional Motion Memory Network for Audio-driven Emotional Talking Face Generation
- EmoSpeaker: One-shot Fine-grained Emotion-Controlled Talking Face Generation

Benchmark Results

# | Model  | Metric | Claimed | Verified | Status
1 | EmoGen | EmoAcc | 83.2    |          | Unverified

# | Model  | Metric | Claimed | Verified | Status
1 | LipGAN | LMD    | 0.6     |          | Unverified
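The two benchmark metrics above are commonly computed as follows: EmoAcc is the fraction of clips whose predicted emotion label matches the ground truth, and LMD (Landmark Distance) is the mean Euclidean distance between predicted and ground-truth facial landmarks, averaged over frames and landmark points. The exact evaluation protocols vary by paper, so this is only a minimal NumPy sketch of the usual definitions; the function names and array shapes are illustrative assumptions, not the benchmarks' reference implementations.

```python
import numpy as np

def lmd(pred, gt):
    # Landmark Distance: mean L2 distance between predicted and
    # ground-truth facial landmarks. Inputs have shape
    # (frames, num_landmarks, 2); the result averages the per-landmark
    # Euclidean distances over all frames and points.
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

def emo_acc(pred_labels, gt_labels):
    # Emotion accuracy: fraction of clips whose predicted emotion
    # class matches the ground-truth class.
    pred, gt = np.asarray(pred_labels), np.asarray(gt_labels)
    return float((pred == gt).mean())
```

Lower is better for LMD (0 means landmarks coincide exactly), while higher is better for EmoAcc; papers typically report EmoAcc as a percentage, as in the table above.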