SOTAVerified

Talking Face Generation

Talking face generation aims to synthesize a sequence of face images that corresponds to given speech semantics.

(Image credit: Talking Face Generation by Adversarially Disentangled Audio-Visual Representation)
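Concretely, most audio-driven methods in this area map an audio clip (often together with a reference identity image) to a lip-synced frame sequence. A minimal sketch of that task contract, assuming per-frame audio windowing; every name, shape, and default here is illustrative, not the API of any listed paper:

```python
from dataclasses import dataclass
from typing import List, Sequence

@dataclass
class Frame:
    """A single face image (illustrative: a flat RGB pixel buffer)."""
    width: int
    height: int
    pixels: bytes

def generate_talking_face(audio_samples: Sequence[float],
                          reference_image: Frame,
                          fps: int = 25,
                          sample_rate: int = 16000) -> List[Frame]:
    """Sketch of the task contract: chunk the audio into one window per
    video frame, then render one face frame per window, lip-synced to it.
    The actual rendering model is out of scope here."""
    samples_per_frame = sample_rate // fps
    num_frames = len(audio_samples) // samples_per_frame
    # Placeholder renderer: echo the reference identity for each frame.
    return [reference_image for _ in range(num_frames)]

# One second of 16 kHz audio at 25 fps yields 25 frames.
ref = Frame(width=2, height=2, pixels=bytes(12))
print(len(generate_talking_face([0.0] * 16000, ref)))  # → 25
```

Real systems differ mainly in what replaces the placeholder renderer: GAN decoders, diffusion inpainters, or neural radiance fields, as the paper titles below suggest.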

Papers

Showing 41–50 of 110 papers

Title | Status | Hype
--- | --- | ---
DiffDub: Person-generic Visual Dubbing Using Inpainting Renderer with Diffusion Auto-encoder | Code | 1
HyperLips: Hyper Control Lips with High Resolution Decoder for Talking Face Generation | Code | 2
HDTR-Net: A Real-Time High-Definition Teeth Restoration Network for Arbitrary Talking Face Generation Methods | Code | 1
ToonTalker: Cross-Domain Face Reenactment | — | 0
VAST: Vivify Your Talking Avatar via Zero-Shot Expressive Facial Style Transfer | — | 0
Audio-driven Talking Face Generation with Stabilized Synchronization Loss | — | 0
FTFDNet: Learning to Detect Talking Face Video Manipulation with Tri-Modality Interaction | — | 0
Instruct-NeuralTalker: Editing Audio-Driven Talking Radiance Fields with Instructions | — | 0
Exploring Phonetic Context-Aware Lip-Sync For Talking Face Generation | — | 0
CPNet: Exploiting CLIP-based Attention Condenser and Probability Map Guidance for High-fidelity Talking Face Generation | — | 0
Page 5 of 11

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | EmoGen | EmoAcc | 83.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | LipGAN | LMD | 0.6 | — | Unverified
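The LMD (Landmark Distance) figure above is conventionally the mean Euclidean distance between corresponding mouth landmarks in generated and ground-truth frames, averaged over all landmarks and frames (lower is better). A minimal sketch of that computation; the function name, input layout, and lack of normalization are assumptions for illustration, not a specific benchmark's implementation:

```python
from math import hypot

def landmark_distance(pred, gt):
    """Mean Euclidean distance between predicted and ground-truth mouth
    landmarks, averaged over all frames and points (lower is better).

    pred, gt: nested sequences of shape (num_frames, num_points, 2),
    i.e. one (x, y) coordinate per landmark per frame.
    """
    total, count = 0.0, 0
    for frame_p, frame_g in zip(pred, gt):
        for (px, py), (gx, gy) in zip(frame_p, frame_g):
            total += hypot(px - gx, py - gy)
            count += 1
    return total / count

# Toy example: a constant 3-pixel horizontal shift yields LMD = 3.0.
gt = [[(0.0, 0.0), (1.0, 1.0)], [(2.0, 2.0), (3.0, 3.0)]]
pred = [[(x + 3.0, y) for x, y in frame] for frame in gt]
print(landmark_distance(pred, gt))  # → 3.0
```

EmoAcc, by contrast, is a classification-style metric (accuracy of a recognizer on the emotion the generated face is meant to express), so the two benchmark rows are not directly comparable.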