
Talking Face Generation

Talking face generation aims to synthesize a sequence of face images that corresponds to given speech semantics.

(Image credit: Talking Face Generation by Adversarially Disentangled Audio-Visual Representation)

Papers

Showing 41–50 of 110 papers

| Title | Status | Hype |
| --- | --- | --- |
| VQTalker: Towards Multilingual Talking Avatars through Facial Motion Tokenization | | 0 |
| PortraitTalk: Towards Customizable One-Shot Audio-to-Talking Face Generation | | 0 |
| Sonic: Shifting Focus to Global Audio Perception in Portrait Animation | | 0 |
| MimicTalk: Mimicking a personalized and expressive 3D talking face in minutes | | 0 |
| JEAN: Joint Expression and Audio-guided NeRF-based Talking Face Generation | | 0 |
| StyleTalk++: A Unified Framework for Controlling the Speaking Styles of Talking Heads | | 0 |
| SegTalker: Segmentation-based Talking Face Generation with Mask-guided Local Editing | | 0 |
| High-fidelity and Lip-synced Talking Face Synthesis via Landmark-based Diffusion Model | | 0 |
| Emotional Conversation: Empowering Talking Faces with Cohesive Expression, Gaze and Pose Generation | | 0 |
| OpFlowTalker: Realistic and Natural Talking Face Generation via Optical Flow Guidance | | 0 |
Page 5 of 11

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | EmoGen | EmoAcc | 83.2 | | Unverified |
| 1 | LipGAN | LMD | 0.6 | | Unverified |