Speaker-Specific Lip to Speech Synthesis

How accurately can we infer an individual's speech style and content from their lip movements? [1]

In this task, the model is trained on a specific speaker, or a very limited set of speakers.

[1] Learning Individual Speaking Styles for Accurate Lip to Speech Synthesis, CVPR 2020.

Papers

Showing 4 of 4 papers

Title                                                                                          | Status | Hype
Learning Individual Speaking Styles for Accurate Lip to Speech Synthesis                       | Code   | 1
Densely Connected Convolutional Networks                                                       | Code   | 1
RobustL2S: Speaker-Specific Lip-to-Speech Synthesis exploiting Self-Supervised Representations |        | 0
Speech Reconstruction with Reminiscent Sound via Visual Voice Memory                           | Code   | 0
No leaderboard results yet.