SOTAVerified

Lip Reading

Lip reading is the task of inferring the speech content of a video using only visual information, particularly the movements of the lips. It has many important practical applications, such as assisting audio-based speech recognition, biometric authentication, and aiding hearing-impaired people.

Source: Mutual Information Maximization for Effective Lip Reading

Papers

Showing 1–50 of 153 papers

Title | Status | Hype
VisualSpeaker: Visually-Guided 3D Avatar Lip Synthesis | - | 0
SwinLip: An Efficient Visual Speech Encoder for Lip Reading Using Swin Transformer | - | 0
Transforming faces into video stories -- VideoFace2.0 | Code | 0
Development and evaluation of a deep learning algorithm for German word recognition from lip movements | - | 0
Chinese-LiPS: A Chinese audio-visual speech recognition dataset with Lip-reading and Presentation Slides | - | 0
VALLR: Visual ASR Language Model for Lip Reading | - | 0
Lend a Hand: Semi Training-Free Cued Speech Recognition via MLLM-Driven Hand Modeling for Barrier-free Communication | Code | 0
Integrating Persian Lip Reading in Surena-V Humanoid Robot for Human-Robot Interaction | - | 0
GLaM-Sign: Greek Language Multimodal Lip Reading with Integrated Sign Language Accessibility | - | 0
LipGen: Viseme-Guided Lip Video Generation for Enhancing Visual Speech Recognition | - | 0
Spatio-temporal Transformers for Action Unit Classification with Event Cameras | - | 0
Quantitative Analysis of Audio-Visual Tasks: An Information-Theoretic Perspective | - | 0
Neuromorphic Facial Analysis with Cross-Modal Supervision | - | 0
RAL: Redundancy-Aware Lipreading Model Based on Differential Learning with Symmetric Views | - | 0
Personalized Lip Reading: Adapting to Your Unique Lip Movements with Vision and Language | Code | 1
Enhancing Speech-Driven 3D Facial Animation with Audio-Visual Guidance from Lip Reading Expert | - | 0
Robust Multi-Modal Speech In-Painting: A Sequence-to-Sequence Approach | - | 0
Audio-Visual Speech Recognition based on Regulated Transformer and Spatio-Temporal Fusion Strategy for Driver Assistive Systems | Code | 0
Bridge to Non-Barrier Communication: Gloss-Prompted Fine-grained Cued Speech Gesture Generation with Diffusion Model | - | 0
MTGA: Multi-View Temporal Granularity Aligned Aggregation for Event-Based Lip-Reading | Code | 0
Enhancing Lip Reading with Multi-Scale Video and Multi-Encoder | - | 0
Landmark-Guided Cross-Speaker Lip Reading with Mutual Information Regularization | - | 0
Where Visual Speech Meets Language: VSP-LLM Framework for Efficient and Context-Aware Visual Speech Processing | Code | 3
Cross-Attention Fusion of Visual and Geometric Features for Large Vocabulary Arabic Lipreading | - | 0
Computation and Parameter Efficient Multi-Modal Fusion Transformer for Cued Speech Recognition | - | 0
Neural Text to Articulate Talk: Deep Text to Audiovisual Speech Synthesis achieving both Auditory and Photo-realism | Code | 1
Do VSR Models Generalize Beyond LRS3? | Code | 1
Exploring Lip Segmentation Techniques in Computer Vision: A Comparative Analysis | - | 0
DualTalker: A Cross-Modal Dual Learning Approach for Speech-Driven 3D Facial Animation | Code | 0
Learning Separable Hidden Unit Contributions for Speaker-Adaptive Lip-Reading | Code | 0
End-to-End Lip Reading in Romanian with Cross-Lingual Domain Adaptation and Lateral Inhibition | - | 0
Lip Reading for Low-resource Languages by Learning and Combining General Speech Knowledge and Language-specific Knowledge | - | 0
Lip2Vec: Efficient and Robust Visual Speech Recognition via Latent-to-Latent Visual to Audio Representation Mapping | - | 0
Leveraging Visemes for Better Visual Speech Representation and Lip Reading | - | 0
SelfTalk: A Self-Supervised Commutative Training Diagram to Comprehend 3D Talking Faces | Code | 1
Emotional Speech-Driven Animation with Content-Emotion Disentanglement | - | 0
OpenSR: Open-Modality Speech Recognition via Maintaining Multi-Modality Alignment | Code | 1
LipVoicer: Generating Speech from Silent Videos Guided by Lip Reading | Code | 1
A Novel Interpretable and Generalizable Re-synchronization Model for Cued Speech based on a Multi-Cuer Corpus | Code | 0
Deep Learning-based Spatio Temporal Facial Feature Visual Speech Recognition | - | 0
PixelRNN: In-pixel Recurrent Neural Networks for End-to-end-optimized Perception with Neural Sensors | - | 0
Word-level Persian Lipreading Dataset | - | 0
SynthVSR: Scaling Up Visual Speech Recognition With Synthetic Supervision | - | 0
Seeing What You Said: Talking Face Generation Guided by a Lip Reading Expert | Code | 2
A large-scale multimodal dataset of human speech recognition | - | 0
MixSpeech: Cross-Modality Self-Learning with Audio-Visual Stream Mixup for Visual Speech Translation and Recognition | Code | 1
Audio-Visual Speech and Gesture Recognition by Sensors of Mobile Devices | - | 0
GeneFace: Generalized and High-Fidelity Audio-Driven 3D Talking Face Synthesis | Code | 4
A Multi-Purpose Audio-Visual Corpus for Multi-Modal Persian Speech Recognition: the Arman-AV Dataset | - | 0
OLKAVS: An Open Large-Scale Korean Audio-Visual Speech Dataset | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Lip2Wav | WER | 14.08 | - | Unverified
1 | Lip2Wav | WER | 34.2 | - | Unverified
1 | Lip2Wav | WER | 31.26 | - | Unverified
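The metric in the table above, word error rate (WER), is the word-level edit distance between the model's transcript and the reference, divided by the number of reference words. A minimal sketch of the computation is below; the `wer` function and the example sentences are illustrative, not taken from this page or any benchmark.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# One deleted word out of six reference words -> WER of 1/6
print(round(wer("the cat sat on the mat", "the cat sat on mat") * 100, 2))
```

A claimed WER of 14.08 in the table thus means roughly one word error per seven reference words; lower is better, and WER can exceed 100 when the hypothesis contains many insertions.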