SOTAVerified

Lip Reading

Lip reading is the task of inferring speech content in a video using only visual information, especially lip movements. It has many crucial practical applications, such as assisting audio-based speech recognition, biometric authentication, and aiding hearing-impaired people.

Source: Mutual Information Maximization for Effective Lip Reading
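A typical lip-reading pipeline first crops a mouth region of interest (ROI) from each video frame and then feeds the frame sequence to a recognition model. The sketch below shows only that cropping step, assuming a hypothetical fixed mouth-center coordinate (in practice the center comes from a face/landmark detector):

```python
import numpy as np

def crop_mouth_roi(frames, center, size=96):
    """Crop a square `size` x `size` mouth ROI around `center` = (y, x)
    from every frame of a (T, H, W) grayscale clip."""
    half = size // 2
    y, x = center
    return frames[:, y - half:y + half, x - half:x + half]

# Synthetic 25-frame, 256x256 grayscale clip; (180, 128) is an
# assumed mouth-center landmark for illustration only.
clip = np.zeros((25, 256, 256), dtype=np.uint8)
roi = crop_mouth_roi(clip, center=(180, 128))
print(roi.shape)  # (25, 96, 96)
```

The 96-pixel ROI size mirrors a common choice in lip-reading preprocessing, but the exact size and detector vary across the papers listed below.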

Papers

Showing 51–100 of 153 papers

Title | Status | Hype
Enhancing Lip Reading with Multi-Scale Video and Multi-Encoder | | 0
Landmark-Guided Cross-Speaker Lip Reading with Mutual Information Regularization | | 0
Cross-Attention Fusion of Visual and Geometric Features for Large Vocabulary Arabic Lipreading | | 0
Computation and Parameter Efficient Multi-Modal Fusion Transformer for Cued Speech Recognition | | 0
Exploring Lip Segmentation Techniques in Computer Vision: A Comparative Analysis | | 0
DualTalker: A Cross-Modal Dual Learning Approach for Speech-Driven 3D Facial Animation | Code | 0
Learning Separable Hidden Unit Contributions for Speaker-Adaptive Lip-Reading | Code | 0
End-to-End Lip Reading in Romanian with Cross-Lingual Domain Adaptation and Lateral Inhibition | | 0
Lip Reading for Low-resource Languages by Learning and Combining General Speech Knowledge and Language-specific Knowledge | | 0
Lip2Vec: Efficient and Robust Visual Speech Recognition via Latent-to-Latent Visual to Audio Representation Mapping | | 0
Leveraging Visemes for Better Visual Speech Representation and Lip Reading | | 0
Emotional Speech-Driven Animation with Content-Emotion Disentanglement | | 0
A Novel Interpretable and Generalizable Re-synchronization Model for Cued Speech based on a Multi-Cuer Corpus | Code | 0
Deep Learning-based Spatio Temporal Facial Feature Visual Speech Recognition | | 0
PixelRNN: In-pixel Recurrent Neural Networks for End-to-end-optimized Perception with Neural Sensors | | 0
Word-level Persian Lipreading Dataset | | 0
SynthVSR: Scaling Up Visual Speech Recognition With Synthetic Supervision | | 0
A large-scale multimodal dataset of human speech recognition | | 0
Audio-Visual Speech and Gesture Recognition by Sensors of Mobile Devices | | 0
A Multi-Purpose Audio-Visual Corpus for Multi-Modal Persian Speech Recognition: the Arman-AV Dataset | | 0
Speech Driven Video Editing via an Audio-Conditioned Diffusion Model | | 0
Audio-visual video face hallucination with frequency supervision and cross modality support by speech based lip reading loss | | 0
Lip Sync Matters: A Novel Multimodal Forgery Detector | Code | 0
Streaming Audio-Visual Speech Recognition with Alignment Regularization | | 0
Audio-Visual Speech Enhancement and Separation by Utilizing Multi-Modal Self-Supervised Embeddings | | 0
Clean Text and Full-Body Transformer: Microsoft's Submission to the WMT22 Shared Task on Sign Language Translation | | 0
A Novel Frame Structure for Cloud-Based Audio-Visual Speech Enhancement in Multimodal Hearing-aids | | 0
VCSE: Time-Domain Visual-Contextual Speaker Extraction Network | | 0
Relaxed Attention for Transformer Models | Code | 0
Visual Speech Recognition in a Driver Assistance System | | 0
Towards MOOCs for Lipreading: Using Synthetic Talking Heads to Train Humans in Lipreading at Scale | | 0
Speaker-adaptive Lip Reading with User-dependent Padding | Code | 0
Lip-Listening: Mixing Senses to Understand Lips using Cross Modality Knowledge Distillation for Word-Based Models | | 0
Learning Speaker-specific Lip-to-Speech Generation | | 0
RUSAVIC Corpus: Russian Audio-Visual Speech in Cars | | 0
Expression-preserving face frontalization improves visually assisted speech processing | | 0
A Multimodal German Dataset for Automatic Lip Reading Systems and Transfer Learning | | 0
Multi-Grained Spatio-Temporal Features Perceived Network for Event-Based Lip-Reading | | 0
LipSound2: Self-Supervised Pre-Training for Lip-to-Speech Reconstruction and Lip Reading | | 0
Audio-Visual Synchronisation in the wild | | 0
Contrastive Learning of Global and Local Video Representations | | 0
Leveraging Uni-Modal Self-Supervised Learning for Multimodal Audio-visual Speech Recognition | | 0
Advances and Challenges in Deep Lip Reading | | 0
Sub-word Level Lip Reading With Visual Attention | | 0
Perception Point: Identifying Critical Learning Periods in Speech for Bilingual Networks | | 0
Audio-Visual Speech Recognition is Worth 32×32×8 Voxels | | 0
LRWR: Large-Scale Benchmark for Lip Reading in Russian language | | 0
SimulLR: Simultaneous Lip Reading Transducer with Attention-Guided Adaptive Memory | | 0
Adaptive Semantic-Spatio-Temporal Graph Convolutional Network for Lip Reading | | 0
Spatio-Temporal Attention Mechanism and Knowledge Distillation for Lip Reading | | 0
Page 2 of 4

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Lip2Wav | WER | 14.08 | | Unverified
1 | Lip2Wav | WER | 34.2 | | Unverified
1 | Lip2Wav | WER | 31.26 | | Unverified
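The claimed figures above are word error rates (WER), the standard lip-reading metric: the word-level edit distance between the reference transcript and the hypothesis, normalized by reference length. A minimal sketch, assuming whitespace-tokenized transcripts (not the evaluation code used by any listed paper):

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance divided by
    the number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j].
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("sat" -> "sit") and one deletion ("the"):
# 2 errors over 6 reference words.
print(round(wer("the cat sat on the mat", "the cat sit on mat") * 100, 2))  # 33.33
```

Lower is better; a WER of 14.08 means roughly one word error per seven reference words.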