SOTAVerified

Audio-Visual Speech Recognition

Audio-visual speech recognition (AVSR) is the task of transcribing speech into text from paired audio and video streams, using visual cues such as lip movements to complement the acoustic signal.

Papers

Showing 51–75 of 100 papers

| Title | Status | Hype |
|---|---|---|
| SlideAVSR: A Dataset of Paper Explanation Videos for Audio-Visual Speech Recognition | | 0 |
| Spatio-Temporal Attention Mechanism and Knowledge Distillation for Lip Reading | | 0 |
| ES3: Evolving Self-Supervised Learning of Robust Audio-Visual Speech Representations | | 0 |
| Fusing information streams in end-to-end audio-visual speech recognition | | 0 |
| Streaming Audio-Visual Speech Recognition with Alignment Regularization | | 0 |
| SwinLip: An Efficient Visual Speech Encoder for Lip Reading Using Swin Transformer | | 0 |
| Towards Lipreading Sentences with Active Appearance Models | | 0 |
| Transformer-Based Video Front-Ends for Audio-Visual Speech Recognition for Single and Multi-Person Video | | 0 |
| MLCA-AVSR: Multi-Layer Cross Attention Fusion based Audio-Visual Speech Recognition | | 0 |
| Uncovering the Visual Contribution in Audio-Visual Speech Recognition | | 0 |
| Modality Attention for End-to-End Audio-visual Speech Recognition | | 0 |
| MoHAVE: Mixture of Hierarchical Audio-Visual Experts for Robust Speech Recognition | | 0 |
| MSRS: Training Multimodal Speech Recognition Models from Scratch with Sparse Mask Optimization | | 0 |
| Multilingual Audio-Visual Speech Recognition with Hybrid CTC/RNN-T Fast Conformer | | 0 |
| Multimodal Machine Learning: Integrating Language, Vision and Speech | | 0 |
| VATLM: Visual-Audio-Text Pre-Training with Unified Masked Prediction for Speech Representation Learning | | 0 |
| A Multi-Purpose Audio-Visual Corpus for Multi-Modal Persian Speech Recognition: the Arman-AV Dataset | | 0 |
| ViCocktail: Automated Multi-Modal Data Collection for Vietnamese Audio-Visual Speech Recognition | | 0 |
| Visual-Aware Speech Recognition for Noisy Scenarios | | 0 |
| Part-based Lipreading for Audio-Visual Speech Recognition | | 0 |
| Adapter-Based Multi-Agent AVSR Extension for Pre-Trained ASR Models | | 0 |
| Quantitative Analysis of Audio-Visual Tasks: An Information-Theoretic Perspective | | 0 |
| Recent Progress in the CUHK Dysarthric Speech Recognition System | | 0 |
| Recognition of Isolated Words using Zernike and MFCC features for Audio Visual Speech Recognition | | 0 |
| ReVISE: Self-Supervised Speech Resynthesis with Visual Input for Universal and Generalized Speech Enhancement | | 0 |
Page 3 of 4

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Hybrid CTC / Attention | Word Error Rate (WER) | 39.1 | | Unverified |
| 2 | TM-Seq2seq | Test WER | 8.5 | | Unverified |
| 3 | TM-CTC | Test WER | 8.2 | | Unverified |
| 4 | CTC/Attention | Test WER | 7 | | Unverified |
| 5 | CTC/Attention | Test WER | 1.5 | | Unverified |
| 6 | Whisper-Flamingo | Test WER | 1.4 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Hyb-Conformer | Word Error Rate (WER) | 2.3 | | Unverified |
| 2 | Zero-AVSR | Word Error Rate (WER) | 1.5 | | Unverified |
| 3 | AV-HuBERT Large | Word Error Rate (WER) | 1.4 | | Unverified |
| 4 | Whisper-Flamingo | Word Error Rate (WER) | 0.76 | | Unverified |
| 5 | MMS-LLaMA | Word Error Rate (WER) | 0.74 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | AVCRFormer | Top-1 Accuracy | 98.81 | | Unverified |
| 2 | 2DCNN + BiLSTM + ResNet + MLF | Top-1 Accuracy | 98.76 | | Unverified |
| 3 | PBL | Top-1 Accuracy | 98.3 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ES³ Base* | Word Error Rate (WER) | 11 | | Unverified |
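Most of the results above are reported as Word Error Rate (WER): the number of word substitutions, deletions, and insertions needed to turn the model's transcript into the reference, divided by the number of reference words. A minimal sketch of the standard computation (word-level Levenshtein distance; the function name `wer` is illustrative, not from any specific toolkit):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: (substitutions + deletions + insertions) / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + cost, # match or substitution
            )
    return d[len(ref)][len(hyp)] / len(ref)

# One word dropped out of six reference words -> WER of 1/6, i.e. ~16.7%.
print(round(100 * wer("the cat sat on the mat", "the cat sat on mat"), 1))
```

Leaderboards usually quote WER as a percentage, as in the tables above; note that WER can exceed 100% when the hypothesis contains many insertions.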