SOTAVerified

Audio-Visual Speech Recognition

Audio-visual speech recognition is the task of transcribing a paired audio and visual stream into text.

Papers

Showing 1–25 of 100 papers

| Title | Status | Hype |
|---|---|---|
| ViCocktail: Automated Multi-Modal Data Collection for Vietnamese Audio-Visual Speech Recognition | | 0 |
| Cocktail-Party Audio-Visual Speech Recognition | | 0 |
| Scaling and Enhancing LLM-based AVSR: A Sparse Mixture of Projectors Approach | | 0 |
| The Multimodal Information Based Speech Processing (MISP) 2025 Challenge: Audio-Visual Diarization and Recognition | | 0 |
| SwinLip: An Efficient Visual Speech Encoder for Lip Reading Using Swin Transformer | | 0 |
| CoGenAV: Versatile Audio-Visual Representation Learning via Contrastive-Generative Synchronization | Code | 2 |
| Chinese-LiPS: A Chinese audio-visual speech recognition dataset with Lip-reading and Presentation Slides | | 0 |
| Visual-Aware Speech Recognition for Noisy Scenarios | | 0 |
| MMS-LLaMA: Efficient LLM-based Audio-Visual Speech Recognition with Minimal Multimodal Speech Tokens | Code | 1 |
| Adaptive Audio-Visual Speech Recognition via Matryoshka-Based Multimodal LLMs | | 0 |
| Zero-AVSR: Zero-Shot Audio-Visual Speech Recognition with LLMs by Learning Language-Agnostic Speech Representations | Code | 1 |
| MoHAVE: Mixture of Hierarchical Audio-Visual Experts for Robust Speech Recognition | | 0 |
| Audio-Visual Representation Learning via Knowledge Distillation from Speech Foundation Models | Code | 1 |
| mWhisper-Flamingo for Multilingual Audio-Visual Noise-Robust Speech Recognition | Code | 3 |
| Adapter-Based Multi-Agent AVSR Extension for Pre-Trained ASR Models | | 0 |
| Multi-Task Corrupted Prediction for Learning Robust Audio-Visual Speech Representation | Code | 1 |
| Listening and Seeing Again: Generative Error Correction for Audio-Visual Speech Recognition | Code | 0 |
| Uncovering the Visual Contribution in Audio-Visual Speech Recognition | | 0 |
| Quantitative Analysis of Audio-Visual Tasks: An Information-Theoretic Perspective | | 0 |
| Large Language Models are Strong Audio-Visual Speech Recognition Learners | Code | 2 |
| DCIM-AVSR: Efficient Audio-Visual Speech Recognition via Dual Conformer Interaction Module | | 0 |
| SynesLM: A Unified Approach for Audio-visual Speech Recognition and Translation via Language Model and Synthetic Data | Code | 0 |
| Tailored Design of Audio-Visual Speech Recognition Models using Branchformers | Code | 1 |
| Learning Video Temporal Dynamics with Cross-Modal Attention for Robust Audio-Visual Speech Recognition | Code | 1 |
| MSRS: Training Multimodal Speech Recognition Models from Scratch with Sparse Mask Optimization | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Hybrid CTC/Attention | Word Error Rate (WER) | 39.1 | | Unverified |
| 2 | TM-Seq2seq | Test WER | 8.5 | | Unverified |
| 3 | TM-CTC | Test WER | 8.2 | | Unverified |
| 4 | CTC/Attention | Test WER | 7 | | Unverified |
| 5 | CTC/Attention | Test WER | 1.5 | | Unverified |
| 6 | Whisper-Flamingo | Test WER | 1.4 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Hyb-Conformer | Word Error Rate (WER) | 2.3 | | Unverified |
| 2 | Zero-AVSR | Word Error Rate (WER) | 1.5 | | Unverified |
| 3 | AV-HuBERT Large | Word Error Rate (WER) | 1.4 | | Unverified |
| 4 | Whisper-Flamingo | Word Error Rate (WER) | 0.76 | | Unverified |
| 5 | MMS-LLaMA | Word Error Rate (WER) | 0.74 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | AVCRFormer | Top-1 Accuracy | 98.81 | | Unverified |
| 2 | 2DCNN + BiLSTM + ResNet + MLF | Top-1 Accuracy | 98.76 | | Unverified |
| 3 | PBL | Top-1 Accuracy | 98.3 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ES³ Base* | Word Error Rate (WER) | 11 | | Unverified |
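Most results above are reported as Word Error Rate (WER): the word-level edit distance (substitutions + deletions + insertions) between hypothesis and reference, divided by the number of reference words. As a minimal illustrative sketch (not part of any listed system, and not normalized the way individual benchmarks may require), WER can be computed with a standard dynamic-programming edit distance:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between first i reference words
    # and first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / len(ref)

# Two of six reference words are dropped -> WER = 2/6
print(wer("the cat sat on the mat", "the cat sat mat"))
```

Note that leaderboard WER figures typically also depend on text normalization (casing, punctuation, numerals), which this sketch does not perform.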