SOTAVerified

Audio-Visual Speech Recognition

Audio-visual speech recognition (AVSR) is the task of transcribing speech into text from paired audio and visual streams, typically recordings of the speaker's lip movements.

Papers

Showing 1–50 of 100 papers

Title | Status | Hype
ViCocktail: Automated Multi-Modal Data Collection for Vietnamese Audio-Visual Speech Recognition | - | 0
Cocktail-Party Audio-Visual Speech Recognition | - | 0
Scaling and Enhancing LLM-based AVSR: A Sparse Mixture of Projectors Approach | - | 0
The Multimodal Information Based Speech Processing (MISP) 2025 Challenge: Audio-Visual Diarization and Recognition | - | 0
SwinLip: An Efficient Visual Speech Encoder for Lip Reading Using Swin Transformer | - | 0
CoGenAV: Versatile Audio-Visual Representation Learning via Contrastive-Generative Synchronization | Code | 2
Chinese-LiPS: A Chinese audio-visual speech recognition dataset with Lip-reading and Presentation Slides | - | 0
Visual-Aware Speech Recognition for Noisy Scenarios | - | 0
MMS-LLaMA: Efficient LLM-based Audio-Visual Speech Recognition with Minimal Multimodal Speech Tokens | Code | 1
Adaptive Audio-Visual Speech Recognition via Matryoshka-Based Multimodal LLMs | - | 0
Zero-AVSR: Zero-Shot Audio-Visual Speech Recognition with LLMs by Learning Language-Agnostic Speech Representations | Code | 1
MoHAVE: Mixture of Hierarchical Audio-Visual Experts for Robust Speech Recognition | - | 0
Audio-Visual Representation Learning via Knowledge Distillation from Speech Foundation Models | Code | 1
mWhisper-Flamingo for Multilingual Audio-Visual Noise-Robust Speech Recognition | Code | 3
Adapter-Based Multi-Agent AVSR Extension for Pre-Trained ASR Models | - | 0
Multi-Task Corrupted Prediction for Learning Robust Audio-Visual Speech Representation | Code | 1
Listening and Seeing Again: Generative Error Correction for Audio-Visual Speech Recognition | Code | 0
Uncovering the Visual Contribution in Audio-Visual Speech Recognition | - | 0
Quantitative Analysis of Audio-Visual Tasks: An Information-Theoretic Perspective | - | 0
Large Language Models are Strong Audio-Visual Speech Recognition Learners | Code | 2
DCIM-AVSR : Efficient Audio-Visual Speech Recognition via Dual Conformer Interaction Module | - | 0
SynesLM: A Unified Approach for Audio-visual Speech Recognition and Translation via Language Model and Synthetic Data | Code | 0
Tailored Design of Audio-Visual Speech Recognition Models using Branchformers | Code | 1
Learning Video Temporal Dynamics with Cross-Modal Attention for Robust Audio-Visual Speech Recognition | Code | 1
MSRS: Training Multimodal Speech Recognition Models from Scratch with Sparse Mask Optimization | - | 0
Whisper-Flamingo: Integrating Visual Features into Whisper for Audio-Visual Speech Recognition and Translation | Code | 3
Audio-Visual Speech Recognition based on Regulated Transformer and Spatio-Temporal Fusion Strategy for Driver Assistive Systems | Code | 0
Learn2Talk: 3D Talking Face Learns from 2D Talking Face | - | 0
XLAVS-R: Cross-Lingual Audio-Visual Speech Representation Learning for Noise-Robust Speech Perception | - | 0
Multilingual Audio-Visual Speech Recognition with Hybrid CTC/RNN-T Fast Conformer | - | 0
A Study of Dropout-Induced Modality Bias on Robustness to Missing Video Frames for Audio-Visual Speech Recognition | Code | 0
It's Never Too Late: Fusing Acoustic Information into Large Language Models for Automatic Speech Recognition | Code | 1
SlideAVSR: A Dataset of Paper Explanation Videos for Audio-Visual Speech Recognition | - | 0
Multichannel AV-wav2vec2: A Framework for Learning Multichannel Multi-Modal Speech Representation | Code | 0
MLCA-AVSR: Multi-Layer Cross Attention Fusion based Audio-Visual Speech Recognition | - | 0
ES3: Evolving Self-Supervised Learning of Robust Audio-Visual Speech Representations | - | 0
RTFS-Net: Recurrent Time-Frequency Modelling for Efficient Audio-Visual Speech Separation | Code | 1
AV-CPL: Continuous Pseudo-Labeling for Audio-Visual Speech Recognition | - | 0
The Multimodal Information Based Speech Processing (MISP) 2023 Challenge: Audio-Visual Target Speaker Extraction | - | 0
Improving Audio-Visual Speech Recognition by Lip-Subword Correlation Based Visual Pre-training and Cross-Modal Fusion Encoder | Code | 1
Hearing Lips in Noise: Universal Viseme-Phoneme Mapping and Transfer for Robust Audio-Visual Speech Recognition | Code | 1
MIR-GAN: Refining Frame-Level Modality-Invariant Representations with Adversarial Network for Audio-Visual Speech Recognition | Code | 1
OpenSR: Open-Modality Speech Recognition via Maintaining Multi-Modality Alignment | Code | 1
MAVD: The First Open Large-Scale Mandarin Audio-Visual Dataset with Depth Information | Code | 1
Prompting the Hidden Talent of Web-Scale Speech Models for Zero-Shot Task Generalization | Code | 1
Cross-Modal Global Interaction and Local Alignment for Audio-Visual Speech Recognition | Code | 1
Auto-AVSR: Audio-Visual Speech Recognition with Automatic Labels | Code | 2
Watch or Listen: Robust Audio-Visual Speech Recognition with Visual Corruption Modeling and Reliability Scoring | Code | 1
The NPU-ASLP System for Audio-Visual Speech Recognition in MISP 2022 Challenge | - | 0
MuAViC: A Multilingual Audio-Visual Corpus for Robust Speech Recognition and Robust Speech-to-Text Translation | Code | 2

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Hybrid CTC / Attention | Word Error Rate (WER) | 39.1 | - | Unverified
2 | TM-Seq2seq | Test WER | 8.5 | - | Unverified
3 | TM-CTC | Test WER | 8.2 | - | Unverified
4 | CTC/Attention | Test WER | 7 | - | Unverified
5 | CTC/Attention | Test WER | 1.5 | - | Unverified
6 | Whisper-Flamingo | Test WER | 1.4 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Hyb-Conformer | Word Error Rate (WER) | 2.3 | - | Unverified
2 | Zero-AVSR | Word Error Rate (WER) | 1.5 | - | Unverified
3 | AV-HuBERT Large | Word Error Rate (WER) | 1.4 | - | Unverified
4 | Whisper-Flamingo | Word Error Rate (WER) | 0.76 | - | Unverified
5 | MMS-LLaMA | Word Error Rate (WER) | 0.74 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | AVCRFormer | Top-1 Accuracy | 98.81 | - | Unverified
2 | 2DCNN + BiLSTM + ResNet + MLF | Top-1 Accuracy | 98.76 | - | Unverified
3 | PBL | Top-1 Accuracy | 98.3 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ES³ Base* | Word Error Rate (WER) | 11 | - | Unverified
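All of the WER figures above measure word-level edit errors against a reference transcript, usually reported as a percentage. As a minimal sketch (a standard Levenshtein-distance computation over words, not the evaluation code of any listed system):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: (substitutions + deletions + insertions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for word-level Levenshtein distance.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution or match
            )
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, dropping one word from a six-word reference gives a WER of 1/6 ≈ 0.167, i.e. 16.7% in the convention used by the tables above.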