SOTAVerified

Audio-Visual Speech Recognition

Audio-visual speech recognition (AVSR) is the task of transcribing speech into text from a paired audio and visual stream — typically the audio waveform together with video of the speaker's lip movements.
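As a rough illustration of the task setup (a toy sketch, not the method of any paper listed below), an AVSR system extracts per-frame features from each modality and fuses them before decoding. A minimal early-fusion example with random stand-in features and a hypothetical linear classifier:

```python
import numpy as np

rng = np.random.default_rng(0)
T, D_A, D_V, V = 4, 8, 6, 10  # frames, audio dim, video dim, vocabulary size

# Stand-ins for per-frame outputs of separate audio and visual front-ends.
audio_feats = rng.normal(size=(T, D_A))
visual_feats = rng.normal(size=(T, D_V))

# Early fusion: concatenate the two modalities per frame, then classify.
fused = np.concatenate([audio_feats, visual_feats], axis=-1)  # (T, D_A + D_V)
W = rng.normal(size=(D_A + D_V, V))                           # toy linear classifier
logits = fused @ W                                            # (T, V)

# Per-frame softmax over the vocabulary.
probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
probs /= probs.sum(axis=-1, keepdims=True)
print(probs.shape)  # (4, 10); one distribution per frame
```

Real systems replace the random features with learned encoders (e.g. a lipreading CNN and an audio Transformer) and decode the per-frame distributions with CTC or an attention decoder, but the fusion step has this shape.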

Papers

Showing 1–50 of 100 papers

Title | Status | Hype
mWhisper-Flamingo for Multilingual Audio-Visual Noise-Robust Speech Recognition | Code | 3
Whisper-Flamingo: Integrating Visual Features into Whisper for Audio-Visual Speech Recognition and Translation | Code | 3
Robust Self-Supervised Audio-Visual Speech Recognition | Code | 2
CoGenAV: Versatile Audio-Visual Representation Learning via Contrastive-Generative Synchronization | Code | 2
MuAViC: A Multilingual Audio-Visual Corpus for Robust Speech Recognition and Robust Speech-to-Text Translation | Code | 2
Large Language Models are Strong Audio-Visual Speech Recognition Learners | Code | 2
Auto-AVSR: Audio-Visual Speech Recognition with Automatic Labels | Code | 2
RTFS-Net: Recurrent Time-Frequency Modelling for Efficient Audio-Visual Speech Separation | Code | 1
Audio-Visual Representation Learning via Knowledge Distillation from Speech Foundation Models | Code | 1
AV Taris: Online Audio-Visual Speech Recognition | Code | 1
CI-AVSR: A Cantonese Audio-Visual Speech Dataset for In-car Command Recognition | Code | 1
Cross-Modal Global Interaction and Local Alignment for Audio-Visual Speech Recognition | Code | 1
Deep Audio-Visual Speech Recognition | Code | 1
Discriminative Multi-modality Speech Recognition | Code | 1
End-to-end Audio-visual Speech Recognition with Conformers | Code | 1
Hearing Lips in Noise: Universal Viseme-Phoneme Mapping and Transfer for Robust Audio-Visual Speech Recognition | Code | 1
How to Teach DNNs to Pay Attention to the Visual Modality in Speech Recognition | Code | 1
Improving Audio-Visual Speech Recognition by Lip-Subword Correlation Based Visual Pre-training and Cross-Modal Fusion Encoder | Code | 1
It's Never Too Late: Fusing Acoustic Information into Large Language Models for Automatic Speech Recognition | Code | 1
Jointly Learning Visual and Auditory Speech Representations from Raw Data | Code | 1
Learning Video Temporal Dynamics with Cross-Modal Attention for Robust Audio-Visual Speech Recognition | Code | 1
Leveraging Unimodal Self-Supervised Learning for Multimodal Audio-Visual Speech Recognition | Code | 1
MAVD: The First Open Large-Scale Mandarin Audio-Visual Dataset with Depth Information | Code | 1
MIR-GAN: Refining Frame-Level Modality-Invariant Representations with Adversarial Network for Audio-Visual Speech Recognition | Code | 1
MMS-LLaMA: Efficient LLM-based Audio-Visual Speech Recognition with Minimal Multimodal Speech Tokens | Code | 1
Multi-Task Corrupted Prediction for Learning Robust Audio-Visual Speech Representation | Code | 1
OLKAVS: An Open Large-Scale Korean Audio-Visual Speech Dataset | Code | 1
OpenSR: Open-Modality Speech Recognition via Maintaining Multi-Modality Alignment | Code | 1
Prompting the Hidden Talent of Web-Scale Speech Models for Zero-Shot Task Generalization | Code | 1
Should we hard-code the recurrence concept or learn it instead ? Exploring the Transformer architecture for Audio-Visual Speech Recognition | Code | 1
Tailored Design of Audio-Visual Speech Recognition Models using Branchformers | Code | 1
Visual Context-driven Audio Feature Enhancement for Robust End-to-End Audio-Visual Speech Recognition | Code | 1
Watch or Listen: Robust Audio-Visual Speech Recognition with Visual Corruption Modeling and Reliability Scoring | Code | 1
Zero-AVSR: Zero-Shot Audio-Visual Speech Recognition with LLMs by Learning Language-Agnostic Speech Representations | Code | 1
LRS3-TED: a large-scale dataset for visual speech recognition | Code | 0
Listening and Seeing Again: Generative Error Correction for Audio-Visual Speech Recognition | Code | 0
A Study of Dropout-Induced Modality Bias on Robustness to Missing Video Frames for Audio-Visual Speech Recognition | Code | 0
Recurrent Neural Network Transducer for Audio-Visual Speech Recognition | Code | 0
Audio-Visual Speech Recognition based on Regulated Transformer and Spatio-Temporal Fusion Strategy for Driver Assistive Systems | Code | 0
SynesLM: A Unified Approach for Audio-visual Speech Recognition and Translation via Language Model and Synthetic Data | Code | 0
Multichannel AV-wav2vec2: A Framework for Learning Multichannel Multi-Modal Speech Representation | Code | 0
Learn2Talk: 3D Talking Face Learns from 2D Talking Face | — | 0
Learning Contextually Fused Audio-visual Representations for Audio-visual Speech Recognition | — | 0
The Multimodal Information Based Speech Processing (MISP) 2025 Challenge: Audio-Visual Diarization and Recognition | — | 0
Leveraging Modality-specific Representations for Audio-visual Speech Recognition via Reinforcement Learning | — | 0
Leveraging Uni-Modal Self-Supervised Learning for Multimodal Audio-visual Speech Recognition | — | 0
The NPU-ASLP System for Audio-Visual Speech Recognition in MISP 2022 Challenge | — | 0
Lip Graph Assisted Audio-Visual Speech Recognition Using Bidirectional Synchronous Fusion | — | 0
Towards Lipreading Sentences with Active Appearance Models | — | 0
Page 1 of 2

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Hybrid CTC / Attention | Word Error Rate (WER) | 39.1 | — | Unverified
2 | TM-Seq2seq | Test WER | 8.5 | — | Unverified
3 | TM-CTC | Test WER | 8.2 | — | Unverified
4 | CTC/Attention | Test WER | 7 | — | Unverified
5 | CTC/Attention | Test WER | 1.5 | — | Unverified
6 | Whisper-Flamingo | Test WER | 1.4 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Hyb-Conformer | Word Error Rate (WER) | 2.3 | — | Unverified
2 | Zero-AVSR | Word Error Rate (WER) | 1.5 | — | Unverified
3 | AV-HuBERT Large | Word Error Rate (WER) | 1.4 | — | Unverified
4 | Whisper-Flamingo | Word Error Rate (WER) | 0.76 | — | Unverified
5 | MMS-LLaMA | Word Error Rate (WER) | 0.74 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | AVCRFormer | Top-1 Accuracy | 98.81 | — | Unverified
2 | 2DCNN + BiLSTM + ResNet + MLF | Top-1 Accuracy | 98.76 | — | Unverified
3 | PBL | Top-1 Accuracy | 98.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ES³ Base* | Word Error Rate (WER) | 11 | — | Unverified
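Word Error Rate, the metric in most of the tables above, is the word-level edit distance (substitutions, insertions, deletions) between a hypothesis transcript and the reference, divided by the number of reference words and usually reported as a percentage. A minimal sketch of how it could be computed (a hypothetical helper, not the scoring code of any listed benchmark):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / reference length, as a percentage."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])  # match or substitute
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deletion out of 6 words
```

Note that a WER of 1.4 on a dataset like LRS3 means 1.4 errors per 100 reference words, so lower is better; libraries such as jiwer provide production-grade implementations with text normalization.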