SOTAVerified

Audio-Visual Speech Recognition

Audio-visual speech recognition is the task of transcribing a paired audio and visual stream into text.
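The "paired streams" framing can be made concrete: a typical AVSR front end extracts per-frame audio features (e.g. filterbanks) and visual features (e.g. lip-region embeddings) at a common frame rate, then fuses them before decoding to text. A minimal sketch of feature-level (early) fusion, assuming pre-extracted, time-aligned features; the function name and dimensions are illustrative, not from any specific system:

```python
import numpy as np

def early_fusion(audio_feats: np.ndarray, visual_feats: np.ndarray) -> np.ndarray:
    """Concatenate per-frame audio and visual features along the feature
    axis. Both inputs are (num_frames, dim) arrays sampled at the same
    frame rate, so frame counts must match."""
    assert audio_feats.shape[0] == visual_feats.shape[0], "streams must be time-aligned"
    return np.concatenate([audio_feats, visual_feats], axis=1)

# Hypothetical sizes: 100 frames of 80-dim filterbanks and 512-dim lip embeddings.
fused = early_fusion(np.zeros((100, 80)), np.zeros((100, 512)))
# fused has shape (100, 592); a downstream encoder-decoder maps it to text.
```

Many recent systems instead fuse later (cross-attention between modality encoders, as in several papers listed below), but the synchronization requirement is the same.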

Papers

Showing 51–100 of 100 papers

Transformer-Based Video Front-Ends for Audio-Visual Speech Recognition for Single and Multi-Person Video
MLCA-AVSR: Multi-Layer Cross Attention Fusion based Audio-Visual Speech Recognition
Uncovering the Visual Contribution in Audio-Visual Speech Recognition
Modality Attention for End-to-End Audio-visual Speech Recognition
MoHAVE: Mixture of Hierarchical Audio-Visual Experts for Robust Speech Recognition
MSRS: Training Multimodal Speech Recognition Models from Scratch with Sparse Mask Optimization
Multilingual Audio-Visual Speech Recognition with Hybrid CTC/RNN-T Fast Conformer
Multimodal Machine Learning: Integrating Language, Vision and Speech
VATLM: Visual-Audio-Text Pre-Training with Unified Masked Prediction for Speech Representation Learning
A Multi-Purpose Audio-Visual Corpus for Multi-Modal Persian Speech Recognition: the Arman-AV Dataset
ViCocktail: Automated Multi-Modal Data Collection for Vietnamese Audio-Visual Speech Recognition
Visual-Aware Speech Recognition for Noisy Scenarios
Part-based Lipreading for Audio-Visual Speech Recognition
Adapter-Based Multi-Agent AVSR Extension for Pre-Trained ASR Models
Quantitative Analysis of Audio-Visual Tasks: An Information-Theoretic Perspective
Recent Progress in the CUHK Dysarthric Speech Recognition System
Recognition of Isolated Words using Zernike and MFCC features for Audio Visual Speech Recognition
ReVISE: Self-Supervised Speech Resynthesis with Visual Input for Universal and Generalized Speech Enhancement
ReVISE: Self-Supervised Speech Resynthesis With Visual Input for Universal and Generalized Speech Regeneration
Audio-Visual Speech Recognition is Worth 32×32×8 Voxels
Audio Visual Speech Recognition using Deep Recurrent Neural Networks
Audio-Visual Speech Recognition With A Hybrid CTC/Attention Architecture
Auxiliary Multimodal LSTM for Audio-visual Speech Recognition and Lipreading
AV-CPL: Continuous Pseudo-Labeling for Audio-Visual Speech Recognition
AV-data2vec: Self-supervised Learning of Audio-Visual Speech Representations with Contextualized Target Representations
Adaptive Audio-Visual Speech Recognition via Matryoshka-Based Multimodal LLMs
Building a synchronous corpus of acoustic and 3D facial marker data for adaptive audio-visual speech synthesis
Chinese-LiPS: A Chinese audio-visual speech recognition dataset with Lip-reading and Presentation Slides
RUSAVIC Corpus: Russian Audio-Visual Speech in Cars
Cocktail-Party Audio-Visual Speech Recognition
Audio-Visual Speech and Gesture Recognition by Sensors of Mobile Devices
Scaling and Enhancing LLM-based AVSR: A Sparse Mixture of Projectors Approach
DCIM-AVSR: Efficient Audio-Visual Speech Recognition via Dual Conformer Interaction Module
Visual Speech Recognition
Deep Multimodal Learning for Audio-Visual Speech Recognition
Deep Multimodal Representation Learning from Temporal Data
Detecting Adversarial Attacks On Audiovisual Speech Recognition
SlideAVSR: A Dataset of Paper Explanation Videos for Audio-Visual Speech Recognition
Spatio-Temporal Attention Mechanism and Knowledge Distillation for Lip Reading
ES3: Evolving Self-Supervised Learning of Robust Audio-Visual Speech Representations
Fusing information streams in end-to-end audio-visual speech recognition
Streaming Audio-Visual Speech Recognition with Alignment Regularization
SwinLip: An Efficient Visual Speech Encoder for Lip Reading Using Swin Transformer
SUTAV: A Turkish Audio-Visual Database
Investigating the Lombard Effect Influence on End-to-End Audio-Visual Speech Recognition
XLAVS-R: Cross-Lingual Audio-Visual Speech Representation Learning for Noise-Robust Speech Perception
The Multimodal Information Based Speech Processing (MISP) 2023 Challenge: Audio-Visual Target Speaker Extraction
Kaggle Competition: Cantonese Audio-Visual Speech Recognition for In-car Commands
Audio-visual Recognition of Overlapped speech for the LRS2 dataset
Large-vocabulary Audio-visual Speech Recognition in Noisy Environments

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Hybrid CTC / Attention | Word Error Rate (WER) | 39.1 | | Unverified
2 | TM-Seq2seq | Test WER | 8.5 | | Unverified
3 | TM-CTC | Test WER | 8.2 | | Unverified
4 | CTC/Attention | Test WER | 7 | | Unverified
5 | CTC/Attention | Test WER | 1.5 | | Unverified
6 | Whisper-Flamingo | Test WER | 1.4 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Hyb-Conformer | Word Error Rate (WER) | 2.3 | | Unverified
2 | Zero-AVSR | Word Error Rate (WER) | 1.5 | | Unverified
3 | AV-HuBERT Large | Word Error Rate (WER) | 1.4 | | Unverified
4 | Whisper-Flamingo | Word Error Rate (WER) | 0.76 | | Unverified
5 | MMS-LLaMA | Word Error Rate (WER) | 0.74 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | AVCRFormer | Top-1 Accuracy | 98.81 | | Unverified
2 | 2DCNN + BiLSTM + ResNet + MLF | Top-1 Accuracy | 98.76 | | Unverified
3 | PBL | Top-1 Accuracy | 98.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ES³ Base* | Word Error Rate (WER) | 11 | | Unverified
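Word Error Rate (WER), the metric reported in most of these benchmarks, is the word-level edit distance (substitutions + insertions + deletions) between the hypothesis and the reference transcript, divided by the number of reference words. A minimal sketch of the standard computation:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# One inserted word against a 3-word reference: WER = 1/3
print(wer("the cat sat", "the cat sat on"))
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why low-WER leaderboard entries (e.g. below 1% on clean test sets) are usually quoted as percentages rather than ratios.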