SOTAVerified

Audio-Visual Speech Recognition

Audio-visual speech recognition is the task of transcribing a paired audio and visual stream into text.

Papers

Showing 51–100 of 100 papers

Title | Status | Hype
Audio-Visual Speech and Gesture Recognition by Sensors of Mobile Devices | — | 0
AV-data2vec: Self-supervised Learning of Audio-Visual Speech Representations with Contextualized Target Representations | — | 0
A Multi-Purpose Audio-Visual Corpus for Multi-Modal Persian Speech Recognition: the Arman-AV Dataset | — | 0
OLKAVS: An Open Large-Scale Korean Audio-Visual Speech Dataset | Code | 1
ReVISE: Self-Supervised Speech Resynthesis With Visual Input for Universal and Generalized Speech Regeneration | — | 0
ReVISE: Self-Supervised Speech Resynthesis with Visual Input for Universal and Generalized Speech Enhancement | — | 0
Jointly Learning Visual and Auditory Speech Representations from Raw Data | Code | 1
Leveraging Modality-specific Representations for Audio-visual Speech Recognition via Reinforcement Learning | — | 0
VATLM: Visual-Audio-Text Pre-Training with Unified Masked Prediction for Speech Representation Learning | — | 0
Streaming Audio-Visual Speech Recognition with Alignment Regularization | — | 0
Visual Context-driven Audio Feature Enhancement for Robust End-to-End Audio-Visual Speech Recognition | Code | 1
Kaggle Competition: Cantonese Audio-Visual Speech Recognition for In-car Commands | — | 0
CI-AVSR: A Cantonese Audio-Visual Speech Dataset for In-car Command Recognition | — | 0
RUSAVIC Corpus: Russian Audio-Visual Speech in Cars | — | 0
Leveraging Unimodal Self-Supervised Learning for Multimodal Audio-Visual Speech Recognition | Code | 1
Learning Contextually Fused Audio-visual Representations for Audio-visual Speech Recognition | — | 0
Transformer-Based Video Front-Ends for Audio-Visual Speech Recognition for Single and Multi-Person Video | — | 0
Recent Progress in the CUHK Dysarthric Speech Recognition System | — | 0
CI-AVSR: A Cantonese Audio-Visual Speech Dataset for In-car Command Recognition | Code | 1
Robust Self-Supervised Audio-Visual Speech Recognition | Code | 2
Leveraging Uni-Modal Self-Supervised Learning for Multimodal Audio-visual Speech Recognition | — | 0
Audio-Visual Speech Recognition is Worth 32x32x8 Voxels | — | 0
Large-vocabulary Audio-visual Speech Recognition in Noisy Environments | — | 0
Spatio-Temporal Attention Mechanism and Knowledge Distillation for Lip Reading | — | 0
Fusing information streams in end-to-end audio-visual speech recognition | — | 0
End-to-end Audio-visual Speech Recognition with Conformers | Code | 1
Part-based Lipreading for Audio-Visual Speech Recognition | — | 0
AV Taris: Online Audio-Visual Speech Recognition | Code | 1
Lip Graph Assisted Audio-Visual Speech Recognition Using Bidirectional Synchronous Fusion | — | 0
Should we hard-code the recurrence concept or learn it instead? Exploring the Transformer architecture for Audio-Visual Speech Recognition | Code | 1
Discriminative Multi-modality Speech Recognition | Code | 1
How to Teach DNNs to Pay Attention to the Visual Modality in Speech Recognition | Code | 1
Audio-visual Recognition of Overlapped Speech for the LRS2 Dataset | — | 0
Detecting Adversarial Attacks On Audiovisual Speech Recognition | — | 0
Recurrent Neural Network Transducer for Audio-Visual Speech Recognition | Code | 0
Investigating the Lombard Effect Influence on End-to-End Audio-Visual Speech Recognition | — | 0
Modality Attention for End-to-End Audio-visual Speech Recognition | — | 0
Audio-Visual Speech Recognition With A Hybrid CTC/Attention Architecture | — | 0
Deep Audio-Visual Speech Recognition | Code | 1
LRS3-TED: a large-scale dataset for visual speech recognition | Code | 0
Towards Lipreading Sentences with Active Appearance Models | — | 0
Multimodal Machine Learning: Integrating Language, Vision and Speech | — | 0
Deep Multimodal Representation Learning from Temporal Data | — | 0
Auxiliary Multimodal LSTM for Audio-visual Speech Recognition and Lipreading | — | 0
Audio Visual Speech Recognition using Deep Recurrent Neural Networks | — | 0
Deep Multimodal Learning for Audio-Visual Speech Recognition | — | 0
Visual Speech Recognition | — | 0
Recognition of Isolated Words using Zernike and MFCC features for Audio Visual Speech Recognition | — | 0
Building a synchronous corpus of acoustic and 3D facial marker data for adaptive audio-visual speech synthesis | — | 0
SUTAV: A Turkish Audio-Visual Database | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Hybrid CTC / Attention | Word Error Rate (WER) | 39.1 | — | Unverified
2 | TM-Seq2seq | Test WER | 8.5 | — | Unverified
3 | TM-CTC | Test WER | 8.2 | — | Unverified
4 | CTC/Attention | Test WER | 7 | — | Unverified
5 | CTC/Attention | Test WER | 1.5 | — | Unverified
6 | Whisper-Flamingo | Test WER | 1.4 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Hyb-Conformer | Word Error Rate (WER) | 2.3 | — | Unverified
2 | Zero-AVSR | Word Error Rate (WER) | 1.5 | — | Unverified
3 | AV-HuBERT Large | Word Error Rate (WER) | 1.4 | — | Unverified
4 | Whisper-Flamingo | Word Error Rate (WER) | 0.76 | — | Unverified
5 | MMS-LLaMA | Word Error Rate (WER) | 0.74 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | AVCRFormer | Top-1 Accuracy | 98.81 | — | Unverified
2 | 2DCNN + BiLSTM + ResNet + MLF | Top-1 Accuracy | 98.76 | — | Unverified
3 | PBL | Top-1 Accuracy | 98.3 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ES³ Base* | Word Error Rate (WER) | 11 | — | Unverified
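The dominant metric above is Word Error Rate (WER): the word-level edit distance (substitutions + insertions + deletions) between the hypothesis and reference transcripts, divided by the number of reference words. Leaderboards typically compute it with scoring tools such as sclite or the jiwer package; as a minimal illustrative sketch (the `wer` function below is our own, not any site's implementation):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: Levenshtein distance over words / reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])  # match or substitution
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[-1][-1] / max(len(ref), 1)

# e.g. wer("the cat sat on the mat", "the cat sat on mat")
# is one deletion over six reference words, i.e. 1/6 ≈ 0.167 (16.7% WER)
```

Note that WER is conventionally reported as a percentage, so the 0.74 claimed for MMS-LLaMA above means roughly 7 word errors per 1000 reference words.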