SOTAVerified

Automatic Speech Recognition (ASR)

Automatic Speech Recognition (ASR) converts spoken language into written text. ASR systems transcribe speech, often in real time, letting people interact with computers, mobile devices, and other technology by voice. The goal is to transcribe speech accurately despite variations in accent, pronunciation, and speaking style, as well as background noise and other factors that degrade audio quality.

Papers

Showing 276–300 of 3012 papers

| Title | Status | Hype |
|---|---|---|
| Unsupervised pretraining transfers well across languages | Code | 1 |
| Continuous speech separation: dataset and analysis | Code | 1 |
| Common Voice: A Massively-Multilingual Speech Corpus | Code | 1 |
| Deep Contextualized Acoustic Representations For Semi-Supervised Speech Recognition | Code | 1 |
| Espresso: A Fast End-to-end Neural Speech Recognition Toolkit | Code | 1 |
| RWTH ASR Systems for LibriSpeech: Hybrid vs Attention -- w/o Data Augmentation | Code | 1 |
| SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition | Code | 1 |
| Mitigating the Impact of Speech Recognition Errors on Spoken Question Answering by Adversarial Domain Adaptation | Code | 1 |
| How2: A Large-scale Dataset for Multimodal Language Understanding | Code | 1 |
| Deep Audio-Visual Speech Recognition | Code | 1 |
| Attention-based Audio-Visual Fusion for Robust Automatic Speech Recognition | Code | 1 |
| Open Source Automatic Speech Recognition for German | Code | 1 |
| Zero-shot keyword spotting for visual speech recognition in-the-wild | Code | 1 |
| Word Error Rate Estimation for Speech Recognition: e-WER | Code | 1 |
| Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces | Code | 1 |
| Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition | Code | 1 |
| Attentive Sequence-to-Sequence Learning for Diacritic Restoration of Yorùbá Language Text | Code | 1 |
| State-of-the-art Speech Recognition With Sequence-to-Sequence Models | Code | 1 |
| Minimum Word Error Rate Training for Attention-based Sequence-to-Sequence Models | Code | 1 |
| A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | Code | 1 |
| Single-Channel Multi-Speaker Separation using Deep Clustering | Code | 1 |
| NonverbalTTS: A Public English Corpus of Text-Aligned Nonverbal Vocalizations with Emotion Annotations for Text-to-Speech | | 0 |
| WhisperKit: On-device Real-time ASR with Billion-Scale Transformers | | 0 |
| Lightweight Target-Speaker-Based Overlap Transcription for Practical Streaming ASR | | 0 |
| AI-Generated Song Detection via Lyrics Transcripts | Code | 0 |
Page 12 of 121

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | TM-CTC | Test WER | 10.1 | | Unverified |
| 2 | TM-seq2seq | Test WER | 9.7 | | Unverified |
| 3 | CTC/attention | Test WER | 8.2 | | Unverified |
| 4 | LF-MMI TDNN | Test WER | 6.7 | | Unverified |
| 5 | Whisper-LLaMA | Test WER | 6.6 | | Unverified |
| 6 | End2end Conformer | Test WER | 3.9 | | Unverified |
| 7 | End2end Conformer | Test WER | 3.7 | | Unverified |
| 8 | MoCo + wav2vec (w/o extLM) | Test WER | 2.7 | | Unverified |
| 9 | CTC/Attention | Test WER | 1.5 | | Unverified |
| 10 | Whisper | Test WER | 1.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SpatialNet | CER | 14.5 | | Unverified |
| 2 | CleanMel-L-mask | CER | 14.4 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Conformer | Test WER | 15.32 | | Unverified |
| 2 | Whisper-largev3-finetuned | Test WER | 10.82 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Conformer Transducer | WER (%) | 1.89 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DistillAV | WER | 1.4 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Conformer Transducer | WER (%) | 4.28 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Conformer Transducer | WER (%) | 8.04 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Conformer Transducer | WER (%) | 3.36 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Conformer Transducer (German) | WER (%) | 8.98 | | Unverified |
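The benchmark tables above report word error rate (WER) and character error rate (CER). Both are the Levenshtein (edit) distance between a hypothesis transcript and the reference, normalized by the reference length, computed at the word or character level respectively. A minimal sketch in plain Python (whitespace tokenization here is a simplifying assumption; the listed papers may apply their own text normalization before scoring):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (two-row DP)."""
    prev = list(range(len(hyp) + 1))  # distances for the empty-ref prefix
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution (0 if match)
        prev = cur
    return prev[-1]

def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference word count."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    """Character error rate: char-level edit distance / reference length."""
    return edit_distance(list(reference), list(hypothesis)) / len(reference)
```

For example, `wer("the cat sat", "the cat sat down")` is 1/3 (one inserted word against a three-word reference), which would be reported as 33.3 in the WER (%) tables above.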