SOTAVerified

Speech Recognition

Speech recognition is the task of converting spoken language into text: recognizing the words in an audio recording and transcribing them into written form. The goal is to transcribe speech accurately, in real time or from recorded audio, despite variation in accent, speaking rate, and background noise.

(Image credit: SpecAugment)

Papers

Showing 5051–5100 of 6433 papers

Title | Status | Hype
Warped Language Models for Noise Robust Language Understanding | - | 0
Wasserstein Dependency Measure for Representation Learning | - | 0
Wav2code: Restore Clean Speech Representations via Codebook Lookup for Noise-Robust ASR | - | 0
Wav2Prompt: End-to-End Speech Prompt Generation and Tuning For LLM in Zero and Few-shot Learning | - | 0
wav2vec and its current potential to Automatic Speech Recognition in German for the usage in Digital History: A comparative assessment of available ASR-technologies for the use in cultural heritage contexts | - | 0
Wav2vec-S: Semi-Supervised Pre-Training for Low-Resource ASR | - | 0
Wav2vec-Switch: Contrastive Learning from Original-noisy Speech Pairs for Robust Speech Recognition | - | 0
Wav-BERT: Cooperative Acoustic and Linguistic Representation Learning for Low-Resource Speech Recognition | - | 0
WavRAG: Audio-Integrated Retrieval Augmented Generation for Spoken Dialogue Models | - | 0
W-CTC: a Connectionist Temporal Classification Loss with Wild Cards | - | 0
WCTC-Biasing: Retraining-free Contextual Biasing ASR with Wildcard CTC-based Keyword Spotting and Inter-layer Biasing | - | 0
Weak Alignment Supervision from Hybrid Model Improves End-to-end ASR | - | 0
Weak-Attention Suppression For Transformer Based Speech Recognition | - | 0
Weakly Supervised Construction of ASR Systems with Massive Video Data | - | 0
Weakly-Supervised Speech Pre-training: A Case Study on Target Speech Recognition | - | 0
Weakly-supervised text-to-speech alignment confidence measure | - | 0
kNN For Whisper And Its Effect On Bias And Speaker Adaptation | - | 0
Web-style ranking and SLU combination for dialog state tracking | - | 0
WebWOZ: A Platform for Designing and Conducting Web-based Wizard of Oz Experiments | - | 0
Weight Averaging: A Simple Yet Effective Method to Overcome Catastrophic Forgetting in Automatic Speech Recognition | - | 0
Weighted-Sampling Audio Adversarial Example Attack | - | 0
Weight Factorization and Centralization for Continual Learning in Speech Recognition | - | 0
Weight-importance sparse training in keyword spotting | - | 0
WER-BERT: Automatic WER Estimation with BERT in a Balanced Ordinal Classification Paradigm | - | 0
WERd: Using Social Text Spelling Variants for Evaluating Dialectal Speech Recognition | - | 0
WER we are and WER we think we are | - | 0
WER We Stand: Benchmarking Urdu ASR Models | - | 0
WEST: Word Encoded Sequence Transducers | - | 0
WFST-Based Grapheme-to-Phoneme Conversion: Open Source tools for Alignment, Model-Building and Decoding | - | 0
Whale: Large-Scale multilingual ASR model with w2v-BERT and E-Branchformer with large speech data | - | 0
End-to-End Whisper to Natural Speech Conversion using Modified Transformer Network | - | 0
What Can an Accent Identifier Learn? Probing Phonetic and Prosodic Information in a Wav2vec2-based Accent Identification Model | - | 0
What do we need to build explainable AI systems for the medical domain? | - | 0
What has LeBenchmark Learnt about French Syntax? | - | 0
What is lost in Normalization? Exploring Pitfalls in Multilingual ASR Model Evaluations | - | 0
What shall we do with an hour of data? Speech recognition for the un- and under-served languages of Common Voice | - | 0
When and why are log-linear models self-normalizing? | - | 0
When Can Self-Attention Be Replaced by Feed Forward Layers? | - | 0
When CTC Training Meets Acoustic Landmarks | - | 0
When End-to-End is Overkill: Rethinking Cascaded Speech-to-Text Translation | - | 0
Where are we in Named Entity Recognition from Speech? | - | 0
Where are we in semantic concept extraction for Spoken Language Understanding? | - | 0
Which ASR should I choose for my dialogue system? | - | 0
Which French speech recognition system for assistant robots? | - | 0
Which phoneme-to-viseme maps best improve visual-only computer lip-reading? | - | 0
WhisperD: Dementia Speech Recognition and Filler Word Detection with Whisper | - | 0
Whisper Finetuning on Nepali Language | - | 0
Whisper in Focus: Enhancing Stuttered Speech Classification with Encoder Layer Optimization | - | 0
Whispering in Amharic: Fine-tuning Whisper for Low-resource Language | - | 0
Whispering in Norwegian: Navigating Orthographic and Dialectic Challenges | - | 0
Page 102 of 129

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | AmNet | Word Error Rate (WER) | 8.6 | - | Unverified
2 | HMM-(SAT)GMM | Word Error Rate (WER) | 8 | - | Unverified
3 | Local Prior Matching (Large Model) | Word Error Rate (WER) | 7.19 | - | Unverified
4 | Snips | Word Error Rate (WER) | 6.4 | - | Unverified
5 | Li-GRU | Word Error Rate (WER) | 6.2 | - | Unverified
6 | HMM-DNN + pNorm* | Word Error Rate (WER) | 5.5 | - | Unverified
7 | CTC + policy learning | Word Error Rate (WER) | 5.42 | - | Unverified
8 | Deep Speech 2 | Word Error Rate (WER) | 5.33 | - | Unverified
9 | Gated ConvNets | Word Error Rate (WER) | 4.8 | - | Unverified
10 | HMM-TDNN + iVectors | Word Error Rate (WER) | 4.8 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Local Prior Matching (Large Model) | Word Error Rate (WER) | 20.84 | - | Unverified
2 | Snips | Word Error Rate (WER) | 16.5 | - | Unverified
3 | Local Prior Matching (Large Model, ConvLM LM) | Word Error Rate (WER) | 15.28 | - | Unverified
4 | Deep Speech 2 | Word Error Rate (WER) | 13.25 | - | Unverified
5 | TDNN + pNorm + speed up/down speech | Word Error Rate (WER) | 12.5 | - | Unverified
6 | CTC-CRF 4gram-LM | Word Error Rate (WER) | 10.65 | - | Unverified
7 | Convolutional Speech Recognition | Word Error Rate (WER) | 10.47 | - | Unverified
8 | MT4SSL | Word Error Rate (WER) | 9.6 | - | Unverified
9 | Jasper DR 10x5 | Word Error Rate (WER) | 8.79 | - | Unverified
10 | Espresso | Word Error Rate (WER) | 8.7 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Deep Speech | Percentage error | 20 | - | Unverified
2 | DNN-HMM | Percentage error | 18.5 | - | Unverified
3 | CD-DNN | Percentage error | 16.1 | - | Unverified
4 | DNN | Percentage error | 16 | - | Unverified
5 | DNN + Dropout | Percentage error | 15 | - | Unverified
6 | DNN BMMI | Percentage error | 12.9 | - | Unverified
7 | HMM-TDNN + pNorm + speed up/down speech | Percentage error | 12.9 | - | Unverified
8 | DNN MPE | Percentage error | 12.9 | - | Unverified
9 | DNN MMI | Percentage error | 12.9 | - | Unverified
10 | CNN + Bi-RNN + CTC (speech to letters), 25.9% WER if trained only on SWB | Percentage error | 12.6 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSNN | Percentage error | 33.2 | - | Unverified
2 | LAS multitask with indicators sampling | Percentage error | 20.4 | - | Unverified
3 | Soft Monotonic Attention (ours, offline) | Percentage error | 20.1 | - | Unverified
4 | QCNN-10L-256FM | Percentage error | 19.64 | - | Unverified
5 | Bi-LSTM + skip connections w/ CTC | Percentage error | 17.7 | - | Unverified
6 | Bi-RNN + Attention | Percentage error | 17.6 | - | Unverified
7 | RNN-CRF on 24(x3) MFSC | Percentage error | 17.3 | - | Unverified
8 | CNN in time and frequency + dropout (17.6% w/o dropout) | Percentage error | 16.7 | - | Unverified
9 | Light Gated Recurrent Units | Percentage error | 16.7 | - | Unverified
10 | GRU | Percentage error | 16.6 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Att | Word Error Rate (WER) | 18.7 | - | Unverified
2 | CTC/Att | Word Error Rate (WER) | 6.7 | - | Unverified
3 | BRA-E | Word Error Rate (WER) | 6.63 | - | Unverified
4 | CTC-CRF 4gram-LM | Word Error Rate (WER) | 6.34 | - | Unverified
5 | BAT | Word Error Rate (WER) | 4.97 | - | Unverified
6 | Paraformer | Word Error Rate (WER) | 4.95 | - | Unverified
7 | U2 | Word Error Rate (WER) | 4.72 | - | Unverified
8 | UMA | Word Error Rate (WER) | 4.7 | - | Unverified
9 | Lightweight Transducer | Word Error Rate (WER) | 4.31 | - | Unverified
10 | CIF-HKD With LM | Word Error Rate (WER) | 4.1 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Jasper 10x3 | Word Error Rate (WER) | 6.9 | - | Unverified
2 | CNN over RAW speech (wav) | Word Error Rate (WER) | 5.6 | - | Unverified
3 | CTC-CRF 4gram-LM | Word Error Rate (WER) | 3.79 | - | Unverified
4 | Deep Speech 2 | Word Error Rate (WER) | 3.6 | - | Unverified
5 | HMM-DNN + pNorm* (open-vocabulary test set, i.e. harder) | Word Error Rate (WER) | 3.6 | - | Unverified
6 | TC-DNN-BLSTM-DNN | Word Error Rate (WER) | 3.5 | - | Unverified
7 | Convolutional Speech Recognition | Word Error Rate (WER) | 3.5 | - | Unverified
8 | Espresso | Word Error Rate (WER) | 3.4 | - | Unverified
9 | CTC-CRF VGG-BLSTM | Word Error Rate (WER) | 3.2 | - | Unverified
10 | Transformer with Relaxed Attention | Word Error Rate (WER) | 3.19 | - | Unverified
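The tables above report word error rate (WER) or percentage error. For reference, WER is the word-level edit (Levenshtein) distance between the hypothesis and the reference transcript, divided by the number of reference words; it counts substitutions, insertions, and deletions. A minimal sketch in plain Python (no ASR toolkit assumed; whitespace tokenization for illustration only — real evaluations also apply text normalization first):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + insertions + deletions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # i deletions to reach empty hypothesis
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # j insertions from empty reference
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1   # match or substitution
            d[i][j] = min(d[i - 1][j] + 1,                # deletion
                          d[i][j - 1] + 1,                # insertion
                          d[i - 1][j - 1] + cost)         # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deletion out of six words
```

Note that WER can exceed 1.0 (reported as >100%) when the hypothesis contains many insertions, which is why the tables quote it as a percentage.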