SOTAVerified

Speech Recognition

Speech recognition is the task of converting spoken language into text: recognizing the words in an audio signal and transcribing them into written form. The goal is to transcribe speech accurately, whether in real time or from recorded audio, while handling factors such as accents, speaking rate, and background noise.
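Transcription accuracy in this task is most commonly measured by Word Error Rate (WER), the metric reported in the benchmark tables below: the number of word substitutions, deletions, and insertions needed to turn the hypothesis into the reference, divided by the reference length. A minimal sketch of computing it via word-level edit distance (the function name is illustrative, not from any particular toolkit):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("sat" -> "sit") plus one insertion ("on") over 3 reference words:
print(word_error_rate("the cat sat", "the cat sit on"))  # → 0.6666666666666666
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why leaderboard values are usually quoted as percentages.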

(Image credit: SpecAugment)

Papers

Showing 401–450 of 6433 papers

| Title | Status | Hype |
| --- | --- | --- |
| Joint Masked CPC and CTC Training for ASR | Code | 1 |
| Kaleidoscope: An Efficient, Learnable Representation For All Structured Linear Maps | Code | 1 |
| CB-Conformer: Contextual biasing Conformer for biased word recognition | Code | 1 |
| Knowledge Distillation from BERT Transformer to Speech Transformer for Intent Classification | Code | 1 |
| Kosp2e: Korean Speech to English Translation Corpus | Code | 1 |
| KoSpeech: Open-Source Toolkit for End-to-End Korean Speech Recognition | Code | 1 |
| CI-AVSR: A Cantonese Audio-Visual Speech Dataset for In-car Command Recognition | Code | 1 |
| Advancing Test-Time Adaptation in Wild Acoustic Test Settings | Code | 1 |
| Can we use Common Voice to train a Multi-Speaker TTS system? | Code | 1 |
| CAPE: Encoding Relative Positions with Continuous Augmented Positional Embeddings | Code | 1 |
| CIF: Continuous Integrate-and-Fire for End-to-End Speech Recognition | Code | 1 |
| Late reverberation suppression using U-nets | Code | 1 |
| Adaptation of Whisper models to child speech recognition | Code | 1 |
| Layer-wise Analysis of a Self-supervised Speech Representation Model | Code | 1 |
| Framework for Curating Speech Datasets and Evaluating ASR Systems: A Case Study for Polish | Code | 1 |
| Learning Multi-modal Representations by Watching Hundreds of Surgical Video Lectures | Code | 1 |
| Learning to Detect Noisy Labels Using Model-Based Features | Code | 1 |
| Learning to Rank Microphones for Distant Speech Recognition | Code | 1 |
| Less Peaky and More Accurate CTC Forced Alignment by Label Priors | Code | 1 |
| Calibrating Transformers via Sparse Gaussian Processes | Code | 1 |
| Low-Latency Speech Separation Guided Diarization for Telephone Conversations | Code | 1 |
| Leveraging Unimodal Self-Supervised Learning for Multimodal Audio-Visual Speech Recognition | Code | 1 |
| Adapting End-to-End Speech Recognition for Readable Subtitles | Code | 1 |
| A context-aware knowledge transferring strategy for CTC-based ASR | Code | 1 |
| Can Contextual Biasing Remain Effective with Whisper and GPT-2? | Code | 1 |
| LightHuBERT: Lightweight and Configurable Speech Representation Learning with Once-for-All Hidden-Unit BERT | Code | 1 |
| Brouhaha: multi-task training for voice activity detection, speech-to-noise ratio, and C50 room acoustics estimation | Code | 1 |
| Bridging the Granularity Gap for Acoustic Modeling | Code | 1 |
| Byakto Speech: Real-time long speech synthesis with convolutional neural network: Transfer learning from English to Bangla | Code | 1 |
| Listen, Adapt, Better WER: Source-free Single-utterance Test-time Adaptation for Automatic Speech Recognition | Code | 1 |
| Can We Read Speech Beyond the Lips? Rethinking RoI Selection for Deep Visual Speech Recognition | Code | 1 |
| Losses Can Be Blessings: Routing Self-Supervised Speech Representations Towards Efficient Multilingual and Multitask Speech Processing | Code | 1 |
| Making More of Little Data: Improving Low-Resource Automatic Speech Recognition Using Data Augmentation | Code | 1 |
| Computer-Generated Music for Tabletop Role-Playing Games | Code | 1 |
| Deep Compressive Offloading: Speeding Up Neural Network Inference by Trading Edge Computation for Network Latency | Code | 1 |
| MathSpeech: Leveraging Small LMs for Accurate Conversion in Mathematical Speech-to-Formula | Code | 1 |
| MediaSpeech: Multilanguage ASR Benchmark and Dataset | Code | 1 |
| MelHuBERT: A simplified HuBERT on Mel spectrograms | Code | 1 |
| BLSP: Bootstrapping Language-Speech Pre-training via Behavior Alignment of Continuation Writing | Code | 1 |
| Meta-Transfer Learning for Code-Switched Speech Recognition | Code | 1 |
| Minimum Word Error Rate Training for Attention-based Sequence-to-Sequence Models | Code | 1 |
| MIR-GAN: Refining Frame-Level Modality-Invariant Representations with Adversarial Network for Audio-Visual Speech Recognition | Code | 1 |
| BrainBERT: Self-supervised representation learning for intracranial recordings | Code | 1 |
| Monotonic Chunkwise Attention | Code | 1 |
| BIG-C: a Multimodal Multi-Purpose Dataset for Bemba | Code | 1 |
| Beyond Performance Plateaus: A Comprehensive Study on Scalability in Speech Enhancement | Code | 1 |
| BembaSpeech: A Speech Recognition Corpus for the Bemba Language | Code | 1 |
| BASPRO: a balanced script producer for speech corpus collection based on the genetic algorithm | Code | 1 |
| BENDR: using transformers and a contrastive self-supervised learning task to learn from massive amounts of EEG data | Code | 1 |
| BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for Backdoor Defense Evaluation | Code | 1 |
Page 9 of 129

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | AmNet | Word Error Rate (WER) | 8.6 | | Unverified |
| 2 | HMM-(SAT)GMM | Word Error Rate (WER) | 8 | | Unverified |
| 3 | Local Prior Matching (Large Model) | Word Error Rate (WER) | 7.19 | | Unverified |
| 4 | Snips | Word Error Rate (WER) | 6.4 | | Unverified |
| 5 | Li-GRU | Word Error Rate (WER) | 6.2 | | Unverified |
| 6 | HMM-DNN + pNorm* | Word Error Rate (WER) | 5.5 | | Unverified |
| 7 | CTC + policy learning | Word Error Rate (WER) | 5.42 | | Unverified |
| 8 | Deep Speech 2 | Word Error Rate (WER) | 5.33 | | Unverified |
| 9 | HMM-TDNN + iVectors | Word Error Rate (WER) | 4.8 | | Unverified |
| 10 | Gated ConvNets | Word Error Rate (WER) | 4.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Local Prior Matching (Large Model) | Word Error Rate (WER) | 20.84 | | Unverified |
| 2 | Snips | Word Error Rate (WER) | 16.5 | | Unverified |
| 3 | Local Prior Matching (Large Model, ConvLM LM) | Word Error Rate (WER) | 15.28 | | Unverified |
| 4 | Deep Speech 2 | Word Error Rate (WER) | 13.25 | | Unverified |
| 5 | TDNN + pNorm + speed up/down speech | Word Error Rate (WER) | 12.5 | | Unverified |
| 6 | CTC-CRF 4gram-LM | Word Error Rate (WER) | 10.65 | | Unverified |
| 7 | Convolutional Speech Recognition | Word Error Rate (WER) | 10.47 | | Unverified |
| 8 | MT4SSL | Word Error Rate (WER) | 9.6 | | Unverified |
| 9 | Jasper DR 10x5 | Word Error Rate (WER) | 8.79 | | Unverified |
| 10 | Espresso | Word Error Rate (WER) | 8.7 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Deep Speech | Percentage error | 20 | | Unverified |
| 2 | DNN-HMM | Percentage error | 18.5 | | Unverified |
| 3 | CD-DNN | Percentage error | 16.1 | | Unverified |
| 4 | DNN | Percentage error | 16 | | Unverified |
| 5 | DNN + Dropout | Percentage error | 15 | | Unverified |
| 6 | DNN BMMI | Percentage error | 12.9 | | Unverified |
| 7 | DNN MPE | Percentage error | 12.9 | | Unverified |
| 8 | DNN MMI | Percentage error | 12.9 | | Unverified |
| 9 | HMM-TDNN + pNorm + speed up/down speech | Percentage error | 12.9 | | Unverified |
| 10 | HMM-DNN + sMBR | Percentage error | 12.6 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LSNN | Percentage error | 33.2 | | Unverified |
| 2 | LAS multitask with indicators sampling | Percentage error | 20.4 | | Unverified |
| 3 | Soft Monotonic Attention (ours, offline) | Percentage error | 20.1 | | Unverified |
| 4 | QCNN-10L-256FM | Percentage error | 19.64 | | Unverified |
| 5 | Bi-LSTM + skip connections w/ CTC | Percentage error | 17.7 | | Unverified |
| 6 | Bi-RNN + Attention | Percentage error | 17.6 | | Unverified |
| 7 | RNN-CRF on 24(x3) MFSC | Percentage error | 17.3 | | Unverified |
| 8 | CNN in time and frequency + dropout, 17.6% w/o dropout | Percentage error | 16.7 | | Unverified |
| 9 | Light Gated Recurrent Units | Percentage error | 16.7 | | Unverified |
| 10 | GRU | Percentage error | 16.6 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Att | Word Error Rate (WER) | 18.7 | | Unverified |
| 2 | CTC/Att | Word Error Rate (WER) | 6.7 | | Unverified |
| 3 | BRA-E | Word Error Rate (WER) | 6.63 | | Unverified |
| 4 | CTC-CRF 4gram-LM | Word Error Rate (WER) | 6.34 | | Unverified |
| 5 | BAT | Word Error Rate (WER) | 4.97 | | Unverified |
| 6 | Paraformer | Word Error Rate (WER) | 4.95 | | Unverified |
| 7 | U2 | Word Error Rate (WER) | 4.72 | | Unverified |
| 8 | UMA | Word Error Rate (WER) | 4.7 | | Unverified |
| 9 | Lightweight Transducer | Word Error Rate (WER) | 4.31 | | Unverified |
| 10 | CIF-HKD With LM | Word Error Rate (WER) | 4.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Jasper 10x3 | Word Error Rate (WER) | 6.9 | | Unverified |
| 2 | CNN over RAW speech (wav) | Word Error Rate (WER) | 5.6 | | Unverified |
| 3 | CTC-CRF 4gram-LM | Word Error Rate (WER) | 3.79 | | Unverified |
| 4 | Deep Speech 2 | Word Error Rate (WER) | 3.6 | | Unverified |
| 5 | test-set on open vocabulary (i.e. harder), model = HMM-DNN + pNorm* | Word Error Rate (WER) | 3.6 | | Unverified |
| 6 | Convolutional Speech Recognition | Word Error Rate (WER) | 3.5 | | Unverified |
| 7 | TC-DNN-BLSTM-DNN | Word Error Rate (WER) | 3.5 | | Unverified |
| 8 | Espresso | Word Error Rate (WER) | 3.4 | | Unverified |
| 9 | CTC-CRF VGG-BLSTM | Word Error Rate (WER) | 3.2 | | Unverified |
| 10 | Transformer with Relaxed Attention | Word Error Rate (WER) | 3.19 | | Unverified |