SOTAVerified

Automatic Speech Recognition (ASR)

Automatic Speech Recognition (ASR) converts spoken language into written text, often in real time, letting people interact with computers, mobile devices, and other technology by voice. An ASR system must transcribe speech accurately despite variations in accent, pronunciation, and speaking style, as well as background noise and other factors that degrade audio quality.

Papers

Showing 351–375 of 3,012 papers

| Title | Status | Hype |
|---|---|---|
| Exploring Generative Error Correction for Dysarthric Speech Recognition | Code | 0 |
| KIT's Low-resource Speech Translation Systems for IWSLT2025: System Enhancement with Synthetic Data and Model Regularization | | 0 |
| In-context Language Learning for Endangered Languages in Speech Recognition | | 0 |
| Continuous Learning for Children's ASR: Overcoming Catastrophic Forgetting with Elastic Weight Consolidation and Synaptic Intelligence | | 0 |
| Robust fine-tuning of speech recognition models via model merging: application to disordered speech | | 0 |
| CHSER: A Dataset and Case Study on Generative Speech Error Correction for Child ASR | Code | 0 |
| VietASR: Achieving Industry-level Vietnamese ASR with 50-hour labeled data and Large-Scale Speech Pretraining | | 0 |
| LLM-based Generative Error Correction for Rare Words with Synthetic Data and Phonetic Context | Code | 0 |
| An Effective Training Framework for Light-Weight Automatic Speech Recognition Models | | 0 |
| SoccerChat: Integrating Multimodal Data for Enhanced Soccer Game Understanding | Code | 0 |
| Large Language Models based ASR Error Correction for Child Conversations | | 0 |
| In-Context Learning Boosts Speech Recognition via Human-like Adaptation to Speakers and Language Varieties | | 0 |
| PersonaTAB: Predicting Personality Traits using Textual, Acoustic, and Behavioral Cues in Fully-Duplex Speech Dialogs | Code | 0 |
| Towards Inclusive ASR: Investigating Voice Conversion for Dysarthric Speech Recognition in Low-Resource Languages | Code | 0 |
| From Weak Labels to Strong Results: Utilizing 5,000 Hours of Noisy Classroom Transcripts with Minimal Accurate Data | | 0 |
| LegoSLM: Connecting LLM with Speech Encoder using CTC Posteriors | | 0 |
| ASR-FAIRBENCH: Measuring and Benchmarking Equity Across Speech Recognition Systems | | 0 |
| Survey of End-to-End Multi-Speaker Automatic Speech Recognition for Monaural Audio | | 0 |
| Automatic Speech Recognition for African Low-Resource Languages: Challenges and Future Directions | | 0 |
| Multi-Stage Speaker Diarization for Noisy Classrooms | Code | 0 |
| LipDiffuser: Lip-to-Speech Generation with Conditional Diffusion Models | | 0 |
| Remote Rowhammer Attack using Adversarial Observations on Federated Learning Clients | | 0 |
| Teochew-Wild: The First In-the-wild Teochew Dataset with Orthographic Annotations | | 0 |
| SepALM: Audio Language Models Are Error Correctors for Robust Speech Separation | | 0 |
| Fairness of Automatic Speech Recognition in Cleft Lip and Palate Speech | | 0 |
Page 15 of 121

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | TM-CTC | Test WER | 10.1 | | Unverified |
| 2 | TM-seq2seq | Test WER | 9.7 | | Unverified |
| 3 | CTC/attention | Test WER | 8.2 | | Unverified |
| 4 | LF-MMI TDNN | Test WER | 6.7 | | Unverified |
| 5 | Whisper-LLaMA | Test WER | 6.6 | | Unverified |
| 6 | End2end Conformer | Test WER | 3.9 | | Unverified |
| 7 | End2end Conformer | Test WER | 3.7 | | Unverified |
| 8 | MoCo + wav2vec (w/o extLM) | Test WER | 2.7 | | Unverified |
| 9 | CTC/Attention | Test WER | 1.5 | | Unverified |
| 10 | Whisper | Test WER | 1.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SpatialNet | CER | 14.5 | | Unverified |
| 2 | CleanMel-L-mask | CER | 14.4 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Conformer | Test WER | 15.32 | | Unverified |
| 2 | Whisper-largev3-finetuned | Test WER | 10.82 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Conformer Transducer | WER (%) | 1.89 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DistillAV | WER | 1.4 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Conformer Transducer | WER (%) | 4.28 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Conformer Transducer | WER (%) | 8.04 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Conformer Transducer | WER (%) | 3.36 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Conformer Transducer (German) | WER (%) | 8.98 | | Unverified |
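The WER values in the tables above are word error rates: the word-level edit distance (substitutions + deletions + insertions) between the hypothesis and the reference transcript, divided by the number of reference words; CER is the same computation over characters. A minimal sketch of the standard computation (the `wer` function here is illustrative, not code from any listed system):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming (Levenshtein) edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j          # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat down", "the cat sat down"))  # 0.0
print(wer("the cat sat down", "the bat sat down"))  # 0.25 (1 substitution / 4 words)
```

Multiplying by 100 gives the percentage form used in the "WER (%)" rows; a WER above 1.0 (100%) is possible when the hypothesis contains many insertions.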