SOTAVerified

Automatic Speech Recognition (ASR)

Automatic Speech Recognition (ASR) converts spoken language into written text, often in real time, allowing people to interact with computers, mobile devices, and other technology by voice. The goal of ASR is to transcribe speech accurately despite variations in accent, pronunciation, and speaking style, as well as background noise and other factors that degrade audio quality.
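Transcription accuracy in the benchmarks below is reported as word error rate (WER) or character error rate (CER). As a sketch of how WER is conventionally computed (the textbook Levenshtein-based definition, not necessarily this site's exact verification pipeline), in Python:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (single-row DP)."""
    dp = list(range(len(hyp) + 1))  # distances from the empty reference
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i      # prev holds dist(i-1, j-1)
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,            # deletion
                dp[j - 1] + 1,        # insertion
                prev + (r != h),      # substitution (free if tokens match)
            )
    return dp[-1]

def wer(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # 1 deletion / 6 words
```

CER is the same computation over characters instead of whitespace-split words, which is why it is preferred for languages without word boundaries. Note that WER can exceed 1.0 when the hypothesis contains many insertions.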

Papers

Showing 151–175 of 3012 papers

| Title | Status | Hype |
|-------|--------|------|
| Let SSMs be ConvNets: State-space Modeling with Optimal Tensor Contractions | Code | 0 |
| Investigation of Whisper ASR Hallucinations Induced by Non-Speech Audio | — | 0 |
| GEC-RAG: Improving Generative Error Correction via Retrieval-Augmented Generation for Automatic Speech Recognition Systems | — | 0 |
| A Benchmark of French ASR Systems Based on Error Severity | — | 0 |
| Unsupervised Rhythm and Voice Conversion of Dysarthric to Healthy Speech for ASR | — | 0 |
| Adapting Whisper for Regional Dialects: Enhancing Public Services for Vulnerable Populations in the United Kingdom | — | 0 |
| persoDA: Personalized Data Augmentation for Personalized ASR | — | 0 |
| Selective Attention Merging for low resource tasks: A case study of Child ASR | Code | 0 |
| AdaCS: Adaptive Normalization for Enhanced Code-Switching ASR | Code | 0 |
| Speech Recognition for Automatically Assessing Afrikaans and isiXhosa Preschool Oral Narratives | — | 0 |
| Discrete Speech Unit Extraction via Independent Component Analysis | Code | 0 |
| Benchmarking Rotary Position Embeddings for Automatic Speech Recognition | — | 0 |
| Contextual ASR Error Handling with LLMs Augmentation for Goal-Oriented Conversational AI | — | 0 |
| Comparing Self-Supervised Learning Models Pre-Trained on Human Speech and Animal Vocalizations for Bioacoustics Processing | Code | 0 |
| Universal-2-TF: Robust All-Neural Text Formatting for ASR | — | 0 |
| Samba-ASR: State-Of-The-Art Speech Recognition Leveraging Structured State-Space Models | — | 0 |
| Listening and Seeing Again: Generative Error Correction for Audio-Visual Speech Recognition | Code | 0 |
| Improving Transducer-Based Spoken Language Understanding with Self-Conditioned CTC and Knowledge Transfer | — | 0 |
| Advancing Singlish Understanding: Bridging the Gap with Datasets and Multimodal Models | Code | 0 |
| LiveCC: Learning Video LLM with Streaming Speech Transcription at Scale | — | 0 |
| DiCoW: Diarization-Conditioned Whisper for Target Speaker Automatic Speech Recognition | Code | 2 |
| Zero-resource Speech Translation and Recognition with LLMs | — | 0 |
| UME: Upcycling Mixture-of-Experts for Scalable and Efficient Automatic Speech Recognition | — | 0 |
| Enhancing Multilingual ASR for Unseen Languages via Language Embedding Modeling | — | 0 |
| Transducer-Llama: Integrating LLMs into Streamable Transducer-based Speech Recognition | — | 0 |
Page 7 of 121

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TM-CTC | Test WER | 10.1 | — | Unverified |
| 2 | TM-seq2seq | Test WER | 9.7 | — | Unverified |
| 3 | CTC/attention | Test WER | 8.2 | — | Unverified |
| 4 | LF-MMI TDNN | Test WER | 6.7 | — | Unverified |
| 5 | Whisper-LLaMA | Test WER | 6.6 | — | Unverified |
| 6 | End2end Conformer | Test WER | 3.9 | — | Unverified |
| 7 | End2end Conformer | Test WER | 3.7 | — | Unverified |
| 8 | MoCo + wav2vec (w/o extLM) | Test WER | 2.7 | — | Unverified |
| 9 | CTC/Attention | Test WER | 1.5 | — | Unverified |
| 10 | Whisper | Test WER | 1.3 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SpatialNet | CER | 14.5 | — | Unverified |
| 2 | CleanMel-L-mask | CER | 14.4 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Conformer | Test WER | 15.32 | — | Unverified |
| 2 | Whisper-largev3-finetuned | Test WER | 10.82 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Conformer Transducer | WER (%) | 1.89 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | DistillAV | WER | 1.4 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Conformer Transducer | WER (%) | 4.28 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Conformer Transducer | WER (%) | 8.04 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Conformer Transducer | WER (%) | 3.36 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Conformer Transducer (German) | WER (%) | 8.98 | — | Unverified |