SOTAVerified

Automatic Speech Recognition (ASR)

Automatic Speech Recognition (ASR) converts spoken language into written text, often in real time, allowing people to interact with computers, mobile devices, and other technology by voice. The goal is to transcribe speech accurately despite variations in accent, pronunciation, and speaking style, as well as background noise and other factors that degrade speech quality.
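The benchmark tables on this page score systems by Word Error Rate (WER) or Character Error Rate (CER): the number of word- (or character-) level substitutions, deletions, and insertions divided by the reference length. As a rough illustration only (a minimal sketch, not the scorer this site uses; production tools such as NIST sclite add text normalization and alignment reports), WER can be computed with a standard Levenshtein dynamic program over words:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein edit-distance table over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # i deletions to reach an empty hypothesis
    for j in range(len(hyp) + 1):
        d[0][j] = j          # j insertions from an empty reference
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[-1][-1] / len(ref)
```

CER is the same computation applied to characters instead of words, which is why it is the usual metric for languages such as Mandarin Chinese where word segmentation is ambiguous.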

Papers

Showing 1801–1825 of 3012 papers

Title | Status | Hype
VAD-free Streaming Hybrid CTC/Attention ASR for Unsegmented Recording | — | 0
Zero-shot Speech Translation | — | 0
The IWSLT 2021 BUT Speech Translation Systems | — | 0
A Configurable Multilingual Model is All You Need to Recognize All Languages | — | 0
Perceptual-based deep-learning denoiser as a defense against adversarial attacks on ASR systems | — | 0
Noisy Training Improves E2E ASR for the Edge | — | 0
On lattice-free boosted MMI training of HMM and CTC-based full-context ASR models | — | 0
Loss Prediction: End-to-End Active Learning Approach For Speech Recognition | — | 0
Improved Language Identification Through Cross-Lingual Self-Supervised Learning | — | 0
End-to-End Rich Transcription-Style Automatic Speech Recognition with Semi-Supervised Learning | — | 0
Advancing CTC-CRF Based End-to-End Speech Recognition with Wordpieces and Conformers | — | 0
A Comparative Study of Modular and Joint Approaches for Speaker-Attributed ASR on Monaural Long-Form Audio | — | 0
Instant One-Shot Word-Learning for Context-Specific Neural Sequence-to-Sequence Speech Recognition | Code | 0
Investigation of Practical Aspects of Single Channel Speech Separation for ASR | — | 0
Cross-Modal Transformer-Based Neural Correction Models for Automatic Speech Recognition | — | 0
Unified Autoregressive Modeling for Joint End-to-End Multi-Talker Overlapped Speech Recognition and Speaker Attribute Estimation | — | 0
Arabic Code-Switching Speech Recognition using Monolingual Data | — | 0
Dual Causal/Non-Causal Self-Attention for Streaming End-to-End Speech Recognition | — | 0
Multi-user VoiceFilter-Lite via Attentive Speaker Embedding | — | 0
StableEmit: Selection Probability Discount for Reducing Emission Latency of Streaming Monotonic Attention ASR | — | 0
Word-Free Spoken Language Understanding for Mandarin-Chinese | — | 0
Improving Named Entity Recognition in Spoken Dialog Systems by Context and Speech Pattern Modeling | — | 0
SmarTerp: A CAI System to Support Simultaneous Interpreters in Real-Time | — | 0
Pretext Tasks selection for multitask self-supervised speech representation learning | Code | 0
IMS' Systems for the IWSLT 2021 Low-Resource Speech Translation Task | — | 0
Page 73 of 121

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | TM-CTC | Test WER | 10.1 | — | Unverified
2 | TM-seq2seq | Test WER | 9.7 | — | Unverified
3 | CTC/attention | Test WER | 8.2 | — | Unverified
4 | LF-MMI TDNN | Test WER | 6.7 | — | Unverified
5 | Whisper-LLaMA | Test WER | 6.6 | — | Unverified
6 | End2end Conformer | Test WER | 3.9 | — | Unverified
7 | End2end Conformer | Test WER | 3.7 | — | Unverified
8 | MoCo + wav2vec (w/o extLM) | Test WER | 2.7 | — | Unverified
9 | CTC/Attention | Test WER | 1.5 | — | Unverified
10 | Whisper | Test WER | 1.3 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SpatialNet | CER | 14.5 | — | Unverified
2 | CleanMel-L-mask | CER | 14.4 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Conformer | Test WER | 15.32 | — | Unverified
2 | Whisper-largev3-finetuned | Test WER | 10.82 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Conformer Transducer | WER (%) | 1.89 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DistillAV | WER | 1.4 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Conformer Transducer | WER (%) | 4.28 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Conformer Transducer | WER (%) | 8.04 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Conformer Transducer | WER (%) | 3.36 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Conformer Transducer (German) | WER (%) | 8.98 | — | Unverified