SOTAVerified

Automatic Speech Recognition (ASR)

Automatic Speech Recognition (ASR) is the task of converting spoken language into written text, often in real time, so that people can interact with computers, mobile devices, and other technology by voice. An ASR system aims to transcribe speech accurately despite variation in accent, pronunciation, and speaking style, as well as background noise and other factors that degrade speech quality.

Papers

Showing 2601-2625 of 3012 papers

| Title | Status | Hype |
|-------|--------|------|
| Targeted Adversarial Examples for Black Box Audio Systems | | 0 |
| Training Neural Speech Recognition Systems with Synthetic Speech Augmentation | | 0 |
| Exploring Textual and Speech information in Dialogue Act Classification with Speaker Domain Adaptation | | 0 |
| Speech Recognition with Quaternion Neural Networks | | 0 |
| Robust Neural Machine Translation with Joint Textual and Phonetic Embedding | | 0 |
| Listening Comprehension over Argumentative Content | | 0 |
| Using Spoken Word Posterior Features in Neural Machine Translation | | 0 |
| Neural Speech Translation at AppTek | | 0 |
| Acoustic Word Disambiguation with Phonological Features in Danish ASR | | 0 |
| Research Challenges in Building a Voice-based Artificial Personal Shopper - Position Paper | | 0 |
| Words Worth: Verbal Content and Hirability Impressions in YouTube Video Resumes | | 0 |
| Investigating Acoustic Model Combination and Semi-Supervised Discriminative Training for Meeting Speech Recognition [In Chinese] | | 0 |
| A Self-Attentive Model with Gate Mechanism for Spoken Language Understanding | | 0 |
| Improving Neural Language Models with Weight Norm Initialization and Regularization | | 0 |
| On the Use of Speaker-Aware Language Model Adaptation Techniques for Meeting Speech Recognition [In Chinese] | | 0 |
| The AFRL IWSLT 2018 Systems: What Worked, What Didn’t | | 0 |
| The Sogou-TIIC Speech Translation System for IWSLT 2018 | | 0 |
| Audio-Visual Speech Recognition With A Hybrid CTC/Attention Architecture | | 0 |
| Characterizing Audio Adversarial Examples Using Temporal Dependency | | 0 |
| End-to-End Multi-Lingual Multi-Speaker Speech Recognition | | 0 |
| Hindi-English Code-Switching Speech Corpus | | 0 |
| From Audio to Semantics: Approaches to end-to-end spoken language understanding | | 0 |
| End-to-end Audiovisual Speech Activity Detection with Bimodal Recurrent Neural Models | | 0 |
| Isolated and Ensemble Audio Preprocessing Methods for Detecting Adversarial Examples against Automatic Speech Recognition | | 0 |
| Pre-training on high-resource speech recognition improves low-resource speech-to-text translation | Code | 0 |
Page 105 of 121

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TM-CTC | Test WER | 10.1 | | Unverified |
| 2 | TM-seq2seq | Test WER | 9.7 | | Unverified |
| 3 | CTC/attention | Test WER | 8.2 | | Unverified |
| 4 | LF-MMI TDNN | Test WER | 6.7 | | Unverified |
| 5 | Whisper-LLaMA | Test WER | 6.6 | | Unverified |
| 6 | End2end Conformer | Test WER | 3.9 | | Unverified |
| 7 | End2end Conformer | Test WER | 3.7 | | Unverified |
| 8 | MoCo + wav2vec (w/o extLM) | Test WER | 2.7 | | Unverified |
| 9 | CTC/Attention | Test WER | 1.5 | | Unverified |
| 10 | Whisper | Test WER | 1.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SpatialNet | CER | 14.5 | | Unverified |
| 2 | CleanMel-L-mask | CER | 14.4 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Conformer | Test WER | 15.32 | | Unverified |
| 2 | Whisper-largev3-finetuned | Test WER | 10.82 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Conformer Transducer | WER (%) | 1.89 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | DistillAV | WER | 1.4 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Conformer Transducer | WER (%) | 4.28 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Conformer Transducer | WER (%) | 8.04 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Conformer Transducer | WER (%) | 3.36 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Conformer Transducer (German) | WER (%) | 8.98 | | Unverified |
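The WER and CER figures in the tables above are edit-distance metrics: the minimum number of word-level (for WER) or character-level (for CER) insertions, deletions, and substitutions needed to turn the hypothesis transcript into the reference, divided by the reference length. A minimal sketch of how such scores are computed (function names are illustrative, not taken from any verification tooling):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences via dynamic programming."""
    n = len(hyp)
    dp = list(range(n + 1))  # distances against an empty reference prefix
    for i in range(1, len(ref) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(
                dp[j] + 1,                            # deletion
                dp[j - 1] + 1,                        # insertion
                prev + (ref[i - 1] != hyp[j - 1]),    # substitution (0 if equal)
            )
            prev = cur
    return dp[n]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance over reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / len(ref)

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: character-level edit distance, spaces ignored."""
    ref = reference.replace(" ", "")
    hyp = hypothesis.replace(" ", "")
    return edit_distance(ref, hyp) / len(ref)
```

For example, `wer("the cat sat on the mat", "the cat sat on mat")` is 1/6: one deletion against a six-word reference. Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why leaderboards report it as a percentage rather than an accuracy.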