SOTAVerified

Automatic Speech Recognition (ASR)

Automatic Speech Recognition (ASR) is the task of converting spoken language into written text. ASR systems transcribe speech, often in real time, allowing people to interact with computers, mobile devices, and other technology using their voice. The goal is to transcribe speech accurately despite variation in accent, pronunciation, and speaking style, as well as background noise and other factors that degrade speech quality.

Papers

Showing 101–125 of 3,012 papers

Title | Status | Hype
Deep Contextualized Acoustic Representations For Semi-Supervised Speech Recognition | Code | 1
Automatic Speech Recognition Benchmark for Air-Traffic Communications | Code | 1
Automatic Speech Recognition in Sanskrit: A New Speech Corpus and Modelling Insights | Code | 1
Automatic speech recognition for the Nepali language using CNN, bidirectional LSTM and ResNet | Code | 1
A Variance-Preserving Interpolation Approach for Diffusion Models with Applications to Single Channel Speech Enhancement and Recognition | Code | 1
Adaptation of Whisper models to child speech recognition | Code | 1
AV Taris: Online Audio-Visual Speech Recognition | Code | 1
Back Translation for Speech-to-text Translation Without Transcripts | Code | 1
Adapting End-to-End Speech Recognition for Readable Subtitles | Code | 1
DUAL: Discrete Spoken Unit Adaptive Learning for Textless Spoken Question Answering | Code | 1
Deep Sparse Conformer for Speech Recognition | Code | 1
BENDR: using transformers and a contrastive self-supervised learning task to learn from massive amounts of EEG data | Code | 1
A Sidecar Separator Can Convert a Single-Talker Speech Recognition System to a Multi-Talker One | Code | 1
ASR data augmentation in low-resource settings using cross-lingual multi-speaker TTS and cross-lingual voice conversion | Code | 1
End-to-end Audio-visual Speech Recognition with Conformers | Code | 1
End-to-End Automatic Speech Recognition for Gujarati | Code | 1
End-to-End Speech Recognition and Disfluency Removal | Code | 1
End-to-End Speech Recognition from Federated Acoustic Models | Code | 1
Can Contextual Biasing Remain Effective with Whisper and GPT-2? | Code | 1
Brazilian Portuguese Speech Recognition Using Wav2vec 2.0 | Code | 1
Brouhaha: multi-task training for voice activity detection, speech-to-noise ratio, and C50 room acoustics estimation | Code | 1
Evolutionary Prompt Design for LLM-Based Post-ASR Error Correction | Code | 1
data2vec-aqc: Search for the right Teaching Assistant in the Teacher-Student training setup | Code | 1
Can we use Common Voice to train a Multi-Speaker TTS system? | Code | 1
ArTST: Arabic Text and Speech Transformer | Code | 1
Page 5 of 121

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | TM-CTC | Test WER | 10.1 | - | Unverified
2 | TM-seq2seq | Test WER | 9.7 | - | Unverified
3 | CTC/attention | Test WER | 8.2 | - | Unverified
4 | LF-MMI TDNN | Test WER | 6.7 | - | Unverified
5 | Whisper-LLaMA | Test WER | 6.6 | - | Unverified
6 | End2end Conformer | Test WER | 3.9 | - | Unverified
7 | End2end Conformer | Test WER | 3.7 | - | Unverified
8 | MoCo + wav2vec (w/o extLM) | Test WER | 2.7 | - | Unverified
9 | CTC/Attention | Test WER | 1.5 | - | Unverified
10 | Whisper | Test WER | 1.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SpatialNet | CER | 14.5 | - | Unverified
2 | CleanMel-L-mask | CER | 14.4 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Conformer | Test WER | 15.32 | - | Unverified
2 | Whisper-largev3-finetuned | Test WER | 10.82 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Conformer Transducer | WER (%) | 1.89 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DistillAV | WER | 1.4 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Conformer Transducer | WER (%) | 4.28 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Conformer Transducer | WER (%) | 8.04 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Conformer Transducer | WER (%) | 3.36 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Conformer Transducer (German) | WER (%) | 8.98 | - | Unverified
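The tables above report word error rate (WER) and character error rate (CER). As a point of reference, here is a minimal sketch of how WER is typically computed: the Levenshtein edit distance between the reference and hypothesis word sequences, divided by the number of reference words. (This is an illustrative implementation, not the scoring code used by any of the listed benchmarks; CER is the same calculation over characters instead of words.)

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (single-row DP)."""
    dp = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev = dp[0]          # dp value from the diagonal (previous row, j-1)
        dp[0] = i
        for j in range(1, len(hyp) + 1):
            cur = dp[j]
            dp[j] = min(
                dp[j] + 1,                          # deletion
                dp[j - 1] + 1,                      # insertion
                prev + (ref[i - 1] != hyp[j - 1]),  # substitution (0 if match)
            )
            prev = cur
    return dp[-1]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance over reference length."""
    ref_words = reference.split()
    hyp_words = hypothesis.split()
    return edit_distance(ref_words, hyp_words) / len(ref_words)
```

A WER of 10.1 in the first table thus means roughly one word-level error (substitution, insertion, or deletion) per ten reference words.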