SOTAVerified

Automatic Speech Recognition (ASR)

Automatic Speech Recognition (ASR) converts spoken language into written text, often in real time, letting people interact with computers, mobile devices, and other technology by voice. The goal is to transcribe speech accurately despite variations in accent, pronunciation, and speaking style, as well as background noise and other factors that degrade audio quality.

Papers

Showing 51–75 of 3012 papers

Title | Status | Hype
DiCoW: Diarization-Conditioned Whisper for Target Speaker Automatic Speech Recognition | Code | 2
CleanMel: Mel-Spectrogram Enhancement for Improving Both Speech Quality and ASR | Code | 2
emg2qwerty: A Large Dataset with Baselines for Touch Typing using Surface Electromyography | Code | 2
Dialectal Coverage And Generalization in Arabic Speech Recognition | Code | 2
Large Language Models are Efficient Learners of Noise-Robust Speech Recognition | Code | 2
Continual Test-time Adaptation for End-to-end Speech Recognition on Noisy Speech | Code | 1
Confidence Estimation for Attention-based Sequence-to-sequence Models for Speech Recognition | Code | 1
Consistent Training and Decoding For End-to-end Speech Recognition Using Lattice-free MMI | Code | 1
Continuous speech separation: dataset and analysis | Code | 1
Combining Frame-Synchronous and Label-Synchronous Systems for Speech Recognition | Code | 1
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks | Code | 1
ContextNet: Improving Convolutional Neural Networks for Automatic Speech Recognition with Global Context | Code | 1
Common Voice: A Massively-Multilingual Speech Corpus | Code | 1
ClovaCall: Korean Goal-Oriented Dialog Speech Corpus for Automatic Speech Recognition of Contact Centers | Code | 1
Framework for Curating Speech Datasets and Evaluating ASR Systems: A Case Study for Polish | Code | 1
Complex Dynamic Neurons Improved Spiking Transformer Network for Efficient Automatic Speech Recognition | Code | 1
Controlling Whisper: Universal Acoustic Adversarial Attacks to Control Speech Foundation Models | Code | 1
Can we use Common Voice to train a Multi-Speaker TTS system? | Code | 1
Can Contextual Biasing Remain Effective with Whisper and GPT-2? | Code | 1
CCC-wav2vec 2.0: Clustering aided Cross Contrastive Self-supervised learning of speech representations | Code | 1
Brazilian Portuguese Speech Recognition Using Wav2vec 2.0 | Code | 1
Brouhaha: multi-task training for voice activity detection, speech-to-noise ratio, and C50 room acoustics estimation | Code | 1
BembaSpeech: A Speech Recognition Corpus for the Bemba Language | Code | 1
BENDR: using transformers and a contrastive self-supervised learning task to learn from massive amounts of EEG data | Code | 1
Back Translation for Speech-to-text Translation Without Transcripts | Code | 1
Page 3 of 121

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | TM-CTC | Test WER | 10.1 | - | Unverified
2 | TM-seq2seq | Test WER | 9.7 | - | Unverified
3 | CTC/attention | Test WER | 8.2 | - | Unverified
4 | LF-MMI TDNN | Test WER | 6.7 | - | Unverified
5 | Whisper-LLaMA | Test WER | 6.6 | - | Unverified
6 | End2end Conformer | Test WER | 3.9 | - | Unverified
7 | End2end Conformer | Test WER | 3.7 | - | Unverified
8 | MoCo + wav2vec (w/o extLM) | Test WER | 2.7 | - | Unverified
9 | CTC/Attention | Test WER | 1.5 | - | Unverified
10 | Whisper | Test WER | 1.3 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SpatialNet | CER | 14.5 | - | Unverified
2 | CleanMel-L-mask | CER | 14.4 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Conformer | Test WER | 15.32 | - | Unverified
2 | Whisper-largev3-finetuned | Test WER | 10.82 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Conformer Transducer | WER (%) | 1.89 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | DistillAV | WER | 1.4 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Conformer Transducer | WER (%) | 4.28 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Conformer Transducer | WER (%) | 8.04 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Conformer Transducer | WER (%) | 3.36 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Conformer Transducer (German) | WER (%) | 8.98 | - | Unverified
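The benchmark figures above are word error rate (WER) and character error rate (CER): the Levenshtein edit distance between the hypothesis and the reference transcript, normalized by the reference length. As a quick reference, here is a minimal sketch of that computation; the helper names are my own, not part of any benchmark's tooling.

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (iterative DP)."""
    prev = list(range(len(hyp) + 1))  # distances from ref[:0] to each hyp prefix
    for i, r in enumerate(ref, 1):
        cur = [i]  # distance from ref[:i] to the empty hypothesis
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution (free if equal)
        prev = cur
    return prev[-1]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: edit distance over words / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / len(ref)

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: the same computation over characters."""
    return edit_distance(list(reference), list(hypothesis)) / len(reference)
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why leaderboards often report it as a percentage, as several tables above do.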