SOTAVerified

Automatic Speech Recognition (ASR)

Automatic Speech Recognition (ASR) converts spoken language into written text, often in real time, allowing people to interact with computers, mobile devices, and other technology by voice. The goal of ASR is to transcribe speech accurately despite variations in accent, pronunciation, and speaking style, as well as background noise and other factors that degrade audio quality.

Papers

Showing 2751–2775 of 3012 papers

| Title | Status | Hype |
|---|---|---|
| A Text Normalisation System for Non-Standard English Words | | 0 |
| A Text-to-Speech Pipeline, Evaluation Methodology, and Initial Fine-Tuning Results for Child Speech Synthesis | | 0 |
| A Transfer Learning Method for Speech Emotion Recognition from Automatic Speech Recognition | | 0 |
| Attacks as Defenses: Designing Robust Audio CAPTCHAs Using Attacks on Automatic Speech Recognition Systems | | 0 |
| Attention-based ASR with Lightweight and Dynamic Convolutions | | 0 |
| Attention based end to end Speech Recognition for Voice Search in Hindi and English | | 0 |
| Attention based on-device streaming speech recognition with large speech corpus | | 0 |
| Attention-based Wav2Text with Feature Transfer Learning | | 0 |
| Attention Enhanced Citrinet for Speech Recognition | | 0 |
| Attentive Adversarial Learning for Domain-Invariant Training | | 0 |
| Attentive listening system with backchanneling, response generation and flexible turn-taking | | 0 |
| A two-stage transliteration approach to improve performance of a multilingual ASR | | 0 |
| A two-step approach to leverage contextual data: speech recognition in air-traffic communications | | 0 |
| Audio Adversarial Examples for Robust Hybrid CTC/Attention Speech Recognition | | 0 |
| Audio-attention discriminative language model for ASR rescoring | | 0 |
| Audio-conditioned phonemic and prosodic annotation for building text-to-speech models from unlabeled speech data | | 0 |
| Audio De-identification: A New Entity Recognition Task | | 0 |
| Audio De-identification - a New Entity Recognition Task | | 0 |
| Audio Enhancement for Computer Audition -- An Iterative Training Paradigm Using Sample Importance | | 0 |
| Audio-visual Multi-channel Integration and Recognition of Overlapped Speech | | 0 |
| Audio-visual Multi-channel Recognition of Overlapped Speech | | 0 |
| Audio-visual multi-channel speech separation, dereverberation and recognition | | 0 |
| Audio-visual Recognition of Overlapped speech for the LRS2 dataset | | 0 |
| Audio-Visual Speech Enhancement and Separation by Utilizing Multi-Modal Self-Supervised Embeddings | | 0 |
| Audio-Visual Speech Recognition is Worth 32x32x8 Voxels | | 0 |
Page 111 of 121

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | TM-CTC | Test WER | 10.1 | | Unverified |
| 2 | TM-seq2seq | Test WER | 9.7 | | Unverified |
| 3 | CTC/attention | Test WER | 8.2 | | Unverified |
| 4 | LF-MMI TDNN | Test WER | 6.7 | | Unverified |
| 5 | Whisper-LLaMA | Test WER | 6.6 | | Unverified |
| 6 | End2end Conformer | Test WER | 3.9 | | Unverified |
| 7 | End2end Conformer | Test WER | 3.7 | | Unverified |
| 8 | MoCo + wav2vec (w/o extLM) | Test WER | 2.7 | | Unverified |
| 9 | CTC/Attention | Test WER | 1.5 | | Unverified |
| 10 | Whisper | Test WER | 1.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SpatialNet | CER | 14.5 | | Unverified |
| 2 | CleanMel-L-mask | CER | 14.4 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Conformer | Test WER | 15.32 | | Unverified |
| 2 | Whisper-largev3-finetuned | Test WER | 10.82 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Conformer Transducer | WER (%) | 1.89 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DistillAV | WER | 1.4 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Conformer Transducer | WER (%) | 4.28 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Conformer Transducer | WER (%) | 8.04 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Conformer Transducer | WER (%) | 3.36 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Conformer Transducer (German) | WER (%) | 8.98 | | Unverified |
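The WER and CER figures in the tables above are edit-distance metrics: the minimum number of substitutions, insertions, and deletions needed to turn the hypothesis transcript into the reference, divided by the reference length (in words for WER, in characters for CER). A minimal sketch of how these are computed (function names are illustrative, not taken from any specific toolkit):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences, single-row dynamic programming."""
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))  # distances for the empty reference prefix
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i  # prev holds the diagonal cell from the previous row
        for j in range(1, n + 1):
            cur = dp[j]
            if ref[i - 1] == hyp[j - 1]:
                dp[j] = prev  # match: no edit needed
            else:
                # 1 + min(substitution, deletion, insertion)
                dp[j] = 1 + min(prev, dp[j], dp[j - 1])
            prev = cur
    return dp[n]

def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference word count."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    """Character error rate: character-level edit distance / reference length."""
    return edit_distance(reference, hypothesis) / len(reference)
```

Note that because the denominator is the reference length, WER can exceed 1.0 (100%) when the hypothesis contains many insertions; leaderboard WER values are often reported as percentages, as in the "WER (%)" tables above.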