SOTAVerified

Automatic Speech Recognition (ASR)

Automatic Speech Recognition (ASR) converts spoken language into written text. ASR systems transcribe speech, often in real time, letting people interact with computers, mobile devices, and other technology by voice. The goal is to transcribe speech accurately despite variations in accent, pronunciation, and speaking style, as well as background noise and other factors that degrade audio quality.

Papers

Showing 1551–1575 of 3012 papers

Title | Status | Hype
Multi-Channel Multi-Speaker ASR Using 3D Spatial Feature |  | 0
Multi-Channel Multi-Speaker ASR Using Target Speaker's Solo Segment |  | 0
Multi-channel Opus compression for far-field automatic speech recognition with a fixed bitrate budget |  | 0
Multi-Convformer: Extending Conformer with Multiple Convolution Kernels |  | 0
Multi-Dialect Arabic Speech Recognition |  | 0
Multi-Encoder Learning and Stream Fusion for Transformer-Based End-to-End Automatic Speech Recognition |  | 0
Multi-encoder multi-resolution framework for end-to-end speech recognition |  | 0
Multi-Geometry Spatial Acoustic Modeling for Distant Speech Recognition |  | 0
Multi-Graph Decoding for Code-Switching ASR |  | 0
Multi-Level Modeling Units for End-to-End Mandarin Speech Recognition |  | 0
Multilingual Contextual Adapters To Improve Custom Word Recognition In Low-resource Languages |  | 0
Multilingual End-to-End Speech Recognition with A Single Transformer on Low-Resource Languages |  | 0
Multilingual End-to-End Speech Translation |  | 0
Multilingual Speech Recognition using Knowledge Transfer across Learning Processes |  | 0
Multilingual Speech Recognition With A Single End-To-End Model |  | 0
Multilingual Training and Cross-lingual Adaptation on CTC-based Acoustic Model |  | 0
Multilingual Transfer Learning for Children Automatic Speech Recognition |  | 0
Multimodal and Multiresolution Speech Recognition with Transformers |  | 0
Multimodal Attention Merging for Improved Speech Recognition and Audio Event Classification |  | 0
Multimodal Audio-textual Architecture for Robust Spoken Language Understanding |  | 0
Multimodal Corpora for Silent Speech Interaction |  | 0
Multi-Modal Data Augmentation for End-to-End ASR |  | 0
Multimodal Depression Classification Using Articulatory Coordination Features And Hierarchical Attention Based Text Embeddings |  | 0
Multi-modal embeddings using multi-task learning for emotion recognition |  | 0
Page 63 of 121

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | TM-CTC | Test WER | 10.1 |  | Unverified
2 | TM-seq2seq | Test WER | 9.7 |  | Unverified
3 | CTC/attention | Test WER | 8.2 |  | Unverified
4 | LF-MMI TDNN | Test WER | 6.7 |  | Unverified
5 | Whisper-LLaMA | Test WER | 6.6 |  | Unverified
6 | End2end Conformer | Test WER | 3.9 |  | Unverified
7 | End2end Conformer | Test WER | 3.7 |  | Unverified
8 | MoCo + wav2vec (w/o extLM) | Test WER | 2.7 |  | Unverified
9 | CTC/Attention | Test WER | 1.5 |  | Unverified
10 | Whisper | Test WER | 1.3 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SpatialNet | CER | 14.5 |  | Unverified
2 | CleanMel-L-mask | CER | 14.4 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Conformer | Test WER | 15.32 |  | Unverified
2 | Whisper-largev3-finetuned | Test WER | 10.82 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Conformer Transducer | WER (%) | 1.89 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DistillAV | WER | 1.4 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Conformer Transducer | WER (%) | 4.28 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Conformer Transducer | WER (%) | 8.04 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Conformer Transducer | WER (%) | 3.36 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Conformer Transducer (German) | WER (%) | 8.98 |  | Unverified
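The WER and CER figures in the tables above are edit-distance metrics: Levenshtein distance between the reference and hypothesis transcripts, normalized by reference length, computed over words (WER) or characters (CER). A minimal, self-contained sketch (function names are illustrative, not taken from this site's scoring pipeline):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (single-row DP)."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            # deletion, insertion, substitution (or match)
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (r != h))
    return dp[-1]

def wer(reference, hypothesis):
    """Word error rate: word-level edits / reference word count."""
    ref = reference.split()
    return edit_distance(ref, hypothesis.split()) / len(ref)

def cer(reference, hypothesis):
    """Character error rate: character-level edits / reference length."""
    return edit_distance(list(reference), list(hypothesis)) / len(reference)
```

Leaderboard numbers are usually reported as percentages, so a "Test WER" of 3.9 corresponds to `wer(...) == 0.039` on the test set; production scoring also typically normalizes casing and punctuation first, which this sketch omits.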