SOTAVerified

Automatic Speech Recognition (ASR)

Automatic Speech Recognition (ASR) converts spoken language into written text. ASR systems transcribe speech, often in real time, letting people interact with computers, mobile devices, and other technology by voice. The goal is to transcribe speech accurately despite variations in accent, pronunciation, and speaking style, as well as background noise and other factors that degrade audio quality.
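The benchmark tables below score systems by word error rate (WER) and character error rate (CER): the edit distance between a reference transcript and the system's hypothesis, divided by the reference length. A minimal, dependency-free sketch of both metrics (function names are illustrative, not taken from any particular toolkit):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (insertions,
    deletions, and substitutions each cost 1)."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i          # delete all of ref[:i]
    for j in range(n + 1):
        d[0][j] = j          # insert all of hyp[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[m][n]

def wer(reference, hypothesis):
    """Word error rate: word-level edit distance over reference word count."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    """Character error rate: the same formula at the character level."""
    return edit_distance(list(reference), list(hypothesis)) / len(reference)
```

Read as percentages, a "Test WER" of 10.1 in the tables below means roughly one word error per ten reference words; CER is the usual choice for languages such as Mandarin, where word segmentation is ambiguous.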

Papers

Showing 2101–2150 of 3012 papers

| Title | Status | Hype |
| --- | --- | --- |
| Modeling Acoustic-Prosodic Cues for Word Importance Prediction in Spoken Dialogues | | 0 |
| Modeling Concept Dependencies in a Scientific Corpus | | 0 |
| Modeling Confidence in Sequence-to-Sequence Models | | 0 |
| Modeling Dependent Structure for Utterances in ASR Evaluation | | 0 |
| Modeling State-Conditional Observation Distribution using Weighted Stereo Samples for Factorial Speech Processing Models | | 0 |
| Modelling prosodic structure using Artificial Neural Networks | | 0 |
| Modular End-to-end Automatic Speech Recognition Framework for Acoustic-to-word Model | | 0 |
| MoLE : Mixture of Language Experts for Multi-Lingual Automatic Speech Recognition | | 0 |
| Monaural Multi-Talker Speech Recognition using Factorial Speech Processing Models | | 0 |
| Mondegreen: A Post-Processing Solution to Speech Recognition Error Correction for Voice Search Queries | | 0 |
| Monolingual Recognizers Fusion for Code-switching Speech Recognition | | 0 |
| Monotonic segmental attention for automatic speech recognition | | 0 |
| More Speaking or More Speakers? | | 0 |
| Motivations, challenges, and perspectives for the development of an Automatic Speech Recognition System for the under-resourced Ngiemboon Language | | 0 |
| MSDA: Combining Pseudo-labeling and Self-Supervision for Unsupervised Domain Adaptation in ASR | | 0 |
| MS-HuBERT: Mitigating Pre-training and Inference Mismatch in Masked Language Modelling methods for learning Speech Representations | | 0 |
| MSR-86K: An Evolving, Multilingual Corpus with 86,300 Hours of Transcribed Audio for Speech Recognition Research | | 0 |
| MT2KD: Towards A General-Purpose Encoder for Speech, Speaker, and Audio Events | | 0 |
| MTLM: Incorporating Bidirectional Text Information to Enhance Language Model Training in Speech Recognition Systems | | 0 |
| MTL-SLT: Multi-Task Learning for Spoken Language Tasks | | 0 |
| Mu^2SLAM: Multitask, Multilingual Speech and Language Models | | 0 |
| Multi-channel Conversational Speaker Separation via Neural Diarization | | 0 |
| Multi-channel Multi-frame ADL-MVDR for Target Speech Separation | | 0 |
| Multi-Channel Multi-Speaker ASR Using 3D Spatial Feature | | 0 |
| Multi-Channel Multi-Speaker ASR Using Target Speaker's Solo Segment | | 0 |
| Multi-channel Opus compression for far-field automatic speech recognition with a fixed bitrate budget | | 0 |
| Multi-Convformer: Extending Conformer with Multiple Convolution Kernels | | 0 |
| Multi-Dialect Arabic Speech Recognition | | 0 |
| Multi-Encoder Learning and Stream Fusion for Transformer-Based End-to-End Automatic Speech Recognition | | 0 |
| Multi-encoder multi-resolution framework for end-to-end speech recognition | | 0 |
| Multi-Geometry Spatial Acoustic Modeling for Distant Speech Recognition | | 0 |
| Multi-Graph Decoding for Code-Switching ASR | | 0 |
| Multi-Level Modeling Units for End-to-End Mandarin Speech Recognition | | 0 |
| Multilingual Contextual Adapters To Improve Custom Word Recognition In Low-resource Languages | | 0 |
| Multilingual End-to-End Speech Recognition with A Single Transformer on Low-Resource Languages | | 0 |
| Multilingual End-to-End Speech Translation | | 0 |
| Multilingual Speech Recognition using Knowledge Transfer across Learning Processes | | 0 |
| Multilingual Speech Recognition With A Single End-To-End Model | | 0 |
| Multilingual Training and Cross-lingual Adaptation on CTC-based Acoustic Model | | 0 |
| Multilingual Transfer Learning for Children Automatic Speech Recognition | | 0 |
| Multimodal and Multiresolution Speech Recognition with Transformers | | 0 |
| Multimodal Attention Merging for Improved Speech Recognition and Audio Event Classification | | 0 |
| Multimodal Audio-textual Architecture for Robust Spoken Language Understanding | | 0 |
| Multimodal Audio-textual Architecture for Robust Spoken Language Understanding | | 0 |
| Multimodal Corpora for Silent Speech Interaction | | 0 |
| Multi-Modal Data Augmentation for End-to-End ASR | | 0 |
| Multimodal Depression Classification Using Articulatory Coordination Features And Hierarchical Attention Based Text Embeddings | | 0 |
| Multi-modal embeddings using multi-task learning for emotion recognition | | 0 |
| Multimodal Punctuation Prediction with Contextual Dropout | | 0 |
| Multimodal Short Video Rumor Detection System Based on Contrastive Learning | | 0 |
Page 43 of 61

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | TM-CTC | Test WER | 10.1 | | Unverified |
| 2 | TM-seq2seq | Test WER | 9.7 | | Unverified |
| 3 | CTC/attention | Test WER | 8.2 | | Unverified |
| 4 | LF-MMI TDNN | Test WER | 6.7 | | Unverified |
| 5 | Whisper-LLaMA | Test WER | 6.6 | | Unverified |
| 6 | End2end Conformer | Test WER | 3.9 | | Unverified |
| 7 | End2end Conformer | Test WER | 3.7 | | Unverified |
| 8 | MoCo + wav2vec (w/o extLM) | Test WER | 2.7 | | Unverified |
| 9 | CTC/Attention | Test WER | 1.5 | | Unverified |
| 10 | Whisper | Test WER | 1.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SpatialNet | CER | 14.5 | | Unverified |
| 2 | CleanMel-L-mask | CER | 14.4 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Conformer | Test WER | 15.32 | | Unverified |
| 2 | Whisper-largev3-finetuned | Test WER | 10.82 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Conformer Transducer | WER (%) | 1.89 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | DistillAV | WER | 1.4 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Conformer Transducer | WER (%) | 4.28 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Conformer Transducer | WER (%) | 8.04 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Conformer Transducer | WER (%) | 3.36 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Conformer Transducer (German) | WER (%) | 8.98 | | Unverified |