SOTAVerified

Automatic Speech Recognition (ASR)

Automatic Speech Recognition (ASR) converts spoken language into written text, often in real time, letting people interact with computers, mobile devices, and other technology by voice. The goal is to transcribe speech accurately despite variation in accent, pronunciation, and speaking style, as well as background noise and other factors that degrade audio quality.

Papers

Showing 1–50 of 3012 papers

Title | Status | Hype
GLM-4-Voice: Towards Intelligent and Human-Like End-to-End Spoken Chatbot | Code | 7
Scaling Speech-Text Pre-training with Synthetic Interleaved Data | Code | 7
Qwen2.5-Omni Technical Report | Code | 7
PaddleSpeech: An Easy-to-Use All-in-One Speech Toolkit | Code | 6
StreamSpeech: Simultaneous Speech-to-Speech Translation with Multi-task Learning | Code | 5
FireRedASR: Open-Source Industrial-Grade Mandarin Speech Recognition Models from Encoder-Decoder to LLM Integration | Code | 5
SpeechColab Leaderboard: An Open-Source Platform for Automatic Speech Recognition Evaluation | Code | 4
VITA-Audio: Fast Interleaved Cross-Modal Token Generation for Efficient Large Speech-Language Model | Code | 4
Dolphin: A Large-Scale Automatic Speech Recognition Model for Eastern Languages | Code | 4
Qwen-Audio: Advancing Universal Audio Understanding via Unified Large-Scale Audio-Language Models | Code | 3
TED-LIUM 3: twice as much data and corpus repartition for experiments on speaker adaptation | Code | 3
Self-Taught Recognizer: Toward Unsupervised Adaptation for Speech Foundation Models | Code | 3
Conformer: Convolution-augmented Transformer for Speech Recognition | Code | 3
DiarizationLM: Speaker Diarization Post-Processing with Large Language Models | Code | 3
MooER: LLM-based Speech Recognition and Translation Models from Moore Threads | Code | 3
Sentiment Reasoning for Healthcare | Code | 3
Fast-MD: Fast Multi-Decoder End-to-End Speech Translation with Non-Autoregressive Hidden Intermediates | Code | 3
Whisper-Flamingo: Integrating Visual Features into Whisper for Audio-Visual Speech Recognition and Translation | Code | 3
WhisperNER: Unified Open Named Entity and Speech Recognition | Code | 3
VoiceBench: Benchmarking LLM-Based Voice Assistants | Code | 3
Delay-penalized transducer for low-latency streaming ASR | Code | 3
A Parallelizable Lattice Rescoring Strategy with Neural Language Models | Code | 3
Voila: Voice-Language Foundation Models for Real-Time Autonomous Interaction and Voice Role-Play | Code | 3
Towards A Unified Conformer Structure: from ASR to ASV Task | Code | 2
Streaming Keyword Spotting Boosted by Cross-layer Discrimination Consistency | Code | 2
Paralinguistics-Aware Speech-Empowered Large Language Models for Natural Conversation | Code | 2
SoundSpaces 2.0: A Simulation Platform for Visual-Acoustic Learning | Code | 2
An Embarrassingly Simple Approach for LLM with Strong ASR Capacity | Code | 2
AIR-Bench: Benchmarking Large Audio-Language Models via Generative Comprehension | Code | 2
TEVR: Improving Speech Recognition by Token Entropy Variance Reduction | Code | 2
Recent Advances in Speech Language Models: A Survey | Code | 2
Robust Self-Supervised Audio-Visual Speech Recognition | Code | 2
Squeezeformer: An Efficient Transformer for Automatic Speech Recognition | Code | 2
NusaCrowd: Open Source Initiative for Indonesian NLP Resources | Code | 2
LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation | Code | 2
PixIT: Joint Training of Speaker Diarization and Speech Separation from Real-world Multi-speaker Recordings | Code | 2
Learning Audio-Visual Speech Representation by Masked Multimodal Cluster Prediction | Code | 2
Large Language Model Can Transcribe Speech in Multi-Talker Scenarios with Versatile Instructions | Code | 2
Let's Fuse Step by Step: A Generative Fusion Decoding Algorithm with LLMs for Multi-modal Text Recognition | Code | 2
Pretraining End-to-End Keyword Search with Automatically Discovered Acoustic Units | Code | 2
Large Language Models are Efficient Learners of Noise-Robust Speech Recognition | Code | 2
Large Language Models are Strong Audio-Visual Speech Recognition Learners | Code | 2
Auto-AVSR: Audio-Visual Speech Recognition with Automatic Labels | Code | 2
LibriSpeech-PC: Benchmark for Evaluation of Punctuation and Capitalization Capabilities of end-to-end ASR Models | Code | 2
emg2qwerty: A Large Dataset with Baselines for Touch Typing using Surface Electromyography | Code | 2
Dialectal Coverage And Generalization in Arabic Speech Recognition | Code | 2
DiCoW: Diarization-Conditioned Whisper for Target Speaker Automatic Speech Recognition | Code | 2
Fast Transformers with Clustered Attention | Code | 2
4-bit Conformer with Native Quantization Aware Training for Speech Recognition | Code | 2
CMGAN: Conformer-based Metric GAN for Speech Enhancement | Code | 2

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | TM-CTC | Test WER | 10.1 | | Unverified
2 | TM-seq2seq | Test WER | 9.7 | | Unverified
3 | CTC/attention | Test WER | 8.2 | | Unverified
4 | LF-MMI TDNN | Test WER | 6.7 | | Unverified
5 | Whisper-LLaMA | Test WER | 6.6 | | Unverified
6 | End2end Conformer | Test WER | 3.9 | | Unverified
7 | End2end Conformer | Test WER | 3.7 | | Unverified
8 | MoCo + wav2vec (w/o extLM) | Test WER | 2.7 | | Unverified
9 | CTC/Attention | Test WER | 1.5 | | Unverified
10 | Whisper | Test WER | 1.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SpatialNet | CER | 14.5 | | Unverified
2 | CleanMel-L-mask | CER | 14.4 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Conformer | Test WER | 15.32 | | Unverified
2 | Whisper-largev3-finetuned | Test WER | 10.82 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Conformer Transducer | WER (%) | 1.89 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DistillAV | WER | 1.4 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Conformer Transducer | WER (%) | 4.28 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Conformer Transducer | WER (%) | 8.04 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Conformer Transducer | WER (%) | 3.36 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Conformer Transducer (German) | WER (%) | 8.98 | | Unverified
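Every table above scores models by word error rate (WER) or character error rate (CER): the number of word-level (or character-level) substitutions, deletions, and insertions needed to turn the hypothesis into the reference, divided by the reference length. The sketch below is illustrative only, not the scoring code of any leaderboard entry; the `wer` helper is our own name, and it assumes simple whitespace tokenization with no text normalization (real benchmarks typically normalize casing and punctuation first).

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: (substitutions + deletions + insertions) / reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Levenshtein distance over words, computed by dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j          # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + cost, # match or substitution
            )
    return d[len(ref)][len(hyp)] / len(ref)

# One dropped word out of a six-word reference: WER = 1/6, i.e. 16.67%.
print(round(wer("the cat sat on the mat", "the cat sat on mat") * 100, 2))  # prints 16.67
```

CER is the same computation with the strings treated as character sequences instead of word lists, which is why it is the usual metric for languages such as Mandarin where word segmentation is ambiguous.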