SOTAVerified

FunASR: A Fundamental End-to-End Speech Recognition Toolkit

2023-05-18 · Code Available

Zhifu Gao, Zerui Li, JiaMing Wang, Haoneng Luo, Xian Shi, Mengzhe Chen, Yabin Li, Lingyun Zuo, Zhihao Du, Zhangyu Xiao, Shiliang Zhang


Abstract

This paper introduces FunASR, an open-source speech recognition toolkit designed to bridge the gap between academic research and industrial applications. FunASR offers models trained on large-scale industrial corpora and the ability to deploy them in applications. The toolkit's flagship model, Paraformer, is a non-autoregressive end-to-end speech recognition model that has been trained on a manually annotated Mandarin speech recognition dataset that contains 60,000 hours of speech. To improve the performance of Paraformer, we have added timestamp prediction and hotword customization capabilities to the standard Paraformer backbone. In addition, to facilitate model deployment, we have open-sourced a voice activity detection model based on the Feedforward Sequential Memory Network (FSMN-VAD) and a text post-processing punctuation model based on the controllable time-delay Transformer (CT-Transformer), both of which were trained on industrial corpora. These functional modules provide a solid foundation for building high-precision long audio speech recognition services. Compared to other models trained on open datasets, Paraformer demonstrates superior performance.

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| AISHELL-1 | Paraformer-large | Word Error Rate (WER) | 1.95 | — | Unverified |
| AISHELL-1 | Paraformer | Word Error Rate (WER) | 4.95 | — | Unverified |
| AISHELL-2 | Paraformer-large | Word Error Rate (WER) | 2.85 | — | Unverified |
| AISHELL-2 | Paraformer | Word Error Rate (WER) | 5.73 | — | Unverified |
| WenetSpeech | Paraformer-large | Character Error Rate (CER) | 6.97 | — | Unverified |
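The numbers above are WER and CER percentages. For reference, here is a minimal sketch of the standard edit-distance computation behind both metrics; this is illustrative only, not FunASR's own scoring code.

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (rolling 1-D DP)."""
    n = len(hyp)
    dp = list(range(n + 1))  # dp[j] = distance(ref[:i], hyp[:j])
    for i in range(1, len(ref) + 1):
        prev, dp[0] = dp[0], i  # prev holds dp[i-1][j-1]
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,      # deletion of ref token
                        dp[j - 1] + 1,  # insertion of hyp token
                        prev + (ref[i - 1] != hyp[j - 1]))  # substitution
            prev = cur
    return dp[n]

def wer(reference, hypothesis):
    """Word Error Rate: edit distance over word tokens / reference length."""
    ref = reference.split()
    return edit_distance(ref, hypothesis.split()) / len(ref)

def cer(reference, hypothesis):
    """Character Error Rate: same distance over characters (the usual
    metric for Mandarin, as on WenetSpeech)."""
    return edit_distance(list(reference), list(hypothesis)) / len(reference)
```

Multiplying by 100 gives the percentage figures reported in the table.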
