
NeurST: Neural Speech Translation Toolkit

2020-12-18 · ACL 2021 · Code Available

Chengqi Zhao, Mingxuan Wang, Qianqian Dong, Rong Ye, Lei Li


Abstract

NeurST is an open-source toolkit for neural speech translation. The toolkit mainly focuses on end-to-end speech translation and is easy to use, modify, and extend for advanced speech translation research and products. NeurST aims at facilitating speech translation research for NLP researchers and at building reliable benchmarks for this field. It provides step-by-step recipes for feature extraction, data preprocessing, distributed training, and evaluation. In this paper, we introduce the framework design of NeurST and report experimental results on several benchmark datasets, which can serve as reliable baselines for future research. The toolkit is publicly available at https://github.com/bytedance/neurst/ and we will continuously update the performance of NeurST alongside other toolkits and studies at https://st-benchmark.github.io/.

Tasks

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| libri-trans | Transformer + ASR Pretrain + SpecAug | Case-insensitive sacreBLEU | 17.2 | | Unverified |
| libri-trans | Transformer + ASR Pretrain | Case-insensitive sacreBLEU | 16.5 | | Unverified |
| MuST-C EN->ES | Transformer + ASR Pretrain + SpecAug | Case-sensitive sacreBLEU | 27.4 | | Unverified |
| MuST-C EN->ES | Transformer + ASR Pretrain | Case-sensitive sacreBLEU | 26.8 | | Unverified |
| MuST-C EN->FR | Transformer + ASR Pretrain + SpecAug | Case-sensitive sacreBLEU | 33.3 | | Unverified |
| MuST-C EN->FR | Transformer + ASR Pretrain | Case-sensitive sacreBLEU | 32.3 | | Unverified |
