SOTAVerified

Jasper: An End-to-End Convolutional Neural Acoustic Model

2019-04-05 · Code Available

Jason Li, Vitaly Lavrukhin, Boris Ginsburg, Ryan Leary, Oleksii Kuchaiev, Jonathan M. Cohen, Huyen Nguyen, Ravi Teja Gadde


Abstract

In this paper, we report state-of-the-art results on LibriSpeech among end-to-end speech recognition models without any external training data. Our model, Jasper, uses only 1D convolutions, batch normalization, ReLU, dropout, and residual connections. To improve training, we further introduce a new layer-wise optimizer called NovoGrad. Through experiments, we demonstrate that the proposed deep architecture performs as well or better than more complex choices. Our deepest Jasper variant uses 54 convolutional layers. With this architecture, we achieve 2.95% WER using a beam-search decoder with an external neural language model and 3.86% WER with a greedy decoder on LibriSpeech test-clean. We also report competitive results on the Wall Street Journal and the Hub5'00 conversational evaluation datasets.
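The abstract lists Jasper's building blocks: 1D convolutions, batch normalization, ReLU, dropout, and residual connections. A minimal PyTorch sketch of one such sub-block is shown below, assuming the conv/BN/ReLU/dropout ordering described in the abstract; the channel count, kernel size, repeat count, and dropout rate here are illustrative, not the paper's actual hyperparameters, and the residual placement is simplified relative to the full Jasper DR ("dense residual") architecture.

```python
import torch
import torch.nn as nn

class JasperBlockSketch(nn.Module):
    """Illustrative Jasper-style block: (Conv1d -> BatchNorm1d -> ReLU -> Dropout)
    repeated several times, with a residual connection from the block input.
    Hyperparameters are placeholders, not the paper's settings."""

    def __init__(self, channels: int, kernel_size: int = 11,
                 repeat: int = 3, dropout: float = 0.2):
        super().__init__()
        layers = []
        for _ in range(repeat):
            layers += [
                # 'same' padding keeps the time dimension unchanged
                nn.Conv1d(channels, channels, kernel_size,
                          padding=kernel_size // 2),
                nn.BatchNorm1d(channels),
                nn.ReLU(),
                nn.Dropout(dropout),
            ]
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection: add the block input to the sub-block output.
        return self.body(x) + x

block = JasperBlockSketch(channels=64)
x = torch.randn(2, 64, 100)  # (batch, channels, time), e.g. filterbank features
out = block(x)
print(out.shape)  # torch.Size([2, 64, 100])
```

Because every convolution preserves the channel and time dimensions, the residual addition needs no projection in this simplified form; the full model stacks many such blocks (54 convolutional layers in the deepest variant).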

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Hub5'00 SwitchBoard | Jasper DR 10x5 | SwitchBoard | 7.8 | | Unverified |
| LibriSpeech test-clean | Jasper DR 10x5 (+ Time/Freq Masks) | Word Error Rate (WER) | 2.84 | | Unverified |
| LibriSpeech test-clean | Jasper DR 10x5 | Word Error Rate (WER) | 2.95 | | Unverified |
| LibriSpeech test-other | Jasper DR 10x5 (+ Time/Freq Masks) | Word Error Rate (WER) | 7.84 | | Unverified |
| LibriSpeech test-other | Jasper DR 10x5 | Word Error Rate (WER) | 8.79 | | Unverified |
| WSJ eval92 | Jasper 10x3 | Word Error Rate (WER) | 6.9 | | Unverified |
