The IBM 2016 English Conversational Telephone Speech Recognition System
2016-04-27
George Saon, Tom Sercu, Steven Rennie, Hong-Kwang J. Kuo
Abstract
We describe a collection of acoustic and language modeling techniques that lowered the word error rate of our English conversational telephone LVCSR system to a record 6.6% on the Switchboard subset of the Hub5 2000 evaluation test set. On the acoustic side, we use a score fusion of three strong models: recurrent nets with maxout activations, very deep convolutional nets with 3x3 kernels, and bidirectional long short-term memory nets which operate on FMLLR and i-vector features. On the language modeling side, we use an updated model "M" and hierarchical neural network LMs.
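The score fusion mentioned above can be sketched as a log-linear combination of the per-frame posteriors produced by the individual acoustic models. This is a minimal illustration, not the paper's implementation; the fusion weights and the toy dimensions below are assumptions for demonstration only.

```python
import numpy as np

def fuse_scores(posteriors, weights):
    """Log-linear frame-level score fusion of several acoustic models.

    posteriors: list of (frames, classes) posterior arrays, one per model.
    weights: per-model interpolation weights (illustrative values, not
             taken from the paper).
    Returns renormalized fused posteriors of shape (frames, classes).
    """
    # Weighted sum of log-posteriors; small epsilon guards against log(0).
    log_post = sum(w * np.log(p + 1e-10) for w, p in zip(weights, posteriors))
    fused = np.exp(log_post)
    # Renormalize so each frame's scores again form a distribution.
    return fused / fused.sum(axis=1, keepdims=True)

# Toy example: three models, 4 frames, 5 output classes (hypothetical sizes).
rng = np.random.default_rng(0)
models = [rng.dirichlet(np.ones(5), size=4) for _ in range(3)]
fused = fuse_scores(models, weights=[0.4, 0.3, 0.3])
```

In practice the fused scores would feed the decoder in place of any single model's output; equal or tuned weights are a design choice the sketch leaves open.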
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Hub5'00 full (SWB+CH) | RNN + VGG + LSTM acoustic model trained on SWB+Fisher+CH, N-gram + "model M" + NNLM language model | Word error rate (%) | 12.2 | — | Unverified |
| Hub5'00 Switchboard subset | RNN + VGG + LSTM acoustic model trained on SWB+Fisher+CH, N-gram + "model M" + NNLM language model | Word error rate (%) | 6.6 | — | Unverified |
| Hub5'00 Switchboard subset | IBM 2016 | Word error rate (%) | 6.9 | — | Unverified |