Using Speech Synthesis to Train End-to-End Spoken Language Understanding Models

2019-10-21 · Code Available

Loren Lugosch, Brett Meyer, Derek Nowrouzezahrai, Mirco Ravanelli


Abstract

End-to-end models are an attractive new approach to spoken language understanding (SLU) in which the meaning of an utterance is inferred directly from the raw audio without employing the standard pipeline composed of a separately trained speech recognizer and natural language understanding module. The downside of end-to-end SLU is that in-domain speech data must be recorded to train the model. In this paper, we propose a strategy for overcoming this requirement in which speech synthesis is used to generate a large synthetic training dataset from several artificial speakers. Experiments on two open-source SLU datasets confirm the effectiveness of our approach, both as a sole source of training data and as a form of data augmentation.
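The augmentation strategy described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `synthesize` is a hypothetical stand-in for a real multi-speaker TTS engine, and the toy transcripts and intent labels are invented for the example. The point is only to show how rendering each labeled transcript with several artificial speakers multiplies the labeled audio available for training, and how the synthetic examples can be mixed with real recordings.

```python
import random

# Hypothetical stand-in for a real TTS engine; a real implementation
# would return audio samples for `text` in the voice of `speaker_id`.
def synthesize(text, speaker_id):
    return f"<audio:{speaker_id}:{text}>"

def make_synthetic_set(transcripts, num_speakers):
    """Render every (transcript, intent) pair with several artificial
    speakers, multiplying the amount of labeled training audio."""
    return [
        (synthesize(text, spk), intent)
        for text, intent in transcripts
        for spk in range(num_speakers)
    ]

# Toy labeled transcripts in the style of an SLU intent dataset.
transcripts = [
    ("turn on the kitchen lights", "activate_lights"),
    ("set the bedroom lamp to blue", "set_color"),
]

synthetic = make_synthetic_set(transcripts, num_speakers=3)

# A (placeholder) real recording; in the paper's augmentation setting,
# real and synthetic examples are combined into one training set.
real = [("<audio:real:turn off the lights>", "deactivate_lights")]
train_set = real + synthetic
random.shuffle(train_set)
```

In the "sole source of training data" setting, the training set would consist only of `synthetic`; in the augmentation setting, real and synthetic examples are pooled as above.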

Benchmark Results

Dataset | Model | Metric | Claimed | Verified | Status
Snips-SmartLights | Real + synthetic | Accuracy (%) | 71.4 | | Unverified
