Large Margin Neural Language Model

2018-08-27 · EMNLP 2018

Jiaji Huang, Yi Li, Wei Ping, Liang Huang

Abstract

We propose a large margin criterion for training neural language models. Conventionally, neural language models are trained by minimizing perplexity (PPL) on grammatical sentences. However, we demonstrate that PPL may not be the best metric to optimize in some tasks, and further propose a large margin formulation. The proposed method aims to enlarge the margin between the "good" and "bad" sentences in a task-specific sense. It is trained end-to-end and can be widely applied to tasks that involve re-scoring of generated text. Compared with minimum-PPL training, our method achieves up to a 1.1-point WER reduction in speech recognition and a 1.0-point BLEU improvement in machine translation.
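The margin criterion described in the abstract can be illustrated as a pairwise hinge loss over language-model scores: the log-likelihood of a "good" sentence should exceed that of a paired "bad" sentence by at least a fixed margin. The sketch below is illustrative only; the function name, the use of raw log-likelihoods as scores, and the margin value are assumptions, not the paper's exact formulation.

```python
def large_margin_loss(ll_good, ll_bad, margin=1.0):
    """Pairwise hinge loss on the log-likelihood gap between a
    task-specifically 'good' sentence and a paired 'bad' one.

    Illustrative sketch: zero loss once the good sentence's
    log-likelihood beats the bad one's by at least `margin`;
    otherwise the loss grows linearly with the violation.
    """
    return max(0.0, margin - (ll_good - ll_bad))


def batch_large_margin_loss(pairs, margin=1.0):
    """Average the hinge loss over a list of (ll_good, ll_bad) pairs."""
    return sum(large_margin_loss(g, b, margin) for g, b in pairs) / len(pairs)
```

Under this sketch, minimum-PPL training would instead maximize `ll_good` alone; the margin formulation only penalizes pairs where the "bad" hypothesis scores too close to (or above) the "good" one, which matches the re-scoring use case described in the abstract.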
