
Semantic Noise Matters for Neural Natural Language Generation

2019-11-10 · WS 2019 · Code Available

Ondřej Dušek, David M. Howcroft, Verena Rieser

Abstract

Neural natural language generation (NNLG) systems are known for their pathological outputs, i.e. generating text which is unrelated to the input specification. In this paper, we show the impact of semantic noise on state-of-the-art NNLG models which implement different semantic control mechanisms. We find that cleaned data can improve semantic correctness by up to 97%, while maintaining fluency. We also find that the most common error is omitting information, rather than hallucination.
