The Curious Case of Hallucinations in Neural Machine Translation

2021-04-14 · NAACL 2021

Vikas Raunak, Arul Menezes, Marcin Junczys-Dowmunt


Abstract

In this work, we study hallucinations in Neural Machine Translation (NMT), which lie at an extreme end on the spectrum of NMT pathologies. Firstly, we connect the phenomenon of hallucinations under source perturbation to the Long-Tail theory of Feldman (2020), and present an empirically validated hypothesis that explains hallucinations under source perturbation. Secondly, we consider hallucinations under corpus-level noise (without any source perturbation) and demonstrate that two prominent types of natural hallucinations (detached and oscillatory outputs) could be generated and explained through specific corpus-level noise patterns. Finally, we elucidate the phenomenon of hallucination amplification in popular data-generation processes such as Backtranslation and sequence-level Knowledge Distillation.
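Of the two natural hallucination types named above, oscillatory output (a translation that cycles through the same phrase over and over) lends itself to a simple surface-level check: count how often any single n-gram repeats. The sketch below is illustrative only — the n-gram size and repeat threshold are assumptions, not the paper's exact detection rule.

```python
from collections import Counter


def is_oscillatory(text: str, n: int = 2, min_repeats: int = 3) -> bool:
    """Heuristically flag an oscillatory hallucination: an output in which
    some n-gram repeats far more often than it would in natural text.

    `n` and `min_repeats` are illustrative defaults, not values from the paper.
    """
    tokens = text.split()
    if len(tokens) < n:
        return False
    # Count every contiguous n-gram in the output.
    ngrams = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    # If any single n-gram dominates, treat the output as oscillatory.
    return max(ngrams.values()) >= min_repeats
```

A fluent translation rarely repeats any bigram, so it passes; a degenerate loop like "the the the the the the" trips the threshold immediately.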
