
Demystifying Learning of Unsupervised Neural Machine Translation

2021-01-01

Guanlin Li, Lemao Liu, Taro Watanabe, Conghui Zhu, Tiejun Zhao


Abstract

Unsupervised Neural Machine Translation (UNMT) has received great attention in recent years. Although tremendous empirical improvements have been achieved, theory-oriented investigation is still lacking, and thus some fundamental questions, such as why certain training protocols succeed or fail and under what circumstances, are not yet well understood. This paper attempts to provide theoretical insights into these questions. Specifically, following the methodology of comparative study, we leverage two perspectives, i) marginal likelihood maximization and ii) mutual information from information theory, to understand the different learning effects of the standard training protocol and its variants. Our detailed analyses reveal several critical conditions for the successful training of UNMT.
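As an illustration of the first perspective (not taken from the paper itself, but a common formulation in UNMT analyses): with only monolingual target sentences $y$ available, the source sentence $x$ can be treated as a latent variable, so maximizing the marginal likelihood of the monolingual data means optimizing

$$
\log p_\theta(y) \;=\; \log \sum_{x} p_\theta(y \mid x)\, p(x)
\;\ge\; \mathbb{E}_{x \sim q(x \mid y)}\big[\log p_\theta(y \mid x)\big] - \mathrm{KL}\big(q(x \mid y)\,\|\,p(x)\big),
$$

where $q(x \mid y)$ is an approximate posterior. Under this reading, back-translation can be viewed as a crude approximation that samples a pseudo-source $\hat{x} \sim q(x \mid y)$ from the reverse translation model and maximizes $\log p_\theta(y \mid \hat{x})$, dropping the KL term. This sketch is only meant to make the "marginal likelihood maximization" perspective concrete; the paper's own derivations should be consulted for the precise formulation.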
