
Growing Together: Modeling Human Language Learning With n-Best Multi-Checkpoint Machine Translation

2020-06-07 · WS 2020

El Moatez Billah Nagoudi, Muhammad Abdul-Mageed, Hasan Cavusoglu

Abstract

We describe our submission to the 2020 Duolingo Shared Task on Simultaneous Translation And Paraphrase for Language Education (STAPLE) (Mayhew et al., 2020). We view MT models at various training stages (i.e., checkpoints) as human learners at different levels. Hence, we employ an ensemble of multiple checkpoints from the same model to generate translation sequences with varying levels of fluency. From each checkpoint of our best model, we sample the n-best sequences (n = 10) with a beam width of 100. We achieve 37.57 macro F1 with a six-checkpoint ensemble on the official English-to-Portuguese shared task test data, outperforming an Amazon translation system baseline (21.30 macro F1) and demonstrating the utility of our intuitive method.
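
The abstract outlines the full decoding pipeline: decode with a wide beam at several checkpoints of one model, keep the n-best hypotheses from each, and pool them. Below is a minimal sketch of that pipeline, assuming a Hugging Face seq2seq interface; the checkpoint paths, model loading details, and deduplication step are hypothetical illustrations, as the abstract does not specify the authors' implementation framework.

```python
# Sketch of the multi-checkpoint n-best ensemble described above.
# Checkpoint paths and the Hugging Face interface are assumptions,
# not the authors' actual setup.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

CHECKPOINTS = [f"checkpoints/step_{i}" for i in range(1, 7)]  # 6 checkpoints (hypothetical paths)
N_BEST = 10       # n-best sequences kept per checkpoint (n = 10 in the paper)
BEAM_WIDTH = 100  # beam width used during decoding

def translate_ensemble(source: str) -> list[str]:
    """Return the pooled n-best translations across all checkpoints."""
    tokenizer = AutoTokenizer.from_pretrained(CHECKPOINTS[0])
    inputs = tokenizer(source, return_tensors="pt")
    candidates: list[str] = []
    for path in CHECKPOINTS:
        # Each checkpoint plays the role of a learner at a different level,
        # contributing hypotheses of a different fluency.
        model = AutoModelForSeq2SeqLM.from_pretrained(path)
        outputs = model.generate(
            **inputs,
            num_beams=BEAM_WIDTH,
            num_return_sequences=N_BEST,
            early_stopping=True,
        )
        candidates.extend(tokenizer.batch_decode(outputs, skip_special_tokens=True))
    # Deduplicate while preserving order (a plausible pooling step;
    # the paper may combine hypotheses differently).
    return list(dict.fromkeys(candidates))
```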
