
Data and Parameter Scaling Laws for Neural Machine Translation

2021-05-01 · ACL ARR May 2021

Mitchell A. Gordon, Kevin Duh, Jared Kaplan


Abstract

We observe that the development cross-entropy loss of supervised neural machine translation models scales like a power law with the amount of training data and the number of non-embedding parameters in the model. We discuss some practical implications of these results, such as predicting the BLEU score achieved by large-scale models and predicting the return on investment of labeling data in low-resource language pairs.
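To make the scaling claim concrete, below is a minimal sketch of fitting such a law to development loss measured at several dataset sizes. It assumes a saturating power law of the form L(D) = L_inf + C · D^(−α), one common parameterization of this kind of result; the paper's exact functional form may differ, and the data points here are invented for illustration, not the authors' measurements.

```python
# Minimal sketch: fit a power law relating dev cross-entropy loss to
# training-set size D, then extrapolate. Assumed form (not necessarily
# the paper's): L(D) = L_inf + C * D**(-alpha).
import numpy as np
from scipy.optimize import curve_fit

def power_law(D, L_inf, C, alpha):
    """Dev cross-entropy loss as a saturating power law in data size D."""
    return L_inf + C * D ** (-alpha)

# Hypothetical (sentence-pair count, dev loss) measurements, e.g. from
# training runs on nested subsamples of a parallel corpus.
D = np.array([1e4, 3e4, 1e5, 3e5, 1e6, 3e6])
loss = np.array([5.1, 4.4, 3.8, 3.3, 2.9, 2.6])

# Fit the three free parameters; p0 is a rough initial guess.
params, _ = curve_fit(power_law, D, loss, p0=[1.0, 50.0, 0.3])
L_inf, C, alpha = params
print(f"L_inf={L_inf:.2f}, C={C:.1f}, alpha={alpha:.3f}")

# Extrapolate: predicted dev loss with 10M sentence pairs, which is the
# kind of prediction the abstract's ROI discussion relies on.
print(f"predicted loss at D=1e7: {power_law(1e7, *params):.2f}")
```

The same fitting procedure applies along the parameter axis, with non-embedding parameter count N in place of D.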
