Comparing BERT-based Reward Functions for Deep Reinforcement Learning in Machine Translation

2022-10-01 · WAT 2022

Yuki Nakatani, Tomoyuki Kajiwara, Takashi Ninomiya

Abstract

In text generation tasks such as machine translation, models are generally trained with a cross-entropy loss. However, the mismatch between this loss function and the evaluation metric is often problematic, and it is known that the mismatch can be addressed by directly optimizing the evaluation metric with reinforcement learning. In machine translation, previous studies have computed rewards from BLEU, but BLEU correlates poorly with human evaluation. In this study, we investigate the impact on translation quality of reinforcement learning whose rewards come from evaluation metrics that correlate more strongly with human evaluation. Experimental results show that reinforcement learning with BERT-based rewards improves scores across a range of evaluation metrics.
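The approach the abstract describes can be sketched with a minimal REINFORCE loop: sample a hypothesis from the policy, score it with a reward function, and increase the log-probability of the sampled tokens in proportion to the reward. This is an illustrative toy, not the paper's implementation: the vocabulary, the token-level policy, and the simple overlap-based `reward` (a stand-in for a BERT-based metric such as BERTScore) are all assumptions made for the sketch.

```python
import math
import random

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def reward(hyp, ref):
    # Stand-in for a BERT-based metric: here just token overlap with the
    # reference. The paper's point is that this function can be swapped
    # for any metric that correlates well with human judgment.
    return len(set(hyp) & set(ref)) / max(len(ref), 1)

# Toy vocabulary and "reference translation" (illustrative only).
vocab = ["the", "cat", "sat", "dog", "ran"]
ref = ["the", "cat", "sat"]

random.seed(0)
logits = [0.0] * len(vocab)  # a bag-of-tokens "policy", not a real NMT model
lr = 0.5

for step in range(200):
    probs = softmax(logits)
    # Sample a 3-token hypothesis from the current policy.
    idxs = [random.choices(range(len(vocab)), weights=probs)[0]
            for _ in range(3)]
    hyp = [vocab[i] for i in idxs]
    r = reward(hyp, ref)
    # REINFORCE update: grad of log-softmax for each sampled token,
    # scaled by the (non-negative) reward.
    for i in idxs:
        for j in range(len(vocab)):
            grad = (1.0 if j == i else 0.0) - probs[j]
            logits[j] += lr * r * grad

probs = softmax(logits)
```

After training, probability mass should concentrate on the reference tokens, illustrating how the choice of reward function shapes what the policy learns to produce.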
