
Masked Adversarial Generation for Neural Machine Translation

2021-09-01

Badr Youbi Idrissi, Stéphane Clinchant


Abstract

Attacking Neural Machine Translation models is an inherently combinatorial task on discrete sequences, typically solved with approximate heuristics. Most methods use the gradient to attack the model on each sample independently. Instead of mechanically applying the gradient, could we learn to produce meaningful adversarial attacks? In contrast to existing approaches, we learn to attack a model by training an adversarial generator based on a language model. We propose the Masked Adversarial Generation (MAG) model, which learns to perturb the translation model throughout the training process. Experiments show that it improves the robustness of machine translation models while being faster than competing methods.
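The core idea — masking a token and letting a language model propose fluent replacements that hurt the victim model most — can be illustrated with a minimal toy sketch. This is a hypothetical simplification, not the paper's method: `VOCAB`, `lm_propose`, and `victim_loss` are invented stand-ins for a real masked language model and translation loss.

```python
import random

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "mat", "rug"]

def lm_propose(sentence, pos, k=3):
    """Toy masked-LM proposal: k candidate fillers for sentence[pos]."""
    candidates = [w for w in VOCAB if w != sentence[pos]]
    return random.sample(candidates, k)

def victim_loss(sentence):
    """Toy stand-in for the translation model's loss on a sentence."""
    # Pretend words later in VOCAB are rarer and harder to translate.
    return sum(VOCAB.index(w) for w in sentence if w in VOCAB)

def masked_adversarial_step(sentence, rng=random):
    """Mask one position, try LM-proposed fillers, keep the worst-case edit."""
    pos = rng.randrange(len(sentence))
    best, best_loss = list(sentence), victim_loss(sentence)
    for cand in lm_propose(sentence, pos):   # fluent replacements only
        trial = list(sentence)
        trial[pos] = cand
        loss = victim_loss(trial)
        if loss > best_loss:                 # keep the most harmful edit
            best, best_loss = trial, loss
    return best, best_loss

random.seed(0)
sent = ["the", "cat", "sat", "on", "the", "mat"]
adv, adv_loss = masked_adversarial_step(sent)
```

Because the attack only moves to a candidate when the loss strictly increases, the returned loss is never below that of the original sentence; in MAG proper, the generator producing the candidates is itself trained jointly with the translation model, which this sketch does not capture.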
