Evaluating Neural Machine Comprehension Model Robustness to Noisy Inputs and Adversarial Attacks

2020-05-01

Winston Wu, Dustin Arendt, Svitlana Volkova

Abstract

We evaluate machine comprehension models' robustness to noise and adversarial attacks by performing novel perturbations at the character, word, and sentence level. We experiment with different amounts of perturbation to examine model confidence and misclassification rate, and contrast model performance under adversarial training with different embedding types on two benchmark datasets. We demonstrate that ensembling improves model performance. Finally, we analyze factors that affect model behavior under adversarial training and develop a model to predict model errors during adversarial attacks.
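To illustrate the kind of character-level perturbation the abstract describes, here is a minimal sketch of a generic noise function that randomly deletes, substitutes, or duplicates characters at a configurable rate. This is an assumption for illustration only; the paper's actual perturbation operations may differ.

```python
import random
import string


def perturb_chars(text, rate=0.1, seed=0):
    """Apply character-level noise: delete, substitute, or duplicate
    each alphabetic character independently with probability `rate`.

    A hypothetical noise model, not the paper's exact method.
    """
    rng = random.Random(seed)  # seeded for reproducible perturbations
    out = []
    for ch in text:
        if ch.isalpha() and rng.random() < rate:
            op = rng.choice(["delete", "substitute", "duplicate"])
            if op == "delete":
                continue  # drop the character entirely
            elif op == "substitute":
                out.append(rng.choice(string.ascii_lowercase))
            else:  # duplicate
                out.append(ch)
                out.append(ch)
        else:
            out.append(ch)  # unperturbed characters pass through
    return "".join(out)
```

With `rate=0.0` the text passes through unchanged, and a fixed seed makes each perturbation reproducible, which is useful when measuring how misclassification rate grows with the amount of noise.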