Robust Training under Linguistic Adversity

2017-04-01 · EACL 2017 · Code Available

Yitong Li, Trevor Cohn, Timothy Baldwin

Abstract

Deep neural networks have achieved remarkable results across many language processing tasks; however, they have been shown to be susceptible to overfitting and highly sensitive to noise, including adversarial attacks. In this work, we propose a linguistically-motivated approach for training robust models, based on exposing the model to corrupted text examples at training time. We consider several flavours of linguistically plausible corruption, including lexical-semantic and syntactic methods. Empirically, we evaluate our method with a convolutional neural model across a range of sentiment analysis datasets. Compared with a baseline and the dropout method, our approach achieves better overall performance.
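The training-time corruption described in the abstract amounts to on-the-fly data augmentation: each example may be noised before the model sees it. Below is a minimal, hypothetical Python sketch of one such corruption, lexical-semantic substitution from a toy synonym table. The function name, the table, and the substitution probability are illustrative assumptions, not the paper's actual implementation.

```python
import random

# Toy synonym table standing in for a real lexical resource
# (e.g. a thesaurus or embedding nearest neighbours). Purely
# illustrative; the paper does not prescribe this resource.
SYNONYMS = {
    "good": ["fine", "decent", "great"],
    "bad": ["poor", "awful", "terrible"],
    "movie": ["film", "picture"],
}

def corrupt_lexically(tokens, p=0.3, rng=random):
    """Replace each known token with a synonym with probability p.

    A sketch of one linguistically plausible corruption; the paper
    also considers syntactic variants not shown here.
    """
    out = []
    for tok in tokens:
        if tok.lower() in SYNONYMS and rng.random() < p:
            out.append(rng.choice(SYNONYMS[tok.lower()]))
        else:
            out.append(tok)
    return out

# At training time each example would be corrupted afresh before
# being fed to the model, e.g. inside the batching loop.
if __name__ == "__main__":
    sent = "the movie was good but the ending was bad".split()
    print(corrupt_lexically(sent, p=0.5))
```

Because the corruption is re-sampled at each epoch, the model rarely sees the same noisy variant twice, which is what discourages overfitting to exact surface forms.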
