LexicalAT: Lexical-Based Adversarial Reinforcement Training for Robust Sentiment Classification

2019-11-01 · IJCNLP 2019

Jingjing Xu, Liang Zhao, Hanqi Yan, Qi Zeng, Yun Liang, Xu Sun

Abstract

Recent work has shown that current text classification models are fragile and sensitive to simple perturbations. In this work, we propose a novel adversarial training approach, LexicalAT, to improve the robustness of current classification models. The proposed approach consists of a generator and a classifier. The generator learns to generate examples that attack the classifier, while the classifier learns to defend against these attacks. To increase the diversity of attacks, the generator uses a large-scale lexical knowledge base, WordNet, to generate attacking examples by replacing some words in training examples with their synonyms (e.g., sad and unhappy), neighbor words (e.g., fox and wolf), or super-superior words (e.g., chair and armchair). Because the generation step in the generator is discrete, we use policy gradient, a reinforcement learning approach, to train the two modules. Experiments show that LexicalAT outperforms strong baselines and reduces test errors on various neural networks, including CNN, RNN, and BERT.
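The attack step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: a hand-built toy lexicon stands in for WordNet, and random substitution stands in for the learned generator policy (which in the paper is trained with policy gradient).

```python
import random

# Toy stand-in for WordNet: maps a word to candidate replacements
# (synonyms, neighbor words, or related words). In the paper these
# candidates come from WordNet; this lexicon is illustrative only.
TOY_LEXICON = {
    "sad": ["unhappy"],
    "fox": ["wolf"],
    "chair": ["armchair"],
}

def generate_attack(tokens, lexicon, rng, replace_prob=0.5):
    """Return an attacking example built from `tokens`.

    Each token that has candidates in the lexicon is replaced with
    probability `replace_prob`. The real generator is a learned policy;
    here substitutions are sampled uniformly at random.
    """
    attacked = []
    for tok in tokens:
        candidates = lexicon.get(tok)
        if candidates and rng.random() < replace_prob:
            attacked.append(rng.choice(candidates))
        else:
            attacked.append(tok)
    return attacked

rng = random.Random(0)
sentence = "the sad fox sat on a chair".split()
print(generate_attack(sentence, TOY_LEXICON, rng))
```

In the full method, the classifier is then trained on both clean and attacked examples, and the generator's substitution policy receives a reward based on how much the attack degrades the classifier, optimized with policy gradient since the substitutions are discrete.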
