
Rethinking Self-Attention: Towards Interpretability in Neural Parsing

2019-11-10 · Findings of the Association for Computational Linguistics · Code Available

Khalil Mrini, Franck Dernoncourt, Quan Tran, Trung Bui, Walter Chang, Ndapa Nakashole


Abstract

Attention mechanisms have improved the performance of NLP tasks while allowing models to remain explainable. Self-attention is now widely used, but interpretability is difficult because of the large number of attention distributions it produces. Recent work has shown that model representations can benefit from label-specific information, which also facilitates the interpretation of predictions. We introduce the Label Attention Layer: a new form of self-attention in which each attention head represents a label. We test our novel layer in constituency and dependency parsing experiments and show that our model obtains new state-of-the-art results for both tasks on both the Penn Treebank (PTB) and the Chinese Treebank. Additionally, our model requires fewer self-attention layers than existing work. Finally, we find that the Label Attention heads learn relations between syntactic categories and offer pathways for analyzing errors.
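The core idea is concise enough to sketch. Below is a minimal, hypothetical PyTorch rendering of label attention as the abstract describes it: each head carries a single learned query vector tied to one label, so each head yields exactly one attention distribution over the sentence, which is what makes the heads individually interpretable. Class and parameter names, dimensions, and the omission of residual connections and span scoring are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelAttention(nn.Module):
    """One attention head per label, each with a single learned query vector.

    Because a head's query is a parameter rather than a projection of the
    input, every head produces exactly one attention distribution over the
    sentence, which can be read as that label's view of the input.
    """

    def __init__(self, num_labels: int, d_model: int, d_head: int):
        super().__init__()
        # One learned query vector per label head (sizing is illustrative).
        self.queries = nn.Parameter(torch.randn(num_labels, d_head))
        self.key_proj = nn.Linear(d_model, d_head)
        self.value_proj = nn.Linear(d_model, d_head)
        self.scale = d_head ** 0.5

    def forward(self, x: torch.Tensor):
        # x: (seq_len, d_model) token representations from a sentence encoder
        keys = self.key_proj(x)      # (seq_len, d_head)
        values = self.value_proj(x)  # (seq_len, d_head)
        # Scaled dot-product attention: one distribution per label head
        attn = F.softmax(self.queries @ keys.T / self.scale, dim=-1)  # (num_labels, seq_len)
        head_outputs = attn @ values  # (num_labels, d_head) label-specific summaries
        return head_outputs, attn

# Illustrative usage: a 16-token sentence with 32 label heads (sizes are made up)
layer = LabelAttention(num_labels=32, d_model=768, d_head=64)
x = torch.randn(16, 768)
outputs, attn = layer(x)  # attn[i] is head i's distribution over the 16 tokens
```

Collapsing each head's query to a single vector is what buys interpretability here: there is exactly one distribution per label to inspect, rather than one distribution per token per head as in standard self-attention.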

Tasks

Constituency Parsing · Dependency Parsing

Benchmark Results

Dataset       | Model                                | Metric   | Claimed | Verified | Status
Penn Treebank | Label Attention Layer + HPSG + XLNet | F1 score | 96.38   |          | Unverified

Reproductions