Rethinking Self-Attention: Towards Interpretability in Neural Parsing
Khalil Mrini, Franck Dernoncourt, Quan Tran, Trung Bui, Walter Chang, Ndapa Nakashole
Code
- github.com/KhalilMrini/LAL-Parser (official implementation, PyTorch)
- github.com/kh8fb/LAL-Parser-Server (PyTorch)
Abstract
Attention mechanisms have improved the performance of NLP tasks while allowing models to remain explainable. Self-attention is now widely used, but interpretability is difficult due to the large number of attention distributions. Recent work has shown that model representations can benefit from label-specific information, which also facilitates interpretation of predictions. We introduce the Label Attention Layer: a new form of self-attention in which attention heads represent labels. We test the new layer in constituency and dependency parsing experiments and show that our model achieves new state-of-the-art results for both tasks on both the Penn Treebank (PTB) and the Chinese Treebank. Additionally, our model requires fewer self-attention layers than existing work. Finally, we find that the Label Attention heads learn relations between syntactic categories and suggest pathways for analyzing errors.
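The abstract describes the Label Attention Layer only at a high level: each attention head stands for a label, so every head produces a single attention distribution over the input tokens. The sketch below is one minimal reading of that idea, in which each head replaces the usual query projection with a single learned query vector. The class name, dimensions, and the decision to return per-label context vectors are illustrative assumptions, not the authors' implementation (which lives in the linked PyTorch repositories).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LabelAttention(nn.Module):
    """Sketch of a label-attention head bank: each head owns one learned
    query vector standing in for a label, yielding one attention
    distribution per label over the input tokens."""

    def __init__(self, d_model: int, d_head: int, num_labels: int):
        super().__init__()
        # One learned query vector per label (i.e., per head).
        self.label_queries = nn.Parameter(torch.randn(num_labels, d_head))
        self.key_proj = nn.Linear(d_model, d_head, bias=False)
        self.value_proj = nn.Linear(d_model, d_head, bias=False)
        self.scale = d_head ** -0.5

    def forward(self, x: torch.Tensor):
        # x: (batch, seq_len, d_model)
        keys = self.key_proj(x)      # (batch, seq_len, d_head)
        values = self.value_proj(x)  # (batch, seq_len, d_head)
        # One score per (label, token) pair: (batch, num_labels, seq_len).
        scores = torch.einsum("ld,bsd->bls", self.label_queries, keys) * self.scale
        attn = F.softmax(scores, dim=-1)
        # Per-label context vectors; these can be combined with the token
        # representations downstream to build label-aware word vectors.
        context = torch.einsum("bls,bsd->bld", attn, values)
        return context, attn


# Hypothetical usage with random inputs and made-up sizes.
layer = LabelAttention(d_model=512, d_head=64, num_labels=112)
tokens = torch.randn(2, 10, 512)
ctx, attn = layer(tokens)
print(ctx.shape, attn.shape)  # torch.Size([2, 112, 64]) torch.Size([2, 112, 10])
```

Because each head emits exactly one distribution per sentence, the `attn` tensor can be inspected directly to see which tokens a given label head attends to, which is the interpretability argument the abstract makes.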
Tasks
- Constituency Parsing
- Dependency Parsing
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Penn Treebank | Label Attention Layer + HPSG + XLNet | F1 score | 96.38 | — | Unverified |