SOTAVerified

Acceptability Judgements via Examining the Topology of Attention Maps

2022-05-19 · Code Available

Daniil Cherniavskii, Eduard Tulchinskii, Vladislav Mikhailov, Irina Proskurina, Laida Kushnareva, Ekaterina Artemova, Serguei Barannikov, Irina Piontkovskaya, Dmitri Piontkovski, Evgeny Burnaev


Abstract

The role of the attention mechanism in encoding linguistic knowledge has received special interest in NLP. However, the ability of attention heads to judge the grammatical acceptability of a sentence has been underexplored. This paper approaches the paradigm of acceptability judgments with topological data analysis (TDA), showing that the geometric properties of the attention graph can be efficiently exploited for two standard practices in linguistics: binary judgments and linguistic minimal pairs. Topological features enhance the BERT-based acceptability classifier scores by 8%-24% on CoLA in three languages (English, Italian, and Swedish). By revealing the topological discrepancy between attention maps of minimal pairs, we achieve human-level performance on the BLiMP benchmark, outperforming nine statistical and Transformer LM baselines. At the same time, TDA provides the foundation for analyzing the linguistic functions of attention heads and interpreting the correspondence between the graph features and grammatical phenomena.
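The abstract's core idea, treating an attention map as a weighted graph and extracting topological invariants from it, can be illustrated with a minimal sketch. This is not the paper's implementation; it is a hedged toy version assuming the common TDA-for-attention recipe: symmetrise the attention matrix, threshold it at several levels, and record simple graph invariants (connected components, edges, independent cycles) at each level as features.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def attention_graph_features(attn, thresholds=(0.01, 0.05, 0.1, 0.25)):
    """Toy topological features of an attention map viewed as a graph.

    For each threshold t, keep edges whose symmetrised attention weight
    exceeds t, then record: number of connected components (Betti-0),
    number of edges, and number of independent cycles
    (Betti-1 = E - V + components).
    """
    n = attn.shape[0]
    sym = np.maximum(attn, attn.T)          # make the graph undirected
    feats = []
    for t in thresholds:
        adj = (sym > t).astype(int)
        np.fill_diagonal(adj, 0)            # drop self-loops
        n_comp, _ = connected_components(csr_matrix(adj), directed=False)
        n_edges = int(adj.sum() // 2)
        n_cycles = n_edges - n + n_comp     # first Betti number
        feats.extend([n_comp, n_edges, n_cycles])
    return np.array(feats)

# Hypothetical 4-token "attention map": two strongly linked token pairs.
attn = np.array([
    [0.70, 0.20, 0.05, 0.05],
    [0.20, 0.70, 0.05, 0.05],
    [0.05, 0.05, 0.70, 0.20],
    [0.05, 0.05, 0.20, 0.70],
])
print(attention_graph_features(attn))
```

In the paper's setting, such per-head feature vectors would be concatenated across heads and layers and fed to a downstream acceptability classifier; the threshold grid and invariant set here are illustrative choices, not the authors'.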

Benchmark Results

| Dataset  | Model                        | Metric   | Claimed | Verified | Status     |
|----------|------------------------------|----------|---------|----------|------------|
| CoLA     | En-BERT + TDA                | Accuracy | 82.1    |          | Unverified |
| CoLA     | En-BERT + TDA + PCA          | Accuracy | 88.6    |          | Unverified |
| CoLA Dev | En-BERT (pre-trained) + TDA  | MCC      | 0.42    |          | Unverified |
| CoLA Dev | En-BERT + TDA                | Accuracy | 88.6    |          | Unverified |
| CoLA Dev | XLM-R (pre-trained) + TDA    | Accuracy | 73      |          | Unverified |
| DaLAJ    | Sw-BERT + H0M                | Accuracy | 76.9    |          | Unverified |
| ItaCoLA  | XLM-R + TDA                  | MCC      | 0.68    |          | Unverified |
| ItaCoLA  | It-BERT (pre-trained) + TDA  | MCC      | 0.48    |          | Unverified |
