Quantifying the Plausibility of Context Reliance in Neural Machine Translation
Gabriele Sarti, Grzegorz Chrupała, Malvina Nissim, Arianna Bisazza
Code:
- github.com/inseq-team/inseq (official, in paper, PyTorch)
- github.com/gsarti/pecore (official, in paper, PyTorch)
Abstract
Establishing whether language models can use contextual information in a human-plausible way is important to ensure their trustworthiness in real-world settings. However, the questions of when and which parts of the context affect model generations are typically tackled separately, and current plausibility evaluations are practically limited to a handful of artificial benchmarks. To address this, we introduce Plausibility Evaluation of Context Reliance (PECoRe), an end-to-end interpretability framework designed to quantify context usage in language models' generations. Our approach leverages model internals to (i) contrastively identify context-sensitive target tokens in generated texts and (ii) link them to contextual cues justifying their prediction. We use PECoRe to quantify the plausibility of context-aware machine translation models, comparing model rationales with human annotations across several discourse-level phenomena. Finally, we apply our method to unannotated model translations to identify context-mediated predictions and highlight instances of (im)plausible context usage throughout generation.
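The contrastive idea behind step (i) can be sketched in miniature: compare the model's next-token distributions for the same generated sequence with and without the preceding context, and flag tokens whose distribution shifts markedly. The sketch below is an assumption-laden illustration, not the paper's implementation: the KL-divergence metric, the threshold value, and the toy distributions are all placeholders chosen for demonstration (PECoRe supports configurable contrastive metrics via the inseq library).

```python
import math


def kl_divergence(p, q):
    """KL(p || q) between two discrete probability distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)


def context_sensitive_tokens(ctx_probs, noctx_probs, threshold=0.1):
    """Step (i) sketch: contrastively flag generated tokens whose
    next-token distribution shifts once the context is removed.

    ctx_probs / noctx_probs: one vocabulary distribution per generated
    token, computed with and without the preceding context.
    Returns (token_index, score) pairs exceeding the threshold.
    """
    flagged = []
    for i, (p, q) in enumerate(zip(ctx_probs, noctx_probs)):
        score = kl_divergence(p, q)
        if score > threshold:
            flagged.append((i, score))
    return flagged


# Toy example over a 3-word vocabulary: token 0 is unaffected by the
# context, while token 1's distribution changes sharply without it.
with_ctx = [[0.9, 0.05, 0.05], [0.1, 0.8, 0.1]]
without_ctx = [[0.9, 0.05, 0.05], [0.6, 0.2, 0.2]]
print(context_sensitive_tokens(with_ctx, without_ctx))
```

In the full framework, step (ii) would then attribute each flagged token back to the specific context tokens that drove the shift, which is where model-internals-based attribution comes in.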