Evaluating Deep Taylor Decomposition for Reliability Assessment in the Wild

2022-05-03

Stephanie Brandl, Daniel Hershcovich, Anders Søgaard

Abstract

We argue that we need to evaluate model interpretability methods 'in the wild', i.e., in situations where professionals make critical decisions, and models can potentially assist them. We present an in-the-wild evaluation of token attribution based on Deep Taylor Decomposition, with professional journalists performing reliability assessments. We find that using this method in conjunction with RoBERTa-Large, fine-tuned on the Gossip Corpus, led to faster and better human decision-making, as well as a more critical attitude toward news sources among the journalists. We present a comparison of human and model rationales, as well as a qualitative analysis of the journalists' experiences with machine-in-the-loop decision making.
