Black-box language model explanation by context length probing

2022-12-30

Ondřej Cífka, Antoine Liutkus

Abstract

The increasingly widespread adoption of large language models has highlighted the need for improving their explainability. We present context length probing, a novel explanation technique for causal language models, based on tracking the predictions of a model as a function of the length of available context, allowing differential importance scores to be assigned to different contexts. The technique is model-agnostic and does not rely on access to model internals beyond computing token-level probabilities. We apply context length probing to large pre-trained language models and offer some initial analyses and insights, including the potential for studying long-range dependencies. The source code and an interactive demo of the method are available.
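To make the idea concrete, below is a minimal sketch of context length probing on top of Hugging Face Transformers. It is not the authors' implementation (their released source code is the reference); the model name (gpt2), the example sentence, the metric (differences in the target token's log-probability), and the naive one-forward-pass-per-context-length loop are all assumptions chosen for illustration.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM works, since the method is model-agnostic
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

text = "The quick brown fox jumps over the lazy dog"
ids = tokenizer(text, return_tensors="pt").input_ids[0]
target_pos = len(ids) - 1  # explain the model's prediction of the final token

def target_logprob(start: int) -> float:
    """Log-probability of the target token given the context ids[start:target_pos]."""
    context = ids[start:target_pos].unsqueeze(0)
    with torch.no_grad():
        logits = model(context).logits[0, -1]
    return torch.log_softmax(logits, dim=-1)[ids[target_pos]].item()

# Track the prediction as a function of context length: one forward pass per
# starting position (inefficient, but strictly black-box and simple).
logprobs = [target_logprob(start) for start in range(target_pos)]

# Differential importance score of the context token at position s: how much
# the target's log-probability changes when that token joins the context.
for s in range((target_pos - 1)):
    score = logprobs[s] - logprobs[s + 1]
    token = tokenizer.decode([int(ids[s])])
    print(f"{token!r}: {score:+.4f}")

A positive score means the token raised the target's probability when added to the context. Because the scores are differences of consecutive log-probabilities, they telescope: summed over any span of positions, they give the total change in log-probability between the shortest and longest contexts considered.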
