Abstraction Alignment: Comparing Model-Learned and Human-Encoded Conceptual Relationships

2024-07-17 · Code Available

Angie Boggust, Hyemin Bang, Hendrik Strobelt, Arvind Satyanarayan


Abstract

While interpretability methods identify a model's learned concepts, they overlook the relationships between concepts that make up its abstractions and inform its ability to generalize to new data. To assess whether models have learned human-aligned abstractions, we introduce abstraction alignment, a methodology to compare model behavior against formal human knowledge. Abstraction alignment externalizes domain-specific human knowledge as an abstraction graph, a set of pertinent concepts spanning levels of abstraction. Using the abstraction graph as a ground truth, abstraction alignment measures the alignment of a model's behavior by determining how much of its uncertainty is accounted for by the human abstractions. By aggregating abstraction alignment across entire datasets, users can test alignment hypotheses, such as which human concepts the model has learned and where misalignments recur. In evaluations with experts, abstraction alignment differentiates seemingly similar errors, improves the verbosity of existing model-quality metrics, and uncovers improvements to current human abstractions.
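To make the core measurement concrete, below is a minimal Python sketch of one plausible way to compute such a score, assuming a single-level abstraction graph given as a child-to-parent mapping. The function names, the `parent` mapping, and the entropy-ratio score are illustrative assumptions, not the paper's exact formulation: the model's class-level probabilities are pooled into their human-encoded parent concepts, and alignment is scored as the fraction of leaf-level uncertainty that the pooling explains away.

```python
import math
from collections import defaultdict

def entropy(probs):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def abstraction_alignment(leaf_probs, parent):
    """Illustrative alignment score (an assumption, not the paper's exact metric):
    the fraction of the model's class-level uncertainty that disappears once
    probability mass is pooled into the parent concepts of an abstraction graph.

    leaf_probs: {class_name: probability}, e.g. a model's softmax output.
    parent:     {class_name: concept_name}, one level of the abstraction graph.
    """
    # Pool each class's probability into its human-encoded parent concept.
    concept_probs = defaultdict(float)
    for cls, p in leaf_probs.items():
        concept_probs[parent[cls]] += p

    h_leaf = entropy(leaf_probs.values())
    h_concept = entropy(concept_probs.values())
    if h_leaf == 0:                      # fully confident prediction: nothing to explain
        return 1.0
    return 1.0 - h_concept / h_leaf      # 1.0 = uncertainty fully accounted for

# A model unsure between "oak" and "maple" is well aligned with a taxonomy
# that groups both under "tree"; the same uncertainty spread across "oak"
# and "car" would not be.
probs = {"oak": 0.5, "maple": 0.5, "car": 0.0}
parent = {"oak": "tree", "maple": "tree", "car": "vehicle"}
print(abstraction_alignment(probs, parent))  # 1.0: entropy vanishes at the concept level
```

In a multi-level abstraction graph, the same pooling step can be applied level by level, yielding a per-level profile of how much uncertainty each layer of human abstraction accounts for.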
