
Identifying Intervenable and Interpretable Features via Orthogonality Regularization

2026-02-04

Moritz Miller, Florent Draye, Bernhard Schölkopf

Abstract

Building on recent progress in fine-tuning language models around a fixed sparse autoencoder (SAE), we disentangle the decoder matrix into nearly orthogonal features. This reduces interference and superposition between the features while keeping performance on the target dataset essentially unchanged. Our orthogonality penalty leads to identifiable features, ensuring the uniqueness of the decomposition. Further, we find that the distance between embedded feature explanations increases with a stricter orthogonality penalty, a desirable property for interpretability. Invoking the Independent Causal Mechanisms principle, we argue that orthogonality promotes modular representations amenable to causal intervention. We empirically show that these increasingly orthogonalized features allow for isolated interventions. Our code is available at https://github.com/mrtzmllr/sae-icm.
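
The abstract describes an orthogonality penalty on the SAE decoder matrix but does not spell out its exact form. The sketch below shows one common way such a penalty can be implemented in PyTorch: a Frobenius-norm penalty on the Gram matrix of column-normalized decoder directions. The decoder shape, the function name `orthogonality_penalty`, the coefficient `ortho_coeff`, and the loss composition are illustrative assumptions, not the authors' implementation.

```python
import torch

def orthogonality_penalty(decoder_weight: torch.Tensor) -> torch.Tensor:
    """Assumed decoder_weight shape: (d_model, n_features), one column per feature."""
    # Normalize each feature direction so the penalty targets angles, not norms.
    D = decoder_weight / decoder_weight.norm(dim=0, keepdim=True).clamp_min(1e-8)
    gram = D.T @ D                                    # (n_features, n_features)
    eye = torch.eye(gram.shape[0], device=gram.device)
    # ||D^T D - I||_F^2: zero iff the feature directions are exactly orthonormal.
    return ((gram - eye) ** 2).sum()

# Hypothetical usage alongside the usual SAE objective:
# loss = recon_loss + l1_coeff * sparsity_loss \
#        + ortho_coeff * orthogonality_penalty(sae.decoder.weight)
```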
