Crafting Large Language Models for Enhanced Interpretability

2024-07-05

Chung-En Sun, Tuomas Oikarinen, Tsui-Wei Weng

Abstract

We introduce the Concept Bottleneck Large Language Model (CB-LLM), a pioneering approach to creating inherently interpretable Large Language Models (LLMs). Unlike traditional black-box LLMs, which rely on post-hoc interpretation methods that offer limited insight into neuron function, CB-LLM sets a new standard with its built-in interpretability, scalability, and ability to provide clear, accurate explanations. This innovation not only advances transparency in language models but also enhances their effectiveness. Our Automatic Concept Correction (ACC) strategy successfully narrows the performance gap with conventional black-box LLMs, positioning CB-LLM as a model that combines the high accuracy of traditional LLMs with the added benefit of clear interpretability -- a feature markedly absent in existing LLMs.
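The concept-bottleneck idea behind CB-LLM can be illustrated with a minimal sketch: a text embedding is first projected onto a small set of human-readable concept scores, and the final label is predicted from those scores alone, so every prediction can be traced back to concept activations. The sketch below is a hypothetical illustration, not the paper's implementation; the `ConceptBottleneckHead` class, its weights, and the example concept names are all assumptions introduced here for clarity.

```python
import numpy as np

class ConceptBottleneckHead:
    """Illustrative concept-bottleneck classifier head (hypothetical sketch,
    not the paper's code). Predictions flow through an interpretable layer
    of named concept scores, which is what makes the model explainable."""

    def __init__(self, concepts, embed_dim, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.concepts = concepts  # human-readable concept names
        # Project embeddings to concept scores, then concepts to class logits.
        self.W_concept = rng.normal(size=(embed_dim, len(concepts)))
        self.W_label = rng.normal(size=(len(concepts), n_classes))

    def forward(self, embedding):
        scores = embedding @ self.W_concept   # interpretable bottleneck
        logits = scores @ self.W_label        # label depends only on concepts
        return scores, logits

    def explain(self, embedding, top_k=2):
        # Return the top-k concepts (by absolute activation) for a prediction.
        scores, _ = self.forward(embedding)
        order = np.argsort(-np.abs(scores))[:top_k]
        return [(self.concepts[i], float(scores[i])) for i in order]

head = ConceptBottleneckHead(
    concepts=["positive tone", "negative tone", "mentions price"],
    embed_dim=8, n_classes=2)
scores, logits = head.forward(np.ones(8))
explanation = head.explain(np.ones(8))
```

Because the label head sees only the concept scores, inspecting `explain(...)` gives a faithful account of which concepts drove the prediction; the paper's ACC strategy then corrects concept scores to close the accuracy gap with standard black-box LLMs.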
