Data Science with LLMs and Interpretable Models

2024-02-22

Sebastian Bordt, Ben Lengerich, Harsha Nori, Rich Caruana


Abstract

Recent years have seen important advances in building interpretable models: machine learning models designed to be easily understood by humans. In this work, we show that large language models (LLMs) are remarkably good at working with interpretable models, too. In particular, we show that LLMs can describe, interpret, and debug Generalized Additive Models (GAMs). Combining the flexibility of LLMs with the breadth of statistical patterns accurately described by GAMs enables dataset summarization, question answering, and model critique. LLMs can also improve the interaction between domain experts and interpretable models, and generate hypotheses about the underlying phenomenon. We release https://github.com/interpretml/TalkToEBM as an open-source LLM-GAM interface.
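
To make the workflow concrete, here is a minimal runnable sketch of the LLM-GAM idea the abstract describes: fit an Explainable Boosting Machine (a GAM from the interpret library), serialize one of its shape functions to plain text, and build a prompt an LLM could answer. The serialization format and prompt wording below are illustrative assumptions, not the TalkToEBM API; see the repository for the actual interface.

```python
# A hedged sketch, not the TalkToEBM API: fit an Explainable Boosting
# Machine (a GAM), turn one shape function into text, and build an LLM prompt.

from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer

# Fit an EBM -- a GAM whose per-feature shape functions we can inspect.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
ebm = ExplainableBoostingClassifier(interactions=0)  # main effects only
ebm.fit(X, y)

# Serialize the first feature's shape function to plain text. For a
# continuous feature, interpret's global explanation exposes bin edges
# in "names" (typically one more entry than "scores"); zip pairs each
# bin with its additive score and truncates safely if the lengths differ.
term = ebm.explain_global().data(0)
graph_as_text = "\n".join(
    f"{lo:.4g} to {hi:.4g}: {s:+.3f}"
    for lo, hi, s in zip(term["names"], term["names"][1:], term["scores"])
)

# An illustrative prompt; the paper's actual prompts live in the repo.
feature = X.columns[0]
prompt = (
    "Below is the shape function of a Generalized Additive Model for the "
    f"feature '{feature}'. Scores are additive contributions to the model's "
    "log-odds output.\n\n"
    f"{graph_as_text}\n\n"
    "Summarize the pattern and point out anything surprising."
)
print(prompt)  # send to the LLM of your choice
```

Rendering the graph as compact text is what lets a general-purpose LLM read, describe, and critique the model; the released TalkToEBM interface packages this kind of serialization and the surrounding conversation.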
