T-FIX: Text-Based Explanations with Features Interpretable to eXperts

2026-03-16

Shreya Havaldar, Weiqiu You, Chaehyeon Kim, Anton Xue, Helen Jin, Marco Gatti, Bhuvnesh Jain, Helen Qu, Amin Madani, Daniel A. Hashimoto, Gary E. Weissman, Rajat Deo, Sameed Khatana, Lyle Ungar, Eric Wong

Abstract

As LLMs are deployed in knowledge-intensive settings (e.g., surgery, astronomy, therapy), users are often domain experts who expect not just answers, but explanations that mirror professional reasoning. However, most automatic evaluations of explanations prioritize plausibility or faithfulness, rather than testing whether an LLM thinks like an expert. Existing approaches to evaluating professional reasoning rely heavily on per-example expert annotation, making such evaluations costly and difficult to scale. To address this gap, we introduce the T-FIX benchmark, spanning seven scientific tasks across three domains, to operationalize expert alignment as a desired attribute of LLM-generated explanations. Our framework enables automatic evaluation of expert alignment, generalizing to unseen explanations and eliminating the need for ongoing expert involvement.