Knowing What You Cannot Explain: Learning to Reject Low-Quality Explanations
Luca Stradiotti, Dario Pesenti, Stefano Teso, Jesse Davis
Abstract
Learning to Reject (LtR) frameworks allow ML models to abstain from uncertain predictions and promote user trust. However, since current LtR strategies focus solely on predictive performance, they completely neglect explanation quality. Low-quality explanations -- whether they inaccurately reflect the model's reasoning or fail to satisfy users -- can severely compromise trust assessments and induce over-reliance on incorrect predictions. We argue that models should abstain from making a prediction when they cannot offer a satisfactory explanation for it, and we introduce a framework for learning to reject low-quality explanations (LtX) in which predictors are equipped with a rejector that evaluates explanation quality. Focusing on popular attribution techniques, we propose REX (REjector of low-quality eXplanations), which learns a rejector from explanation-quality labels that combine machine-side judgments with explicit human annotations. Our empirical evaluation demonstrates that REX outperforms popular LtR strategies as well as baselines that rely on isolated explanation metrics. Finally, to support future research, we publicly release a novel, larger-scale dataset of 1,050 human-annotated machine explanations.
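To make the abstract's decision rule concrete, here is a minimal sketch of how an LtX-style rejector gates a prediction on the quality of its explanation rather than on predictive confidence. This is not the paper's implementation: all names (`predict_or_reject`, `explainer`, `quality_rejector`) are hypothetical placeholders, and the rejector is simply assumed to have been trained on explanation-quality labels of the kind the abstract describes.

```python
# Illustrative sketch only -- not the authors' code. Assumes:
#   model            : fitted classifier with a predict() method
#   explainer        : callable (model, x) -> feature-attribution vector
#   quality_rejector : callable (x, explanation) -> quality score in [0, 1],
#                      assumed trained on labels combining machine-side
#                      judgments with human annotations
import numpy as np

def predict_or_reject(model, explainer, quality_rejector, x, threshold=0.5):
    """Return (prediction, explanation), or (None, explanation) on abstention."""
    explanation = explainer(model, x)           # e.g., an attribution map for x
    quality = quality_rejector(x, explanation)  # learned explanation-quality score
    if quality < threshold:
        # Abstain: the model cannot offer a satisfactory explanation.
        return None, explanation
    return model.predict(np.atleast_2d(x))[0], explanation
```

The key contrast with standard LtR is the gating signal: a confidence-based rejector would threshold something like `model.predict_proba(x).max()`, whereas the sketch above thresholds a learned score of the explanation itself.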