
Informative Semi-Factuals for XAI: The Elaborated Explanations that People Prefer

2026-03-18

Saugat Aryal, Mark T. Keane


Abstract

Recently in eXplainable AI (XAI), "even if" explanations -- so-called semi-factuals -- have emerged as a popular strategy for explaining how a predicted outcome can remain the same even when certain input features are altered. For example, in the commonly-used banking app scenario, a semi-factual explanation could inform customers about better options or alternatives for their successful application by saying "Even if you asked for double the loan amount, you would still be accepted". Most semi-factual XAI algorithms focus on finding maximal value-changes to a single key feature that do not alter the outcome (unlike counterfactual explanations, which often find minimal value-changes to several features that do alter the outcome). However, no current semi-factual method explains why these extreme value-changes leave the outcome unchanged; for example, a more informative semi-factual could tell the customer that it is their good credit score that allows them to borrow double their requested loan. In this work, we advance a new algorithm -- the informative semi-factuals (ISF) method -- that generates more elaborated explanations, supplementing semi-factuals with information about additional hidden features that influence an automated decision. Experimental results on benchmark datasets show that the ISF method computes semi-factuals that are both informative and of high quality on key metrics. Furthermore, a user study shows that people prefer these elaborated explanations over the simpler semi-factual explanations generated by current methods.
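The core idea of "maximal value-changes to a single key feature that do not alter the outcome" can be illustrated with a minimal sketch. This is not the authors' ISF algorithm; `toy_model`, its weights, and the candidate grid are all hypothetical, and a brute-force scan stands in for whatever search the paper's method actually uses.

```python
# Hedged sketch: brute-force search for a single-feature semi-factual.
# Assumptions: a hypothetical loan classifier `toy_model` over
# x = [loan_amount, credit_score], and a hand-picked candidate grid.
import numpy as np

def toy_model(x):
    # Hypothetical classifier: approve (1) when the credit score outweighs
    # the requested loan amount by a fixed margin.
    return 1 if 0.01 * x[1] - 0.00001 * x[0] > 0.5 else 0

def max_semifactual(model, x, feature, grid):
    """Scan candidate values for `feature`; among those that preserve the
    model's prediction, return the value farthest from the original."""
    base = model(x)
    best, best_dist = None, -1.0
    for v in grid:
        x2 = list(x)
        x2[feature] = v
        if model(x2) == base:  # outcome unchanged -> valid semi-factual
            d = abs(v - x[feature])
            if d > best_dist:
                best, best_dist = v, d
    return best

x = [10_000, 80]                           # loan amount, credit score
grid = np.linspace(10_000, 100_000, 91)    # candidate loan amounts
v = max_semifactual(toy_model, x, 0, grid)
print(v)  # largest loan amount still approved for this toy model
```

With these toy weights the customer could nearly triple the requested loan and still be approved, which is exactly the kind of "even if" statement a semi-factual expresses; an informative version would add that the high credit score (feature 1) is what makes this possible.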