Counterfactual Instances Explain Little

2021-09-20

Adam White, Artur d'Avila Garcez


Abstract

In many applications, it is important to be able to explain the decisions of machine learning systems. An increasingly popular approach has been to provide counterfactual instance explanations. These specify close possible worlds in which, contrary to the facts, a person receives their desired decision from the machine learning system. This paper draws on literature from the philosophy of science to argue that a satisfactory explanation must consist of both counterfactual instances and a causal equation (or system of equations) that supports those instances. We show that counterfactual instances by themselves explain little. We further illustrate how explainable AI methods that provide both causal equations and counterfactual instances can successfully explain machine learning predictions.
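To make the distinction concrete, the following is a minimal, illustrative sketch (not taken from the paper): for a hypothetical linear credit-scoring model, the closest counterfactual instance that flips a denial lies on the decision boundary and has a closed form. The weights, feature names, and applicant values are assumptions for illustration only; the point is that the counterfactual instance alone is just a pair of numbers, while the causal equation `w·x + b` is what makes it intelligible.

```python
import numpy as np

# Hypothetical linear credit model: approve the loan if w·x + b >= 0.
# Features, weights, and the applicant's values are illustrative assumptions.
w = np.array([0.4, 0.6])   # weights for (income, credit_history)
b = -1.0
x = np.array([1.0, 0.5])   # this applicant: 0.4 + 0.3 - 1.0 = -0.3 < 0, so denied

def decision(x):
    """The causal equation underlying the decision."""
    return w @ x + b >= 0

# Closest (L2) counterfactual instance that flips the decision: for a linear
# model it is the projection onto the boundary, x' = x - (w·x + b) w / ||w||^2.
score = w @ x + b
x_cf = x - score * w / (w @ w)

print(decision(x))        # False: loan denied
print(decision(x_cf))     # True: the counterfactual receives the desired decision
print(np.round(x_cf, 3))  # the counterfactual instance itself
```

On its own, the counterfactual instance is just the vector printed on the last line; without the equation `w·x + b >= 0` one cannot tell why moving there changes the outcome, which is the gap the abstract argues a satisfactory explanation must fill.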
