Argumentation-based Agents that Explain their Decisions

2020-09-13

Mariela Morveli-Espinoza, Ayslan Possebom, Cesar Augusto Tacla

Abstract

Explainable Artificial Intelligence (XAI) systems, including intelligent agents, must be able to explain their internal decisions, behaviours, and the reasoning that produces their choices to the humans (or other systems) with which they interact. In this paper, we focus on how an extended model of BDI (Beliefs-Desires-Intentions) agents can generate explanations about their reasoning, specifically about the goals they decide to commit to. Our proposal is based on argumentation theory: we use arguments to represent the reasons that lead an agent to make a decision, and we use argumentation semantics to determine which arguments (reasons) are acceptable. We propose two types of explanations: partial and complete. We apply our proposal to a rescue-robot scenario.
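To illustrate the kind of machinery the abstract refers to, the sketch below computes acceptable arguments in an abstract argumentation framework under the grounded semantics (iterating the characteristic function to its least fixed point). This is a generic illustration of argumentation semantics, not the paper's actual model; the argument names and attack relation are hypothetical.

```python
def grounded_extension(arguments, attacks):
    """Return the grounded extension of the framework (arguments, attacks).

    Iterates F(S) = {a | every attacker of a is attacked by some member
    of S} starting from the empty set until a fixed point is reached.
    `attacks` is a set of (attacker, attacked) pairs.
    """
    attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}
    extension = set()
    while True:
        # An argument is acceptable w.r.t. `extension` if each of its
        # attackers is itself attacked by some argument in `extension`.
        new = {a for a in arguments
               if all(any((d, b) in attacks for d in extension)
                      for b in attackers[a])}
        if new == extension:
            return extension
        extension = new

# Hypothetical example: a attacks b, b attacks c.
# The unattacked argument a defends c, so the grounded extension is {a, c}.
args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(args, atts)))  # ['a', 'c']
```

In a BDI setting along the lines the abstract describes, the accepted arguments would correspond to the reasons an agent can cite when explaining why it committed to a goal.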