Natural Language Generation Challenges for Explainable AI
2019-11-20 · WS 2019
Ehud Reiter
Abstract
Good-quality explanations of artificial intelligence (XAI) reasoning must be written (and evaluated) for an explanatory purpose, targeted towards their readers, have a good narrative and causal structure, and highlight where uncertainty and data quality affect the AI output. I discuss these challenges from a Natural Language Generation (NLG) perspective, and highlight four specific NLG for XAI research challenges.