A Deep Generative XAI Framework for Natural Language Inference Explanations Generation

2021-11-16 · ACL ARR November 2021

Anonymous

Abstract

Explainable artificial intelligence with natural language explanations (Natural-XAI) aims to produce human-readable explanations as evidence for AI decision-making. This evidence can enhance human trust in and understanding of AI systems and contribute to AI explainability and transparency. However, existing approaches generate only a single explanation per decision. In this paper, we conduct experiments with the state-of-the-art Transformer architecture and explore the generation of multiple explanations using a public benchmark dataset, e-SNLI (Camburu et al., 2018). We propose INITIATIVE, a novel deep generative Natural-XAI framework (expla*In* a*N*d pred*I*c*T* w*I*th contextu*A*l condi*TI*onal *V*ariational auto*E*ncoder) that generates natural language explanations and makes a prediction at the same time. Our method achieves competitive or better performance against state-of-the-art baseline models on both generation (a 4.7% improvement in BLEU score) and prediction (a 4.4% improvement in accuracy). Our work can serve as a solid deep generative baseline for future Natural-XAI research. Our code will be made publicly available on GitHub upon paper acceptance.
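The abstract's core component is a conditional variational autoencoder (CVAE), whose stochastic latent variable is what enables sampling multiple explanations for one input. The sketch below illustrates only the generic CVAE reparameterization step and closed-form KL term, not the paper's actual INITIATIVE architecture; all shapes and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, logvar, rng):
    # Sample z = mu + sigma * eps with eps ~ N(0, I); the noise is factored
    # out so that mu and logvar remain differentiable in a real framework.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_divergence(mu, logvar):
    # Closed-form KL(q(z | input, context) || N(0, I)), averaged over batch.
    return -0.5 * np.mean(np.sum(1.0 + logvar - mu**2 - np.exp(logvar), axis=1))

# Toy "encoder" output: latent statistics conditioned on the premise/hypothesis.
mu = np.zeros((4, 16))      # batch of 4 examples, latent dimension 16
logvar = np.zeros((4, 16))  # log-variance 0 => unit variance

z = reparameterize(mu, logvar, rng)
print(z.shape)                      # (4, 16)
print(kl_divergence(mu, logvar))    # 0.0 (stats already match the prior)
```

Drawing several `z` samples from the same `mu`/`logvar` and decoding each one is what would yield multiple distinct explanations for a single premise-hypothesis pair.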
