The Natural Language Pipeline, Neural Text Generation and Explainability
2020-11-01 · ACL (NL4XAI, INLG) 2020
Juliette Faille, Albert Gatt, Claire Gardent
Abstract
End-to-end encoder-decoder approaches to data-to-text generation are often black boxes whose predictions are difficult to explain. Breaking the end-to-end model up into sub-modules is a natural way to address this problem, and the traditional pre-neural Natural Language Generation (NLG) pipeline provides a framework for such a decomposition. We survey recent papers that integrate traditional NLG sub-modules into neural approaches and analyse their explainability. Our survey is a first step towards building explainable neural NLG models.