
Towards making NLG a voice for interpretable Machine Learning

2018-11-01 · WS 2018

James Forrest, Somayajulu Sripada, Wei Pang, George Coghill


Abstract

This paper presents a study of the issues involved in using NLG to humanise explanations from LIME, a popular interpretable machine learning framework. Our study shows that the self-reported rating of the NLG explanation was higher than that of a non-NLG explanation. However, when tested for comprehension, the results were less clear-cut, showing the need for further studies to uncover the factors responsible for high-quality NLG explanations.
