
Tackling Hallucinations in Neural Chart Summarization

2023-08-01

Saad Obaid ul Islam, Iza Škrjanec, Ondřej Dušek, Vera Demberg


Abstract

Hallucinations in text generation occur when the system produces text that is not grounded in the input. In this work, we tackle the problem of hallucinations in neural chart summarization. Our analysis shows that the target side of chart summarization training datasets often contains additional information, leading to hallucinations. We propose a natural language inference (NLI) based method to preprocess the training data and show through human evaluation that our method significantly reduces hallucinations. We also find that shortening long-distance dependencies in the input sequence and adding chart-related information such as titles and legends improves the overall performance.
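The general idea of NLI-based preprocessing can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: a real system would score each target-side sentence with a pretrained NLI model (premise = linearized chart data, hypothesis = summary sentence) and keep only sentences judged entailed. Here a toy token-overlap heuristic stands in for the NLI entailment score, and all names (`entailment_score`, `filter_summary`, the threshold value) are illustrative assumptions.

```python
def entailment_score(premise: str, hypothesis: str) -> float:
    """Placeholder for an NLI model's entailment probability.

    A real implementation would call a pretrained NLI classifier;
    this toy stand-in returns the fraction of hypothesis tokens
    that also occur in the premise.
    """
    premise_tokens = set(premise.lower().split())
    hypothesis_tokens = set(hypothesis.lower().split())
    if not hypothesis_tokens:
        return 0.0
    return len(premise_tokens & hypothesis_tokens) / len(hypothesis_tokens)


def filter_summary(chart_data: str, summary: str, threshold: float = 0.5) -> str:
    """Drop summary sentences that are not grounded in the chart data."""
    sentences = [s.strip() for s in summary.split(".") if s.strip()]
    kept = [s for s in sentences if entailment_score(chart_data, s) >= threshold]
    return ". ".join(kept) + ("." if kept else "")


# Example: the second sentence has no support in the chart data and is removed.
chart = "title sales by year 2020 100 2021 150 2022 200"
summary = "Sales rose from 100 in 2020 to 200 in 2022. The CEO praised the team."
print(filter_summary(chart, summary))
# → Sales rose from 100 in 2020 to 200 in 2022.
```

Applied over a whole training set, this kind of filter removes target-side content that cannot be verified against the input, so the model is never trained to produce it.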
