
Referring Expression Generation in Visually Grounded Dialogue with Discourse-aware Comprehension Guiding

2024-09-09 · INLG 2024 · Code Available

Bram Willemsen, Gabriel Skantze


Abstract

We propose an approach to referring expression generation (REG) in visually grounded dialogue that is meant to produce referring expressions (REs) that are both discriminative and discourse-appropriate. Our method is a two-stage process. First, we model REG as a text- and image-conditioned next-token prediction task: REs are autoregressively generated based on their preceding linguistic context and a visual representation of the referent. Second, we propose the use of discourse-aware comprehension guiding as part of a generate-and-rerank strategy, through which candidate REs generated with our REG model are reranked based on their discourse-dependent discriminatory power. Results from our human evaluation indicate that our proposed two-stage approach is effective in producing discriminative REs, with reranked REs achieving higher text-image retrieval accuracy than those generated using greedy decoding.
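The generate-and-rerank strategy described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `toy_generator` and `toy_scorer` are hypothetical stand-ins for the text- and image-conditioned REG model and the discourse-aware comprehension model, and the scoring heuristic is invented for demonstration.

```python
from typing import Callable, List

def generate_and_rerank(
    generate_candidates: Callable[[str], List[str]],
    comprehension_score: Callable[[str, str], float],
    context: str,
) -> str:
    """Generate candidate referring expressions (REs), then rerank them
    by how well a comprehension model would resolve each candidate to
    the intended referent given the preceding discourse context."""
    candidates = generate_candidates(context)
    # Select the candidate with the highest discourse-dependent
    # discriminatory power, as judged by the comprehension model.
    return max(candidates, key=lambda re_: comprehension_score(context, re_))

# Hypothetical stand-ins: a real system would sample candidates from an
# image- and text-conditioned language model and score them with a
# reference-resolution (comprehension) model over candidate referents.
def toy_generator(context: str) -> List[str]:
    return ["the dog", "the small brown dog", "it"]

def toy_scorer(context: str, re_: str) -> float:
    # Toy heuristic: longer, more specific REs score higher.
    return float(len(re_.split()))

print(generate_and_rerank(toy_generator, toy_scorer, "A dog and a cat."))
# prints "the small brown dog"
```

In this sketch, greedy decoding would correspond to taking the generator's single most likely output directly; the reranking step instead lets the comprehension model's judgment decide among multiple candidates.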
