
Associative Conversation Model: Generating Visual Information from Textual Information

2018-01-01 · ICLR 2018

Yoichi Ishibashi, Hisashi Miyamori


Abstract

In this paper, we propose the Associative Conversation Model, which generates visual information from textual information and uses it for sentence generation, allowing a dialogue system to exploit visual information without image input. In Neural Machine Translation research, several studies generate translated sentences from both images and sentences and show that visual information improves translation performance. However, such image-based sentence generation algorithms cannot be applied to most dialogue systems, since text-based dialogue systems accept only text input. Our approach generates (associates) visual information from the input text and generates a response using a context vector that fuses the associated visual information with the textual information of the sentence. A comparative experiment between our proposed model and a model without association showed that our model generates useful sentences by associating visual information related to the input sentences. Furthermore, an analysis of visual association showed that our proposed model generates (associates) visual information that is effective for sentence generation.
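The association-and-fusion idea described in the abstract can be sketched as follows. This is a minimal toy illustration, not the paper's actual architecture: the dimensions, the linear association map, the `tanh` nonlinearity, and concatenation-based fusion are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

TEXT_DIM, VIS_DIM = 8, 6  # toy dimensions (assumed, not from the paper)

# Hypothetical association step: a learned linear map projecting the
# textual context vector into an "associated" visual feature space.
W_assoc = rng.normal(size=(VIS_DIM, TEXT_DIM))

def associate(text_vec):
    """Generate (associate) a visual vector from a textual context vector."""
    return np.tanh(W_assoc @ text_vec)

def fuse(text_vec, visual_vec):
    """Fuse textual and associated visual information into one context vector."""
    return np.concatenate([text_vec, visual_vec])

text_ctx = rng.normal(size=TEXT_DIM)   # e.g. an encoder's sentence vector
visual_ctx = associate(text_ctx)       # no image input is needed here
fused = fuse(text_ctx, visual_ctx)     # would feed the response decoder

print(fused.shape)  # (14,)
```

The key point of the sketch is that the visual vector is produced from text alone, so the fused context vector is available even when the dialogue system receives no image.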
