SOTAVerified

Neural Self Talk: Image Understanding via Continuous Questioning and Answering

2015-12-10

Yezhou Yang, Yi Li, Cornelia Fermuller, Yiannis Aloimonos


Abstract

In this paper we consider the problem of continuously discovering image contents by actively asking image-based questions and subsequently answering the questions being asked. The key components are a Visual Question Generation (VQG) module and a Visual Question Answering (VQA) module, both built from Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs). Given a dataset that contains images, questions, and their answers, both modules are trained at the same time, the difference being that VQG takes the images as input and the corresponding questions as output, while VQA takes images and questions as input and the corresponding answers as output. We evaluate the self-talk process subjectively using Amazon Mechanical Turk, which shows the effectiveness of the proposed method.
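The alternating generate-then-answer loop described in the abstract can be sketched as follows. The stand-in VQG and VQA callables below are hypothetical lookup-table stubs for illustration only; in the paper both modules are trained CNN+RNN networks.

```python
# Minimal sketch of the "self talk" loop: repeatedly generate a question
# about an image (VQG) and answer it (VQA). The models here are toy
# stand-ins (assumption), not the paper's trained networks.

def self_talk(image, vqg, vqa, n_rounds=5):
    """Alternate question generation and answering to enumerate image content."""
    dialogue = []
    asked = set()
    for _ in range(n_rounds):
        question = vqg(image, asked)      # VQG: image -> next question
        if question is None:
            break                          # nothing new left to ask
        answer = vqa(image, question)      # VQA: (image, question) -> answer
        dialogue.append((question, answer))
        asked.add(question)
    return dialogue

# Toy stand-ins for the two trained modules (illustrative assumption).
def toy_vqg(image, asked):
    for q in image["questions"]:
        if q not in asked:
            return q
    return None

def toy_vqa(image, question):
    return image["qa"].get(question, "unknown")

image = {
    "questions": ["what is in the image?", "what color is the dog?"],
    "qa": {
        "what is in the image?": "a dog",
        "what color is the dog?": "brown",
    },
}
print(self_talk(image, toy_vqg, toy_vqa))
```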

Tasks

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| COCO Visual Question Answering (VQA) real images 1.0 open ended | Max (Yang, 2015) | BLEU-1 | 59.4 | — | Unverified |
| COCO Visual Question Answering (VQA) real images 1.0 open ended | Sample (Yang, 2015) | BLEU-1 | 38.8 | — | Unverified |
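The BLEU-1 metric reported above is, in its standard form, clipped unigram precision scaled by a brevity penalty. A minimal self-contained sketch (assuming single-reference, whitespace-tokenized sentence-level scoring, which simplifies the corpus-level definition):

```python
import math
from collections import Counter

def bleu1(candidate: str, reference: str) -> float:
    """Sentence-level BLEU-1: clipped unigram precision * brevity penalty."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    if not cand:
        return 0.0
    ref_counts = Counter(ref)
    # Clip each candidate unigram count by its count in the reference.
    clipped = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    precision = clipped / len(cand)
    # Brevity penalty: penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

print(bleu1("a man riding a horse", "a man riding a horse"))  # 1.0
print(bleu1("a man", "a man riding a horse"))                 # exp(-1.5) ~ 0.223
```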

Reproductions