Where To Look: Focus Regions for Visual Question Answering

2015-11-23 · CVPR 2016

Kevin J. Shih, Saurabh Singh, Derek Hoiem


Abstract

We present a method that learns to answer visual questions by selecting image regions relevant to the text-based query. Our method exhibits significant improvements in answering questions such as "what color," where it is necessary to evaluate a specific location, and "what room," where it selectively identifies informative image regions. Our model is tested on the VQA dataset, which is, to our knowledge, the largest human-annotated visual question answering dataset.
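The region-selection idea in the abstract can be illustrated with a minimal sketch: score each image region by its relevance to a question embedding, softmax-normalize the scores into attention weights, and pool the region features by those weights. This is an assumption-laden toy (plain dot-product scoring, plain lists instead of a tensor library, and the function name `attend` is invented here), not the authors' actual architecture.

```python
import math

def attend(region_feats, query_vec):
    """Question-guided soft attention over image regions (illustrative sketch).

    region_feats: list of K feature vectors, one per image region.
    query_vec: embedding of the question text, same dimension as each region.
    Returns the attention-pooled feature vector and the per-region weights.
    """
    # Relevance score: dot product between each region and the query.
    scores = [sum(f * q for f, q in zip(feat, query_vec)) for feat in region_feats]
    # Softmax over regions (max-shifted for numerical stability).
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Weighted sum of region features: relevant regions dominate the answer input.
    dim = len(region_feats[0])
    pooled = [sum(w * feat[d] for w, feat in zip(weights, region_feats))
              for d in range(dim)]
    return pooled, weights
```

For a "what color" question, the query embedding would score the queried object's region highly, so the pooled feature is dominated by that location rather than the whole image.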
