
Visual Question Answering using Deep Learning: A Survey and Performance Analysis

2019-08-27

Yash Srivastava, Vaishnav Murali, Shiv Ram Dubey, Snehasis Mukherjee


Abstract

The Visual Question Answering (VQA) task combines the challenges of visual and linguistic processing to answer basic 'common sense' questions about given images. Given an image and a question in natural language, a VQA system finds the correct answer using visual elements of the image and inferences drawn from the textual question. In this survey, we cover and discuss the recent datasets released in the VQA domain, which address various question formats and the robustness of machine-learning models. Next, we discuss new deep learning models that have shown promising results on the VQA datasets. Finally, we present and discuss some of the results we computed for the vanilla VQA model, the Stacked Attention Network, and the VQA Challenge 2017 winner model. We also provide a detailed analysis along with the challenges and future research directions.
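The pipeline the abstract describes can be summarized as: encode the image with a CNN, encode the question with a recurrent model, fuse the two feature vectors, and classify over a fixed answer vocabulary. Below is a minimal NumPy sketch of the fusion-and-classification step; all dimensions, weight initializations, and the element-wise-product fusion are illustrative assumptions standing in for the trained encoders and classifier, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the survey)
IMG_DIM, Q_DIM, HID, N_ANSWERS = 2048, 512, 1024, 3000

# Stand-ins for features from a CNN image encoder and an LSTM question encoder
img_feat = rng.standard_normal(IMG_DIM)
q_feat = rng.standard_normal(Q_DIM)

# Project both modalities into a shared hidden space
W_img = rng.standard_normal((HID, IMG_DIM)) * 0.01
W_q = rng.standard_normal((HID, Q_DIM)) * 0.01
h_img = np.tanh(W_img @ img_feat)
h_q = np.tanh(W_q @ q_feat)

# Element-wise product fusion of the two modalities
fused = h_img * h_q

# Linear classifier over a fixed answer vocabulary, then softmax
W_ans = rng.standard_normal((N_ANSWERS, HID)) * 0.01
logits = W_ans @ fused
probs = np.exp(logits - logits.max())
probs /= probs.sum()

answer_idx = int(np.argmax(probs))  # index of the predicted answer
```

Attention-based models such as the Stacked Attention Network replace the single image vector with a spatial grid of region features and weight those regions by the question before fusing, but the final classify-over-answers step is the same.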
