Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding

2018-10-04 · NeurIPS 2018 · Code Available

Kexin Yi, Jiajun Wu, Chuang Gan, Antonio Torralba, Pushmeet Kohli, Joshua B. Tenenbaum

Abstract

We marry two powerful ideas: deep representation learning for visual recognition and language understanding, and symbolic program execution for reasoning. Our neural-symbolic visual question answering (NS-VQA) system first recovers a structural scene representation from the image and a program trace from the question. It then executes the program on the scene representation to obtain an answer. Incorporating symbolic structure as prior knowledge offers three unique advantages. First, executing programs in a symbolic space is more robust to long program traces; our model can solve complex reasoning tasks better, achieving an accuracy of 99.8% on the CLEVR dataset. Second, the model is more data- and memory-efficient: it performs well after learning on a small amount of training data; it can also encode an image into a compact representation, requiring less storage than existing methods for offline question answering. Third, symbolic program execution offers full transparency into the reasoning process; we are thus able to interpret and diagnose each execution step.
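To make the pipeline concrete, below is a minimal Python sketch of the symbolic execution step: a program trace runs over a structural scene representation one operation at a time. The object schema and operator names here (`Obj`, `filter_color`, `filter_shape`, `count`, `query_material`) are illustrative stand-ins, not the paper's exact module set, and the program is hand-written rather than parsed from a question.

```python
# Illustrative sketch of symbolic program execution on a structural scene
# representation, in the spirit of NS-VQA. Attribute schema and operator
# names are assumptions for this example, not the paper's exact modules.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Obj:
    shape: str     # e.g. "cube", "sphere", "cylinder"
    color: str     # e.g. "red", "blue"
    size: str      # e.g. "small", "large"
    material: str  # e.g. "rubber", "metal"

def execute(program: List[Tuple[str, Optional[str]]], scene: List[Obj]):
    """Run a program trace (a list of (op, arg) steps) on the scene.

    Each step either narrows the current working set of objects
    (filter ops) or produces the final answer (count/query ops).
    """
    objs = scene
    for op, arg in program:
        if op == "filter_color":
            objs = [o for o in objs if o.color == arg]
        elif op == "filter_shape":
            objs = [o for o in objs if o.shape == arg]
        elif op == "count":
            return len(objs)
        elif op == "query_material":
            # Assumes earlier filters left exactly one object.
            return objs[0].material
        else:
            raise ValueError(f"unknown op: {op}")
    return objs

# "How many red cubes are there?" might compile to the trace below.
scene = [
    Obj("cube", "red", "large", "rubber"),
    Obj("sphere", "blue", "small", "metal"),
    Obj("cube", "red", "small", "metal"),
]
program = [("filter_color", "red"), ("filter_shape", "cube"), ("count", None)]
print(execute(program, scene))  # -> 2
```

Because each step leaves an inspectable intermediate set of objects, every stage of the reasoning can be printed and diagnosed, which is the transparency advantage the abstract describes.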

Tasks

Visual Question Answering

Benchmark Results

Dataset      | Model                | Metric   | Claimed | Verified | Status
CLEVR        | NS-VQA (1K programs) | Accuracy | 99.8%   | —        | Unverified
CLEVR-Humans | NS-VQA (1K programs) | Accuracy | 67.8%   | —        | Unverified

Reproductions

No reproductions yet. Be the first to reproduce this paper.