
How does task structure shape representations in deep neural networks?

2020-10-09 · NeurIPS Workshop SVRHM 2020

Kushin Mukherjee, Timothy T. Rogers


Abstract

While modern deep convolutional neural networks can be trained to perform object recognition at human levels and learn visual features in the process, humans use vision for a host of tasks beyond object recognition, including drawing, acting, and making propositional statements. To investigate how task structure shapes the representations learned by deep networks, we trained separate models to perform two tasks that are simple for humans: imagery and sketching. Both models encoded a bitmap image with the same encoder architecture but used either a deconvolutional decoder for the imagery task or an LSTM sequence decoder for the sketching task. We find that while both models learn to perform their respective tasks well, the sketcher model learns representations that can be better decoded to recover visual information about an input, including its shape, location, and semantic category, highlighting the importance of output task modality in learning robust visual representations.
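The two-model setup described in the abstract can be sketched in PyTorch. This is an illustrative toy, not the authors' code: the layer sizes, latent dimension, 28×28 input resolution, and the (dx, dy, pen) stroke parameterization of the sketch decoder are all assumptions chosen for a self-contained example. The key structural point from the paper is shared: one convolutional encoder, with either a deconvolutional head (imagery) or an LSTM sequence head (sketching).

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared conv encoder: bitmap image -> latent vector."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1),   # 28x28 -> 14x14
            nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1),  # 14x14 -> 7x7
            nn.ReLU(),
        )
        self.fc = nn.Linear(32 * 7 * 7, latent_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class DeconvDecoder(nn.Module):
    """Imagery head: latent vector -> reconstructed bitmap."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 32 * 7 * 7)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),  # 7x7 -> 14x14
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),   # 14x14 -> 28x28
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.deconv(self.fc(z).view(-1, 32, 7, 7))

class SketchDecoder(nn.Module):
    """Sketching head: latent vector conditions an LSTM that emits a
    stroke sequence; each step is a hypothetical (dx, dy, pen) triple."""
    def __init__(self, latent_dim=128, hidden=64, steps=20):
        super().__init__()
        self.steps = steps
        self.init_h = nn.Linear(latent_dim, hidden)
        self.lstm = nn.LSTM(3, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 3)

    def forward(self, z):
        b = z.size(0)
        h0 = torch.tanh(self.init_h(z)).unsqueeze(0)  # latent sets initial state
        state = (h0, torch.zeros_like(h0))
        inp = torch.zeros(b, 1, 3)                    # start-of-sketch token
        strokes = []
        for _ in range(self.steps):                   # autoregressive rollout
            out, state = self.lstm(inp, state)
            step = self.out(out)
            strokes.append(step)
            inp = step
        return torch.cat(strokes, dim=1)              # (b, steps, 3)

enc = Encoder()
imagery, sketcher = DeconvDecoder(), SketchDecoder()
x = torch.rand(2, 1, 28, 28)
z = enc(x)
print(z.shape, imagery(z).shape, sketcher(z).shape)
```

Both heads read from the same latent space, so representational differences between the trained "imager" and "sketcher" models can be attributed to the output task modality rather than the encoder architecture.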
