
Exploiting Attention to Reveal Shortcomings in Memory Models

2018-11-01 · WS 2018

Kaylee Burns, Aida Nematzadeh, Erin Grant, Alison Gopnik, Tom Griffiths


Abstract

The decision-making processes of deep networks are difficult to understand, and while their accuracy often improves with increased architectural complexity, so too does their opacity. Practical use of machine learning models, especially for question-answering applications, demands systems that are interpretable. We analyze the attention of a memory network model to reconcile its contradictory performance on a challenging question-answering dataset inspired by theory-of-mind experiments. We equate success on questions with task classification, which explains not only test-time failures but also how well the model generalizes to new training conditions.
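The attention the abstract refers to is the soft read over memory slots used in end-to-end memory networks: a softmax over query–memory similarity scores, whose weights can be inspected to see which stored sentences the model attends to. A minimal sketch of that read step (the function name, dimensions, and toy data are illustrative, not from the paper):

```python
import numpy as np

def memory_attention(query, memory_keys):
    """Soft attention over memory slots.

    query:       (d,) embedded question vector
    memory_keys: (num_slots, d) embedded memory sentences
    Returns a probability distribution over the slots; examining
    these weights is the kind of analysis the abstract describes.
    """
    scores = memory_keys @ query          # similarity to each slot
    scores = scores - scores.max()        # shift for numerical stability
    weights = np.exp(scores)
    return weights / weights.sum()        # softmax normalization

# Toy example: 4 memory slots with 3-dimensional embeddings.
rng = np.random.default_rng(0)
memory = rng.normal(size=(4, 3))
question = rng.normal(size=3)
attn = memory_attention(question, memory)
print(attn)  # one weight per memory slot, summing to 1
```

A model that answers correctly but places its attention weight on irrelevant memory slots is succeeding for the wrong reasons, which is the sort of mismatch between accuracy and internal behavior the analysis targets.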
