Compositional Generalization in Grounded Language Learning via Induced Model Sparsity

2022-07-06 · NAACL 2022 · Code Available

Sam Spilsbury, Alexander Ilin


Abstract

We provide a study of how induced model sparsity can help achieve compositional generalization and better sample efficiency in grounded language learning problems. We consider simple language-conditioned navigation problems in a grid world environment with disentangled observations. We show that standard neural architectures do not always yield compositional generalization. To address this, we design an agent that contains a goal identification module that encourages sparse correlations between words in the instruction and attributes of objects, composing them together to find the goal. The output of the goal identification module is the input to a value iteration network planner. Our agent maintains a high level of performance on goals containing novel combinations of properties even when learning from a handful of demonstrations. We examine the internal representations of our agent and find the correct correspondences between words in its dictionary and attributes in the environment.
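The goal identification idea in the abstract, matching instruction words to disentangled object attributes through a sparse correlation matrix and composing the matches to locate the goal, can be illustrated with a toy sketch. This is not the authors' implementation: the vocabulary, attribute set, and the idealized identity correlation matrix `W` below are all hypothetical stand-ins (in the paper, sparsity in the learned correlations is induced during training rather than assumed).

```python
import numpy as np

# Hypothetical toy vocabulary and disentangled attributes (illustrative only).
VOCAB = ["red", "blue", "ball", "box"]
ATTRS = ["color:red", "color:blue", "shape:ball", "shape:box"]

# Learned word-attribute correlation matrix. Induced sparsity would push most
# entries toward zero; here we assume the idealized sparse solution where each
# word correlates with exactly one attribute.
W = np.eye(len(VOCAB))

def goal_scores(instruction, grid_attrs):
    """Score each grid cell as the goal.

    instruction: list of words from VOCAB
    grid_attrs:  (H, W, n_attrs) binary attribute observations per cell
    """
    word_ids = [VOCAB.index(w) for w in instruction]
    attr_weights = W[word_ids]              # (n_words, n_attrs): sparse rows
    per_word = grid_attrs @ attr_weights.T  # (H, W, n_words): per-word match
    # Compose by product: a cell scores highly only if it matches ALL words.
    return per_word.prod(axis=-1)           # (H, W)

# 2x2 grid: a red ball at (0, 0) and a blue box at (1, 1).
grid = np.zeros((2, 2, len(ATTRS)))
grid[0, 0, [0, 2]] = 1  # color:red, shape:ball
grid[1, 1, [1, 3]] = 1  # color:blue, shape:box

scores = goal_scores(["red", "ball"], grid)
```

In the full agent, a map like `scores` would serve as the goal input to the value iteration network planner; here it simply peaks at the cell whose attributes match every instruction word.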
