
Simpler Context-Dependent Logical Forms via Model Projections

2016-06-16 · ACL 2016

Reginald Long, Panupong Pasupat, Percy Liang


Abstract

We consider the task of learning a context-dependent mapping from utterances to denotations. With only denotations at training time, we must search over a combinatorially large space of logical forms, which is even larger with context-dependent utterances. To cope with this challenge, we perform successive projections of the full model onto simpler models that operate over equivalence classes of logical forms. Though less expressive, we find that these simpler models are much faster and can be surprisingly effective. Moreover, they can be used to bootstrap the full model. Finally, we collect three new context-dependent semantic parsing datasets and develop a new left-to-right parser.
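The core idea of searching over equivalence classes induced by a projection can be sketched as follows. This is a minimal illustration with hypothetical names, not the authors' code; it assumes the projection maps each candidate logical form to its denotation, so forms that denote the same value collapse into one class and the search space shrinks accordingly.

```python
# Hypothetical sketch: collapse a large space of candidate logical forms
# into equivalence classes under a projection function, then search over
# one representative per class.

from collections import defaultdict
from typing import Callable, Hashable, Iterable, List, TypeVar

LogicalForm = TypeVar("LogicalForm")


def project_to_classes(
    candidates: Iterable[LogicalForm],
    projection: Callable[[LogicalForm], Hashable],
) -> List[LogicalForm]:
    """Group candidates by projection(z); keep one representative per class.

    A coarser projection (e.g., logical form -> denotation) yields fewer
    classes and hence a much smaller search space, at the cost of
    expressiveness.
    """
    classes = defaultdict(list)
    for z in candidates:
        classes[projection(z)].append(z)
    # One representative per equivalence class.
    return [members[0] for members in classes.values()]


# Toy example: project arithmetic expressions onto their values, so
# "1+2" and "3" fall into the same class and are interchangeable
# for search purposes.
candidates = ["1+2", "3", "2+2", "4", "1+3"]
reps = project_to_classes(candidates, projection=lambda z: eval(z))
print(reps)  # ['1+2', '2+2'] -- one expression per distinct value
```

The coarse model's surviving representatives could then seed the beam of the full, more expressive model, which is the bootstrapping role the abstract describes.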
