
Learning Semantic Correspondences with Less Supervision

2009-08-01 · Code Available

Percy Liang, Michael Jordan, Dan Klein


Abstract

A central problem in grounded language acquisition is learning the correspondences between a rich world state and a stream of text which references that world state. To deal with the high degree of ambiguity present in this setting, we present a generative model that simultaneously segments the text into utterances and maps each utterance to a meaning representation grounded in the world state. We show that our model generalizes across three domains of increasing difficulty—Robocup sportscasting, weather forecasts (a new domain), and NFL recaps.
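The correspondence problem the abstract describes — segmenting text into utterances and mapping each utterance to a record in the world state — can be illustrated with a deliberately simplified sketch. This is not the paper's generative model: it segments naively at sentence boundaries and aligns each utterance greedily by word overlap, whereas the paper learns segmentation and alignment jointly. The records and text below are invented, weather-style examples.

```python
# Toy sketch of grounded correspondence (NOT the paper's model):
# the "world state" is a list of records; the text is split into
# utterances, and each utterance is mapped to the record whose
# words overlap it the most.

def segment(text):
    """Naive segmentation: split the text at sentence boundaries."""
    return [u.strip() for u in text.split(".") if u.strip()]

def align(utterance, records):
    """Greedy alignment: pick the record with the largest word overlap."""
    words = set(utterance.lower().split())
    def overlap(record):
        return len(words & set(record.lower().split()))
    return max(records, key=overlap)

# Hypothetical weather-forecast world state (field/value records).
records = [
    "temperature min 20 max 31",
    "windSpeed min 5 max 12",
    "skyCover mostly cloudy",
]
text = "Mostly cloudy skies. Highs near 31. Winds up to 12 mph."

for utt in segment(text):
    print(utt, "->", align(utt, records))
```

A model like the paper's improves on this sketch precisely where it fails: overlap scoring cannot decide segment boundaries or resolve ambiguous utterances, which is why segmentation and alignment are modeled jointly and learned with little supervision.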
