
Weakly Supervised Text-to-SQL Parsing through Question Decomposition

2021-12-12 · Findings of NAACL 2022

Tomer Wolfson, Daniel Deutch, Jonathan Berant


Abstract

Text-to-SQL parsers are crucial in enabling non-experts to effortlessly query relational data. Training such parsers, by contrast, generally requires expertise in annotating natural language (NL) utterances with corresponding SQL queries. In this work, we propose a weak supervision approach for training text-to-SQL parsers. We take advantage of the recently proposed question meaning representation called QDMR, an intermediate between NL and formal query languages. Given questions, their QDMR structures (annotated by non-experts or automatically predicted), and the answers, we are able to automatically synthesize SQL queries that are used to train text-to-SQL models. We test our approach by experimenting on five benchmark datasets. Our results show that the weakly supervised models perform competitively with those trained on annotated NL-SQL data. Overall, we effectively train text-to-SQL parsers, while using zero SQL annotations.
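To make the weak supervision idea concrete, below is a minimal sketch of the answer-based filtering step implied by the abstract: candidate SQL queries (synthesized from a question's QDMR structure) are executed against the database, and only those whose result matches the known answer are kept as training pairs. This is an illustrative assumption of how the pieces fit together, not the paper's actual implementation; the function names, database file, and result-comparison logic are all hypothetical.

import sqlite3

def execute_sql(db_path, sql):
    # Run a candidate SQL query against a SQLite database; return rows, or None on error.
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(sql).fetchall()
    except sqlite3.Error:
        return None  # malformed candidates are simply discarded
    finally:
        conn.close()

def filter_candidates(db_path, candidate_sqls, gold_answer):
    # Keep only candidates whose execution result matches the known answer
    # (here compared as sets of values; the actual criterion is an assumption).
    kept = []
    for sql in candidate_sqls:
        rows = execute_sql(db_path, sql)
        if rows is not None and {v for row in rows for v in row} == set(gold_answer):
            kept.append(sql)
    return kept

# Hypothetical usage: in practice the candidates would be derived from QDMR steps.
if __name__ == "__main__":
    candidates = [
        "SELECT name FROM authors WHERE papers > 100",
        "SELECT name FROM authors ORDER BY papers DESC LIMIT 1",
    ]
    surviving = filter_candidates("academic.db", candidates, gold_answer=["Noam Chomsky"])
    print(surviving)  # surviving queries pair with the question as weakly supervised training data

Queries that pass this check can then stand in for gold SQL annotations when training a standard text-to-SQL parser, which is how the approach avoids expert SQL labeling.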
