
Weakly Supervised Text-to-SQL Parsing through Question Decomposition

2022-01-16 · ACL ARR January 2022

Anonymous


Abstract

Text-to-SQL parsers are crucial in enabling non-experts to effortlessly query relational data. Training such parsers, however, generally requires expert annotation of natural language (NL) utterances paired with corresponding SQL queries. In this work, we propose a weak supervision approach for training text-to-SQL parsers. We take advantage of the recently proposed question meaning representation called QDMR, an intermediate between NL and formal query languages. We show that given questions, their QDMR structures (annotated by non-experts or automatically predicted), and their answers, we can automatically synthesize SQL queries that are then used to train text-to-SQL models. Extensive experiments test our approach on five benchmark datasets. The results show that our models perform competitively with those trained on annotated NL-SQL data. Overall, we effectively train text-to-SQL parsers using zero SQL annotations.
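The core idea in the abstract — compile a QDMR step sequence into a candidate SQL query, then keep it only if executing it reproduces the known answer — can be sketched as follows. This is a hypothetical toy illustration, not the paper's actual synthesis procedure: the step format (`select`/`filter` dicts), the `qdmr_to_sql` and `synthesize` names, and the single-table compilation are all assumptions made for brevity.

```python
import sqlite3

def qdmr_to_sql(steps):
    """Naively compile a toy QDMR step list into one SQL query.
    Only 'select' (choose a table) and 'filter' (add a WHERE
    condition) steps are supported in this sketch."""
    table = steps[0]["table"]
    conditions = [s["cond"] for s in steps[1:] if s["op"] == "filter"]
    sql = f"SELECT * FROM {table}"
    if conditions:
        sql += " WHERE " + " AND ".join(conditions)
    return sql

def synthesize(steps, gold_answer, conn):
    """Answer-based filtering: keep the candidate SQL only if
    executing it against the database yields the gold answer."""
    sql = qdmr_to_sql(steps)
    rows = conn.execute(sql).fetchall()
    return sql if rows == gold_answer else None

# Tiny in-memory database for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE papers (title TEXT, year INT)")
conn.execute("INSERT INTO papers VALUES ('A', 2021), ('B', 2022)")

# QDMR-like decomposition of "Which papers are from 2022?"
steps = [{"op": "select", "table": "papers"},
         {"op": "filter", "cond": "year = 2022"}]
print(synthesize(steps, [("B", 2022)], conn))
# The candidate survives because its result matches the gold answer.
```

The resulting question-SQL pairs (with no SQL written by annotators) would then serve as weakly supervised training data for a text-to-SQL parser.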
