AutoQA: From Databases To QA Semantic Parsers With Only Synthetic Training Data
Silei Xu, Sina J. Semnani, Giovanni Campagna, Monica S. Lam
Code
- github.com/stanford-oval/genie-toolkit (official, PyTorch, ★ 204)
- github.com/stanford-oval/genienlp (official, PyTorch, ★ 90)
- github.com/stanford-oval/schema2qa (official, ★ 19)
Abstract
We propose AutoQA, a methodology and toolkit to generate semantic parsers that answer questions on databases, with no manual effort. Given a database schema and its data, AutoQA automatically generates a large set of high-quality questions for training that covers different database operations. It uses automatic paraphrasing combined with template-based parsing to find alternative expressions of an attribute in different parts of speech. It also uses a novel filtered auto-paraphraser to generate correct paraphrases of entire sentences. We apply AutoQA to the Schema2QA dataset and obtain an average logical form accuracy of 62.9% when tested on natural questions, which is only 6.4% lower than a model trained with expert natural language annotations and paraphrase data collected from crowdworkers. To demonstrate the generality of AutoQA, we also apply it to the Overnight dataset. AutoQA achieves 69.8% answer accuracy, 16.4% higher than the state-of-the-art zero-shot models and only 5.2% lower than the same model trained with human data.
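The filtered auto-paraphraser described above can be understood as a generate-then-verify loop: candidate paraphrases are kept only if the parser maps them to the same logical form as the original sentence, discarding rewrites that silently change the meaning. The following is a minimal sketch of that filtering idea; the `paraphrase` and `parse` helpers and all example sentences are hypothetical stand-ins, not AutoQA's actual API or data.

```python
# Sketch of paraphrase filtering. `paraphrase()` and `parse()` are
# hypothetical stand-ins for a neural paraphraser and a semantic parser.

def paraphrase(sentence):
    # Stand-in for a neural paraphraser: returns candidate rewrites.
    return {
        "show me restaurants rated above 4": [
            "list restaurants with a rating higher than 4",
            "show me restaurants",  # drops the constraint -> wrong
        ],
    }.get(sentence, [])

def parse(sentence):
    # Stand-in for the semantic parser: maps text to a logical form.
    forms = {
        "show me restaurants rated above 4": "filter(rating > 4)",
        "list restaurants with a rating higher than 4": "filter(rating > 4)",
        "show me restaurants": "all()",
    }
    return forms.get(sentence)

def filtered_paraphrases(sentence):
    """Keep only paraphrases whose parse matches the original's."""
    target = parse(sentence)
    return [p for p in paraphrase(sentence) if parse(p) == target]

print(filtered_paraphrases("show me restaurants rated above 4"))
```

Here the meaning-preserving rewrite survives the filter, while the paraphrase that dropped the rating constraint is rejected because it parses to a different logical form.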