
Probing Schema Linking Information from Pre-trained Language Models for Text-to-SQL Parsing

2021-11-16 · ACL ARR November 2021

Anonymous


Abstract

The importance of building text-to-SQL parsers that can be applied to new databases has long been acknowledged, and a critical step toward this goal is schema linking, i.e., properly recognizing mentions of unseen columns or tables when generating SQL queries. In this work, we propose a novel framework to elicit relational structures from large-scale pre-trained language models (PLMs) via a probing procedure based on the Poincaré distance metric, and use the induced relations to augment current graph-based parsers for better schema linking. Compared with commonly used rule-based methods for schema linking, we find that probed relations can robustly capture semantic correspondences, even when the surface forms of mentions and entities differ. Moreover, our probing procedure is entirely unsupervised and requires no additional parameters. Extensive experiments show that our framework outperforms strong baselines on three benchmarks and sets new state-of-the-art performance on two of them. Through qualitative analysis, we empirically verify that our probing procedure can indeed find the desired relational structures. For reproducibility, we will release our code and data upon publication of this paper.
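The abstract does not specify how the Poincaré distance is applied, but the general idea of scoring question-mention/schema-item pairs by hyperbolic distance can be sketched as follows. This is a minimal illustration, not the paper's method: the function and variable names (`poincare_distance`, `link_schema`, `mention_vecs`) are hypothetical, and it assumes embeddings have already been projected into the open unit ball.

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    # Distance in the Poincaré ball model of hyperbolic space:
    # d(u, v) = arcosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
    # Both u and v must lie strictly inside the unit ball (||.|| < 1).
    sq_dist = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_dist / max(denom, eps))

def link_schema(mention_vecs, schema_vecs, schema_names):
    # Toy linker: each question mention is linked to the schema item
    # (column/table) whose ball-projected embedding is nearest in
    # Poincaré distance. A real parser would feed such scores into a
    # graph-based encoder rather than take a hard argmin.
    links = []
    for m in mention_vecs:
        dists = [poincare_distance(m, s) for s in schema_vecs]
        links.append(schema_names[int(np.argmin(dists))])
    return links
```

For example, a mention vector near a column's embedding is linked to that column even if their surface forms differ, since the decision depends only on geometric proximity in the ball.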
